Amphipoda
https://en.wikipedia.org/wiki/Amphipoda

Amphipoda is an order of malacostracan crustaceans with no carapace and generally with laterally compressed bodies. Amphipods are mostly detritivores or scavengers, and more than 9,900 amphipod species have been described so far. They are mostly marine animals, but are found in almost all aquatic environments. Some 1,900 species live in fresh water, and the order also includes the terrestrial sandhoppers such as Talitrus saltator and Arcitalitrus sylvaticus.
Etymology and names
The name Amphipoda comes, via Neo-Latin, from the Greek roots amphi- ('on both/all sides') and pous, podos ('foot'), referring to the different kinds of thoracic legs that amphipods possess. This contrasts with the related Isopoda, which have a single kind of thoracic leg. Particularly among anglers, amphipods are known as freshwater shrimp, scuds, or sideswimmers.
Description
Anatomy
The body of an amphipod is divided into 13 segments, which can be grouped into a head, a thorax and an abdomen.
The head is fused to the thorax, and bears two pairs of antennae and one pair of sessile compound eyes. It also carries the mouthparts, but these are mostly concealed.
The thorax and abdomen are usually quite distinct and bear different kinds of legs; they are typically laterally compressed, and there is no carapace. The thorax bears eight pairs of uniramous appendages, the first of which are used as accessory mouthparts; the next four pairs are directed forwards, and the last three pairs are directed backwards. Gills are present on the thoracic segments, and there is an open circulatory system with a heart, using haemocyanin to carry oxygen in the haemolymph to the tissues. The uptake and excretion of salts is controlled by special glands on the antennae.
The abdomen is divided into two parts: the pleosome, which bears swimming legs, and the urosome, which comprises a telson and three pairs of uropods that do not form a tail fan as they do in animals such as true shrimp.
Size
Amphipods are typically small, but the largest recorded living amphipods, photographed at depth in the Pacific Ocean, were far larger. Samples retrieved from the stomach of a black-footed albatross, with an even greater reconstructed length, were assigned to the same species, Alicella gigantea. A study of the Kermadec Trench observed more specimens of A. gigantea, the largest of which was estimated at 34.9 cm long, and collected some for examination, the largest of which measured 27.8 cm long. The smallest known amphipods are minute by comparison. The size of amphipods is limited by the availability of dissolved oxygen, such that amphipods in the high-altitude Lake Titicaca grow only to a fraction of the lengths reached in the much lower-lying Lake Baikal.
Some amphipods exhibit sexual dimorphism. In dimorphic species, males are usually larger than females, although this is reversed in the genus Crangonyx.
Reproduction and life cycle
Amphipods engage in amplexus, a precopulatory guarding behavior in which males will grasp a female with their gnathopods (enlarged appendages used for feeding) and carry the female held against their ventral surface. Amplexus can last from two to over fifteen days, depending on water temperature, and ends when the female molts, at which point her eggs are ready for fertilisation.
Mature females bear a marsupium, or brood pouch, which holds her eggs while they are fertilised, and until the young are ready to hatch. As a female ages, she produces more eggs in each brood. Mortality is around 25–50% for the eggs. There are no larval stages; the eggs hatch directly into a juvenile form, and sexual maturity is generally reached after 6 moults. Some species have been known to eat their own exuviae after moulting.
Diversity and classification
Over 10,500 species of amphipods are currently recognised. Traditionally they were placed in the four suborders Gammaridea, Caprellidea, Hyperiidea, and Ingolfiellidea.
Suborder Gammaridea contained the majority of taxa, including all the freshwater and terrestrial species. In contrast, the small suborder Ingolfiellidea only had 40 species.
Gammaridea had been recognised as a problematic group in need of taxonomic revision. It had no synapomorphies and became the repository for family-level taxa that lacked the synapomorphies of the other suborders. A new classification that breaks up and replaces Gammaridea has been developed in the work of J. K. Lowry and A. A. Myers using cladistic analysis of morphological characters. In 2003, the suborder Corophiidea was reestablished for parts of Gammaridea and for the Caprellidea, which was found to be a derived part of the corophiidean clade and became the infraorder Caprellida. Then in 2013, the new large suborder Senticaudata was split off from the Gammaridea. The Senticaudata, which comprised over half of the known amphipod species, was divided into six infraorders, one of which was the former Corophiidea (including the former Caprellidea as a parvorder). The dismemberment of Gammaridea was completed in 2017 with the establishment of four new suborders in a six-suborder classification: Pseudingolfiellidea, Hyperiidea, Colomastigidea, Hyperiopsidea, Senticaudata and Amphilochidea. At the same time, Ingolfiellidea was split from Amphipoda and reclassified as the order Ingolfiellida. The more recent work of Copilaş-Ciocianu et al. (2020), using analysis of molecular data (including 18S and 28S rRNA sequences and the protein-coding COI and H3 sequences), found general support for three major groups corresponding to the suborders Amphilochidea, Hyperiidea and Senticaudata, but suggests some groups need to move between Amphilochidea and Senticaudata in a taxonomic revision.
The classification listed immediately below, from the rank of suborder down to superfamily, represents the traditional division as given in Martin & Davis (2001), except that superfamilies are recognised here within the Gammaridea. The new classification of Lowry and Myers (2017) is shown in the cladogram.
Fossil record
Amphipods are thought to have originated in the Lower Carboniferous. Despite the group's age, however, the fossil record of the order Amphipoda is meagre, comprising specimens of one species from the Lower Cretaceous (Hauterivian) Weald Clay (United Kingdom) and 12 species dating back only as far as the Upper Eocene, where they have been found in Baltic amber.
Ecology
Amphipods are found in almost all aquatic environments, from fresh water to water with twice the salinity of sea water and even in the Challenger Deep, the deepest known point in the ocean. They are almost always an important component of aquatic ecosystems, often acting as mesograzers. Most species in the suborder Gammaridea are epibenthic, although they are often collected in plankton samples. Members of the Hyperiidea are all planktonic and marine. Many are symbionts of gelatinous animals, including salps, medusae, siphonophores, colonial radiolarians and ctenophores, and most hyperiids are associated with gelatinous animals during some part of their life cycle. Some 1,900 species, or 20% of the total amphipod diversity, live in fresh water or other non-marine waters. Notably rich endemic amphipod faunas are found in the ancient Lake Baikal and waters of the Caspian Sea basin.
The landhoppers of the family Talitridae (which also includes semi-terrestrial and marine animals) are terrestrial, living in damp environments such as leaf litter. Landhoppers have a wide distribution in areas that were formerly part of Gondwana, but have colonised parts of Europe and North America in recent times.
Around 750 species in 160 genera and 30 families are troglobitic, and are found in almost all suitable habitats, but with their centres of diversity in the Mediterranean Basin, southeastern North America and the Caribbean.
In populations found in benthic ecosystems, amphipods play an essential role in controlling brown algae growth. The mesograzing behaviour of amphipods greatly contributes to the suppression of brown algal dominance in the absence of amphipod predators. Amphipods display a strong preference for brown algae in benthic ecosystems, but when mesograzers are removed by predators such as fish, brown algae are able to dominate these communities over green and red algae species.
Parasitism
Compared to other crustacean groups, such as the Isopoda, Rhizocephala or Copepoda, relatively few amphipods are parasitic on other animals. The most notable examples of parasitic amphipods are the whale lice (family Cyamidae). Unlike other amphipods, these are dorso-ventrally flattened, and have large, strong claws, with which they attach themselves to baleen whales. They are the only parasitic crustaceans which cannot swim during any part of their life cycle.
Foraging behaviour
Most amphipods are detritivores or scavengers, with some being grazers of algae, omnivores or predators of small insects and crustaceans. Food is grasped with the front two pairs of legs, which are armed with large claws. More immobile species of amphipods eat higher quantities of less nutritious food rather than actively seeking more nutritious food. This is a type of compensatory feeding. This behaviour may have evolved to minimise predation risk when searching for other foods. Ampithoe longimana, for example, is more sedentary than other species and has been observed to remain on host plants longer. In fact, when presented with both high- and low-nutrition food options, the sedentary A. longimana does not distinguish between the two. Other amphipod species, such as Gammarus mucronatus and Elasmopus levis, which have superior predator avoidance and are more mobile, are better able to pursue different food sources. In species without the compensatory feeding ability, survivorship, fertility, and growth can be strongly negatively affected in the absence of high-quality food. Compensatory feeding may also explain the year-round presence of A. longimana in certain waters. Because algal presence changes throughout the year in certain communities, the evolution of flexible feeding techniques such as compensatory feeding may have been beneficial to survival.
Ampithoe longimana has been observed to avoid certain compounds when foraging for food. In response to this avoidance, species of seaweed such as Dictyopteris membranacea and Dictyopteris hoytii have evolved to produce C11 sulfur compounds and C9 oxo-acids in their bodies as defense mechanisms that specifically deter amphipods rather than other consumers.
The incidence of cannibalism and intraguild predation is relatively high in some species, although adults may decrease cannibalistic behaviour directed at juveniles when they are likely to encounter their own offspring. In addition to age, sex may affect cannibalistic behaviour: males have been observed to cannibalise newly moulted females less than they do newly moulted males.
They have, rarely, been identified as feeding on humans; in Melbourne in 2017 a boy who stood in the sea for about half an hour had severe bleeding from wounds on his legs that did not coagulate easily. This was found to have been caused by "sea fleas" identified as lysianassid amphipods, possibly in a feeding group. Their bites are not venomous and do not cause lasting damage.
Hoe (tool)
https://en.wikipedia.org/wiki/Hoe%20%28tool%29

A hoe is an ancient and versatile agricultural and horticultural hand tool used to shape soil, remove weeds, clear soil, and harvest root crops. Shaping the soil includes piling soil around the base of plants (hilling), digging narrow furrows (drills) and shallow trenches for planting seeds or bulbs. Weeding with a hoe includes agitating the surface of the soil or cutting foliage from roots, and clearing the soil of old roots and crop residues. Hoes for digging and moving soil are used to harvest root crops such as potatoes.
Types
There are many kinds of hoes of varied appearances and purposes. Some offer multiple functions, while others have only a singular and specific purpose.
There are two general types of hoe: draw hoes for shaping soil, and scuffle hoes for weeding and aerating soil.
A draw hoe has a blade set at approximately a right angle to the shaft. The user chops into the ground and then pulls (draws) the blade towards them. Altering the angle of the handle can cause the hoe to dig deeper or more shallowly as the hoe is pulled. A draw hoe can easily be used to cultivate soil to a depth of several centimetres. A typical design of draw hoe, the "eye hoe", has a ring in the head through which the handle is fitted. This design has been used since Roman times.
A scuffle hoe is used to scrape the surface of the soil, loosen the top few centimetres, and to cut the roots of, remove, and disrupt the growth of weeds efficiently. These are primarily of two different designs: the Dutch hoe and the hoop hoe.
A hand hoe is usually a light-weight, short-handled hoe of any type, although it may be used simply to contrast hand-held tools against animal- or machine-pulled tools.
Draw hoes
The typical farming and gardening hoe with a heavy, broad blade and a straight edge is known as the Italian hoe, grub hoe, grubbing hoe, azada (from Spanish), grab hoe, pattern hoe or dago hoe ("dago" being an ethnic slur referring to Italians, Spaniards, or Portuguese).
The ridging hoe, also known as the Warren hoe and the drill hoe, is a triangular (point-down) or heart-shaped draw hoe that is particularly useful for digging narrow furrows ("drills") and shallow trenches for the planting of seeds or bulbs.
The Paxton hoe is similar to the Italian hoe, but with a more rounded rectangular blade.
The flower hoe has a very small blade, rendering it useful for light weeding and aerating around growing plants, so as not to disturb their shallow roots while removing weeds beyond the reach of the gardener's arm.
The hoedad, hoedag or hodag is a hoe-like tool used to plant trees. According to Hartzell (1987, p. 29), "The hoedag [was] originally called skindvic hoe... Hans Rasmussen, legendary contractor and timber farm owner, is credited with having invented the curved, convex, round-nosed hoedag blade which is widely used today" (emphasis added).
The mortar hoe is a tool specific to the manual mixing of mortar and concrete, and has the appearance of a typical square-bladed draw hoe with the addition of large holes in the blade.
Scuffle hoes
The Dutch hoe is designed to be pushed or pulled through the soil to cut the roots of weeds just under the surface. A Dutch hoe has a blade "sharp on every side so as to cut either forward and backward". The blade must be set in a plane slightly upwardly inclined in relation to the dual axis of the shaft. The user pushes the handle to move the blade forward, forcing it below the surface of the soil and maintaining it at a shallow depth by altering the angle of the handle while pushing. A scuffle hoe can easily cultivate the soil and remove weeds from the surface layer.
The hoop hoe (also known as the action hoe, oscillating hoe, hula hoe, stirrup hoe, scuffle hoe, loop hoe, pendulum weeder, or swivel hoe) has a double-edged blade that bends around to form a rectangle attached to the shaft. Weeds are cut just below the surface of the soil as the blade is pushed and pulled. The back-and-forth motion is highly effective at cutting weeds in loose or friable soil. The head is a loop of flat, sharpened strap metal, and blade widths vary between models. However, it is not as efficient as a draw hoe for moving soil.
The collinear hoe or collineal hoe has a narrow, razor-sharp blade which is used to slice the roots of weeds by skimming it just under the surface of the soil with a sweeping motion; it is unsuitable for tasks like soil moving and chopping. It was designed by Eliot Coleman in the late 1980s.
The swoe hoe is a modern, one-sided cutting hoe, being a variant of the Dutch hoe.
Other hoes
Hoes resembling neither draw nor scuffle hoes include:
Wheel hoes are, as the name suggests, a hoe or pair of hoes attached to one or more wheels. The hoes are frequently interchangeable with other tools. The historic manufacturer of the wheel hoe was Planet JR; wheel hoes of this design are still produced by Hoss Tools.
Horse hoes, resembling small ploughs, were a favourite implement of agricultural pioneer Jethro Tull, who claimed in his book "Horse Hoeing Husbandry" that "the horse-hoe will, in wide intervals, give wheat throughout all the stages of its life, as much nourishment as the discreet hoer pleases." The modern view is that, rather than nutrients being released, the crop simply benefits from the removal of competing plants. The introduction of the horse hoe, together with the better-known seed drill, brought about the great increase in farming productivity seen during the British Agricultural Revolution.
Fork hoes (also known as prong hoes, tined hoes, Canterbury hoes, drag forks or bent forks) are hoes that have two or more tines at right angles to the shaft. Their use is typically to loosen the soil, prior to planting or sowing.
Clam hoes, made for clam digging
Adze hoes, with the basic hoe shape but heavier and stronger and with traditional uses in trail making.
Pacul or cangkul (hoes similar to adze hoe from Malaysia and Indonesia)
Gang hoes for powered use (in use at least from 1887 to 1964).
History
Hoes are an ancient technology, predating the plough and perhaps preceded only by the digging stick. In Sumerian mythology, the invention of the hoe was credited to Enlil, the chief of the council of gods. The hoe features in a Sumerian disputation poem known as the Debate between the hoe and the plough, dating to the 3rd millennium BC, where a personified hoe debates a personified plough over which tool is the better. At the end of the poem, the hoe is declared the winner. Another composition from the same era and language, the Song of the hoe, is dedicated to the praise of this tool.
The hand-plough (mr) was depicted in predynastic Egyptian art, and hoes are also mentioned in ancient documents like the Code of Hammurabi (ca. 18th century BC) and the Book of Isaiah (c. 8th century BC).
Long-term use of short-handled hoes, which required the user to bend over from the waist to reach the ground, could cause permanent, crippling lower back pain in farm workers. Over time this brought change: after a struggle led by César Chávez, with the political help of Governor Jerry Brown, the California Supreme Court declared the short-handled hoe an unsafe hand tool, and it was banned under California law in 1975.
Archaeological use
Over the past fifteen or twenty years, hoes have become increasingly popular tools for professional archaeologists. While not as accurate as the traditional trowel, the hoe is an ideal tool for cleaning relatively large open areas of archaeological interest. It is faster to use than a trowel, and produces a much cleaner surface than an excavator bucket or shovel-scrape, and consequently on many open-area excavations the once-common line of kneeling archaeologists trowelling backwards has been replaced with a line of stooping archaeologists with hoes.
Growing season
https://en.wikipedia.org/wiki/Growing%20season

A season is a division of the year marked by changes in weather, ecology, and the amount of daylight. The growing season is that portion of the year in which local conditions (i.e. rainfall, temperature, daylight) permit normal plant growth. While each plant or crop has a specific growing season that depends on its genetic adaptation, growing seasons can generally be grouped into macro-environmental classes.
The axial tilt of the Earth inherently affects growing seasons across the globe.
Geography
Geographic conditions have major impacts on the growing season of any given area. Latitude is one of the major factors in the length of the growing season. The further one goes from the equator, the lower the angle of the Sun in the sky; sunlight is therefore less direct, and the soil takes longer to warm during the spring months, so the growing season begins later. The other major factor is altitude: high elevations have cooler temperatures, which shortens the growing season compared with a low-lying area at the same latitude.
Season extension
Locations
North America
The continental United States ranges from 49° north at the US-Canadian border to 25° north at the southern tip of the US-Mexican border. Most populated areas of Canada are below the 55th parallel. North of the 45th parallel, the growing season is generally 4–5 months, beginning in late April or early May and continuing to late September or early October, and is characterized by warm summers and cold winters with heavy snow. South of the 30th parallel, the growing season is year-round in many areas, with hot summers and mild winters. Cool-season crops such as peas, lettuce, and spinach are planted in fall or late winter, while warm-season crops such as beans and corn are planted in late winter to early spring. In the desert Southwest, the growing season effectively runs in winter, from October to April, as the summer months are characterized by extreme heat and arid conditions, making the region inhospitable for plants not adapted to this environment.
Certain crops such as tomatoes and melons originated in subtropical or tropical regions. Consequently, they require hot weather and a growing season of eight months or more. In colder climate areas where they cannot be directly sowed in the ground, these plants are usually started indoors in a greenhouse and transplanted outside in late spring or early summer.
Europe
The Pyrenees, Alps, and Southern Carpathians effectively divide Europe into two regions. Southern Europe and the Mediterranean are in general south of the 45th parallel. The growing seasons last six months or more, and the climate is characterized by hot summers and milder winters. Precipitation mainly falls between October and March, while summers are dry. In the extreme south of Europe, the growing season can be year-round. Vegetation on the Mediterranean islands is often evergreen because of the relatively warm winters.
Northern and Central Europe extend north from the 45th parallel past the Arctic Circle. The growing seasons are shorter because of the lower angle of the Sun and generally range from five months to as little as three in the highlands of Scandinavia and Russia. Climate on the Atlantic coast is considerably moderated by humid ocean air, which makes winters comparatively mild, and freezing weather or snow are rare. Because summers are also mild, many heat-loving plants such as maize do not typically grow in Northwestern Europe. Further inland, winters become considerably colder. Despite the short growing season in parts of Scandinavia and northern Russia, the extreme length of daylight during summer (17 hours or more) allows plants to put on significant growth.
Tropics and deserts
In some warm climates, such as the tropical savanna climates (Aw), the hot semi-arid climates (BSh), the hot desert climates (BWh) or the Mediterranean climates (Cs), the growing season is limited by the availability of water, with little growth in the dry season. Unlike in cooler climates where snow or soil freezing is a generally insurmountable obstacle to plant growth, it is often possible to greatly extend the growing season in hot climates by irrigation using water from cooler and/or wetter regions. This can in fact go so far as to allow year-round growth in areas that without irrigation could only support xerophytic plants.
In tropical regions, the growing season can be interrupted by periods of heavy rainfall, called the rainy season. In Colombia, for example, where coffee is grown and can be harvested year-round, there is no such rainy season; in Indonesia, another large coffee-producing area, the rainy season does occur and interrupts the growth of the coffee beans.
Mulch
https://en.wikipedia.org/wiki/Mulch

A mulch is a layer of material applied to the surface of soil. Reasons for applying mulch include conservation of soil moisture, improving fertility and health of the soil, reducing weed growth, and enhancing the visual appeal of the area.
A mulch is usually, but not exclusively, organic in nature. It may be permanent (e.g. plastic sheeting) or temporary (e.g. bark chips). It may be applied to bare soil or around existing plants. Mulches of manure and compost will be incorporated naturally into the soil by the activity of worms and other organisms. The process is used both in commercial crop production and in gardening, and when applied correctly, can improve soil productivity.
Living mulches include moss lawns and other ground covers.
Uses
Many materials are used as mulches, to retain soil moisture, regulate soil temperature, suppress weed growth, and for aesthetics. They are applied to the soil surface, around trees, on paths and flower beds, to prevent soil erosion on slopes, and in production areas for flower and vegetable crops. Mulch is normally applied in a layer several centimetres deep.
Although mulch can be applied around established plants at any time, the timing depends on the purpose. Towards the beginning of the growing season, mulch serves initially to warm the soil by helping it retain heat that would otherwise be lost during the night. This allows early seeding and transplanting of certain crops, and encourages faster growth. Mulch acts as an insulator. As the season progresses, mulch stabilizes the soil temperature and moisture, and prevents weeds from growing from seed.
In temperate climates, the effects of mulches depend upon the time of year in which they are applied. When applied in fall and winter, mulches delay the growth of perennial plants in the spring and prevent growth in winter during warm spells, thus limiting freeze–thaw damage.
The effect of mulch upon soil moisture content is complex. Mulch forms a layer between the soil and the atmosphere, reducing evaporation. However, mulch can also prevent water from reaching the soil by absorbing or blocking water from light rains, and overly thick layers of mulch can reduce oxygen in the soil.
In order to maximise the benefits of mulch, while minimizing its negative influences, it is often applied in late spring/early summer when soil temperatures have risen sufficiently, but soil moisture content is still relatively high. However, permanent mulch is also widely used and valued for its simplicity, as popularized by author Ruth Stout, who said, "My way is simply to keep a thick mulch of any vegetable matter that rots on both sides of my vegetable and flower garden all year long. As it decays and enriches the soils, I add more."
Materials
Materials used as mulches vary and depend on a number of factors. The choice takes into consideration availability, cost, appearance, durability, combustibility, rate of decomposition, the effect the material has on the soil (including chemical reactions and pH), and how clean it is, since some materials can contain weed seeds or plant pathogens.
A variety of materials are used as mulch:
Organic residues: grass clippings, leaves, hay, straw, kitchen scraps, comfrey, shredded bark, whole bark nuggets, sawdust, shells, woodchips, shredded newspaper, cardboard, wool, animal manure, etc. Many of these materials also act as a direct composting system, such as the mulched clippings of a mulching lawn mower, or other organics applied as sheet composting.
Compost: fully composted materials (humus) are used to avoid possible phytotoxicity problems. Materials that are free of seeds are ideally used, to prevent weeds being introduced by the mulch.
Rubber mulch: made from recycled tire rubber.
Plastic mulch: crops grow through slits or holes in thin plastic sheeting. This method is predominant in large-scale vegetable growing, with millions of acres cultivated under plastic mulch worldwide each year. Disposal of plastic mulch is cited as an environmental problem but there are also degradable plastic mulches.
Rock and gravel can also be used as a mulch. In cooler climates the heat retained by rocks may extend the growing season.
In some areas of the United States, such as central Pennsylvania and northern California, mulch is often referred to as "tanbark", even by manufacturers and distributors. In these areas, the word "mulch" is used specifically to refer to very fine tanbark or peat moss.
Organic mulches
Organic mulches decay over time and are temporary. The way a particular organic mulch decomposes and reacts to wetting by rain and dew affects its usefulness. Some mulches such as straw, peat, sawdust and other wood products may for a while negatively affect plant growth because of their wide carbon-to-nitrogen ratio: bacteria and fungi that decompose the materials remove nitrogen from the surrounding soil for their own growth. Organic mulches can mat down, forming a barrier that blocks water and air flow between the soil and the atmosphere. Vertically applied organic mulches can wick water from the soil to the surface, which can dry out the soil. Mulch made with wood can contain or feed termites, so care must be taken not to place mulch too close to houses or buildings that can be damaged by those insects. Mulches placed too close to plant stems and tree trunks can contribute to their failure. Some mulch manufacturers recommend putting mulch several inches away from buildings.
Commonly available organic mulches include:
Leaves
Leaves from deciduous trees, which drop their foliage in the autumn/fall. They tend to be dry and blow around in the wind, so are often chopped or shredded before application. As they decompose they adhere to each other but also allow water and moisture to seep down to the soil surface. Thick layers of entire leaves, especially of maples and oaks, can form a soggy mat in winter and spring which can impede the new growth of lawn grass and other plants. Dry leaves are used as winter mulches to protect plants from freezing and thawing in areas with cold winters; they are normally removed during spring.
Grass clippings
Grass clippings from mowed lawns are sometimes collected and used elsewhere as mulch. Grass clippings are dense and tend to mat down, so they are mixed with tree leaves or rough compost to provide aeration and to facilitate their decomposition without smelly putrefaction. Rotting fresh grass clippings can damage plants, as their rotting often produces a damaging buildup of trapped heat. Grass clippings are therefore often dried thoroughly before application, which militates against rapid decomposition and excessive heat generation. Fresh green grass clippings are relatively high in nitrate content, and when used as a mulch, much of the nitrate is returned to the soil; conversely, the routine removal of grass clippings from the lawn results in nitrogen deficiency for the lawn.
Peat moss
Peat moss, or sphagnum peat, is long-lasting and conveniently packaged, making it popular as a mulch. When wetted and dried, it can form a dense crust that does not allow water to soak in. When dry it can also burn, producing a smoldering fire. It is sometimes mixed with pine needles to produce a mulch that is friable. It can also lower the pH of the soil surface, making it useful as a mulch under acid-loving plants.
However, peat bogs are a valuable wildlife habitat, and peat is also one of the largest stores of carbon: in Britain, out of a total estimated 9,952 million tonnes of carbon in British vegetation and soils, 6,948 million tonnes are estimated to be in Scottish, mostly peatland, soils.
Wood chips
Wood chips are a byproduct of the pruning of trees by arborists, utilities and parks; they are used to dispose of bulky waste. Tree branches and large stems are rather coarse after chipping and tend to be used as a mulch at least three inches thick. The chips are used to conserve soil moisture, moderate soil temperature and suppress weed growth. Wood chip mulches on the top of the soil increase nutrient levels in soils and associated plant foliage, contrary to the myth that wood chip mulch ties up nitrogen. Wood chips are most often used under trees and shrubs. When used around soft-stemmed plants, an unmulched zone is left around the plant stems to prevent stem rot or other possible diseases. They are often used to mulch trails, because they are readily produced with little additional cost outside of the normal disposal cost of tree maintenance. Wood chips come in various colors.
Woodchip mulch is a byproduct of reprocessing used (untreated) timber (usually packaging pallets), to dispose of wood waste. The chips are used to conserve soil moisture, moderate soil temperature and suppress weed growth. Woodchip mulch is often used under trees, shrubs or large planting areas and can last much longer than arborist mulch. In addition, many consider woodchip mulch to be visually appealing, as it comes in various colors. Woodchips can also be reprocessed into playground woodchip to be used as an impact-attenuating playground surfacing.
Bark chips
Bark chips of various grades are produced from the outer corky bark layer of timber trees. Sizes vary from thin shredded strands to large coarse blocks. The finer types are very attractive but have a large exposed surface area that leads to quicker decay. Layers two or three inches deep are usually used; bark is relatively inert and its decay does not demand soil nitrates. Bark chips are also available in various colors.
Straw mulch / field hay / salt hay
Straw mulch or field hay or salt hay are lightweight and normally sold in compressed bales. They have an unkempt look and are used in vegetable gardens and as a winter covering. They are biodegradable and neutral in pH. They have good moisture retention and weed controlling properties but also are more likely to be contaminated with weed seeds. Salt hay is less likely to have weed seeds than field hay. Straw mulch is also available in various colors.
Pine straw
Needles that drop from pine trees are termed pine straw. It is available in bales. Pine straw has an attractive look and is used in landscape and garden settings. On application pine needles tend to weave together, a characteristic that helps the mulch hold stormwater on steeper slopes. This interlocking tendency combined with a resistance to floating gives it further advantages in maintaining cover and preventing soil erosion. The interlocking tendency also helps keep the mulch structure from collapsing and forming a barrier to infiltration. Pine straw is reputed to create ideal conditions for acid-loving plants. Pine straw may help to acidify soils but studies indicate this effect is often too small to be measurable.
Biodegradable mulch
Biodegradable mulches are made of plant starches and sugars or polyester fibers. The starches can come from plants such as wheat and maize. These mulch films may be somewhat more permeable, allowing more water into the soil. This mulch can prevent soil erosion, reduce weeding, conserve soil moisture, and increase the temperature of the soil, ultimately reducing the amount of herbicide used and the manual labour farmers may have to do throughout the growing season. At the end of the season these mulches start to break down from heat; microorganisms in the soil break the mulch down into two components, water and carbon dioxide, leaving no toxic residues behind. This kind of mulch requires less manual labour, since it does not need to be removed at the end of the season and can actually be tilled into the soil. It is, however, much more delicate than other kinds: it should be laid on a day that is not too hot, and with less tension than other synthetic mulches. It can be laid by machine or by hand, and a starchier mulch is ideal because it sticks to the soil better.
Cardboard / newspaper
Cardboard or newspaper can be used as semi-organic mulches. These are best used as a base layer upon which a heavier mulch such as compost is placed to prevent the lighter cardboard/newspaper layer from blowing away. By incorporating a layer of cardboard/newspaper into a mulch, the quantity of heavier mulch can be reduced, whilst improving the weed suppressant and moisture retaining properties of the mulch. However, additional labour is expended when planting through a mulch containing a cardboard/newspaper layer, as holes must be cut for each plant. Sowing seed through mulches containing a cardboard/newspaper layer is impractical. Application of newspaper mulch in windy weather can be facilitated by briefly pre-soaking the newspaper in water to increase its weight.
Synthetic
Rubber
Plastics
Plastic mulch used in large-scale commercial production is laid down with a tractor-drawn or standalone plastic mulch layer. This is usually part of a sophisticated mechanical process, where raised beds are formed, plastic is rolled out on top, and seedlings are transplanted through it. Drip irrigation is often required, with drip tape laid under the plastic, as plastic mulch is impermeable to water.
Polypropylene and polyethylene mulch
Polypropylene mulch is made up of polypropylene polymers, whereas polyethylene mulch is made up of polyethylene polymers; both polymers are common in plastics. Polyethylene is used mainly for weed reduction, whereas polypropylene is used mainly on perennials. The mulch is placed on top of the soil, by machine or by hand, with pegs keeping it tight against the soil. It can prevent soil erosion, reduce weeding, conserve soil moisture, and increase the temperature of the soil, ultimately reducing the amount of work a farmer has to do and the amount of herbicide applied during the growing period. Black and clear mulches capture sunlight and warm the soil, increasing the growth rate. White and other reflective colours also warm the soil, but they do not suppress weeds as well. This mulch may require other means of supplying water, such as drip irrigation, since it can reduce the amount of water that reaches the soil. It must be removed manually at the end of the season, because once it starts to break down it fragments into smaller pieces; if not removed in time, it eventually breaks down into ketones and aldehydes, polluting the soil. This mulch is technically biodegradable, but it does not break down into the same materials that the more natural biodegradable mulches do.
Colored mulch
Some organic mulches are colored red, brown, black, and other colors using synthetic additives. Isopropanolamine, specifically 1-Amino-2-propanol or monoisopropanolamine, may be used as a pigment dispersant and color fastener in these mulches. Types of mulch which can be dyed include: wood chips, bark chips (barkdust) and pine straw. Colored mulch is made by dyeing the mulch in a water-based solution of colorant and chemical binder.
When colored mulch first entered the market, most formulas were suspected to contain toxic substances, heavy metals and other contaminants. Today, "current investigations indicate that mulch colorants pose no threat to people, pets or the environment. The dyes currently used by the mulch and soil industry are similar to those used in the cosmetic and other manufacturing industries (i.e., iron oxide)", as stated by the Mulch and Soil Council.
According to colorant manufacturer Colorbiotics, independent laboratory studies show that the colorants used in colored mulch are safer than table salt or baking soda.
Colored mulch can be applied anywhere non-colored mulch is used (such as large bedded areas or around plants) and offers many of the same gardening benefits as traditional mulch, such as improving soil productivity and retaining moisture. As with non-colored mulch, more may need to be added as it decomposes to continue providing benefits to the soil and plants. However, if colored mulch has faded, spraying dye onto the previously spread mulch to restore its color is an option.
Anaerobic (sour) mulch
Organic mulches often smell like freshly cut wood but sometimes they start to smell like vinegar, ammonia, sulfur or silage. This happens when material with ample nitrogen content is not rotated often enough and it forms pockets of increased decomposition. When this occurs, the process may become anaerobic and produce phytotoxic materials in small quantities. Once exposed to the air, the process quickly reverts to an aerobic process, but the anaerobic metabolites may be present for a period of time. Plants low to the ground or freshly planted are the most susceptible, and phytotoxicity from the produced chemicals may prevent germination of some seeds.
Groundcovers (living mulches)
Groundcovers are plants which grow close to the ground, under the main crop, to slow the development of weeds and provide other benefits of mulch. They are usually fast-growing plants that continue growing with the main crops. By contrast, cover crops are incorporated into the soil or killed with herbicides. However, live mulches also may need to be mechanically or chemically killed eventually to prevent competition with the main crop.
Some groundcovers can perform additional roles in the garden such as nitrogen fixation in the case of clovers, dynamic accumulation of nutrients from the subsoil in the case of creeping comfrey (Symphytum ibericum), and even food production in the case of Rubus tricolor.
On-site production
Owing to the great bulk of mulch which is often required on a site, it is often impractical and expensive to source and import sufficient mulch materials. An alternative to importing mulch materials is to grow them on site in a "mulch garden" – an area of the site dedicated entirely to the production of mulch which is then transferred to the growing area. Mulch gardens should be sited as close as possible to the growing area so as to facilitate transfer of mulch materials.
Lewis structure
https://en.wikipedia.org/wiki/Lewis%20structure

Lewis structures, also called Lewis dot formulas, Lewis dot structures, electron dot structures, or Lewis electron dot structures (LEDs), are diagrams that show the bonding between atoms of a molecule, as well as the lone pairs of electrons that may exist in the molecule. Introduced by Gilbert N. Lewis in his 1916 article The Atom and the Molecule, a Lewis structure can be drawn for any covalently bonded molecule, as well as for coordination compounds. Lewis structures extend the concept of the electron dot diagram by adding lines between atoms to represent shared pairs in a chemical bond.
Lewis structures show each atom and its position in the structure of the molecule using its chemical symbol. Lines are drawn between atoms that are bonded to one another (pairs of dots can be used instead of lines). Excess electrons that form lone pairs are represented as pairs of dots, and are placed next to the atoms.
Although main-group elements of the second period and beyond usually react by gaining, losing, or sharing electrons until they have achieved a valence shell electron configuration with a full octet of eight electrons, hydrogen (H) can only form bonds which share just two electrons.
Construction and electron counting
For a neutral molecule, the total number of electrons represented in a Lewis structure is equal to the sum of the numbers of valence electrons on each individual atom. Non-valence electrons are not represented in Lewis structures.
Once the total number of valence electrons has been determined, they are placed into the structure according to these steps:
Initially, one line (representing a single bond) is drawn between each pair of connected atoms.
Each bond consists of a pair of electrons, so if t is the total number of electrons to be placed and n is the number of single bonds just drawn, t−2n electrons remain to be placed. These are temporarily drawn as dots, one per electron, to a maximum of eight per atom (two in the case of hydrogen), minus two for each bond.
Electrons are distributed first to the outer atoms and then to the others, until there are no more to be placed.
Finally, each atom (other than hydrogen) that is surrounded by fewer than eight electrons (counting each bond as two) is processed as follows: For every two electrons needed, two dots are deleted from a neighboring atom and an additional line is drawn between the two atoms. This represents the conversion of a lone pair of electrons into a bonding pair, which adds two electrons to the former atom's valence shell while leaving the latter's electron count unchanged.
In the preceding steps, if there are not enough electrons to fill the valence shells of all atoms, preference is given to those atoms whose electronegativity is higher.
Lewis structures for polyatomic ions may be drawn by the same method. However when counting electrons, negative ions should have extra electrons placed in their Lewis structures; positive ions should have fewer electrons than an uncharged molecule. When the Lewis structure of an ion is written, the entire structure is placed in brackets, and the charge is written as a superscript on the upper right, outside the brackets.
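To make the counting in these steps concrete, here is a minimal Python sketch (an illustration added here, not part of the original article; the small valence-electron table and the molecule encoding are assumptions chosen for the example). It computes the total electron count t for a molecule or ion, and the t − 2n electrons that remain to be placed as dots after the initial single bonds are drawn:

```python
# Valence electron counts for a few main-group elements (hand-written lookup).
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "F": 7, "S": 6, "Cl": 7}

def electrons_to_place(atoms, n_single_bonds, charge=0):
    """Return (t, t - 2n): total valence electrons for the species, and the
    electrons left to place as dots after drawing n initial single bonds.

    atoms: element symbols, e.g. ["C", "O", "O"] for CO2
    charge: ion charge; a negative (anion) charge adds electrons
    """
    t = sum(VALENCE[a] for a in atoms) - charge   # subtracting -1 adds an electron
    return t, t - 2 * n_single_bonds              # each bond consumes two electrons

# CO2: 4 + 6 + 6 = 16 electrons; two C-O single bonds leave 12 to place as dots.
print(electrons_to_place(["C", "O", "O"], 2))             # (16, 12)
# Nitrite, NO2-: 5 + 6 + 6 + 1 = 18 electrons; two N-O bonds leave 14.
print(electrons_to_place(["N", "O", "O"], 2, charge=-1))  # (18, 14)
```

The nitrite figures match the worked example later in this article: 18 electrons in total, of which 14 remain as dots after the two single bonds are drawn.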
Miburo method
A simpler method has been proposed for constructing Lewis structures, eliminating the need for electron counting: the atoms are drawn showing the valence electrons; bonds are then formed by pairing up valence electrons of the atoms involved in the bond-making process, and anions and cations are formed by adding or removing electrons to/from the appropriate atoms.
A trick is to count up valence electrons, then count up the number of electrons needed to complete the octet rule (or with hydrogen just 2 electrons), then take the difference of these two numbers. The answer is the number of electrons that make up the bonds. The rest of the electrons just go to fill all the other atoms' octets.
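As a worked check of this trick (an illustration added here, not from the original text): for carbon dioxide, CO2, the available valence electrons number 4 + 2 × 6 = 16, while full octets on all three atoms would require 3 × 8 = 24 electrons. The difference, 24 − 16 = 8, is the number of electrons that make up the bonds, i.e. four bonds, consistent with the two double bonds of O=C=O; the remaining 16 − 8 = 8 electrons fill the two oxygen atoms' octets as lone pairs.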
Lever method
Another simple and general procedure to write Lewis structures and resonance forms has been proposed.
This system works in nearly all cases; however, there are three instances where it does not. These exceptions are outlined in the table below.
Formal charge
In terms of Lewis structures, formal charge is used in the description, comparison, and assessment of likely topological and resonance structures by determining the apparent electronic charge of each atom within, based upon its electron dot structure, assuming exclusive covalency or non-polar bonding. It has uses in determining possible electron re-configuration when referring to reaction mechanisms, and often results in the same sign as the partial charge of the atom, with exceptions. In general, the formal charge of an atom can be calculated as

FC = V − N − B/2

where:
FC is the formal charge,
V is the number of valence electrons in a free atom of the element,
N is the number of unshared electrons on the atom, and
B is the total number of electrons in bonds the atom has with another atom.
The formal charge of an atom is computed as the difference between the number of valence electrons that a neutral atom would have and the number of electrons that belong to it in the Lewis structure. Electrons in covalent bonds are split equally between the atoms involved in the bond. The total of the formal charges on an ion should be equal to the charge on the ion, and the total of the formal charges on a neutral molecule should be equal to zero.
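A minimal sketch of this bookkeeping in Python (the function and the carbon monoxide example are illustrative additions, not taken from the article):

```python
def formal_charge(valence, unshared, bonding):
    """FC = V - N - B/2: valence electrons of the free atom, minus unshared
    (lone-pair) electrons, minus half of the electrons shared in bonds."""
    return valence - unshared - bonding // 2

# Carbon monoxide, :C≡O: (a triple bond, one lone pair on each atom):
print(formal_charge(4, 2, 6))  # C: 4 - 2 - 3 = -1
print(formal_charge(6, 2, 6))  # O: 6 - 2 - 3 = +1
# The formal charges sum to 0, the charge of the neutral CO molecule.
```

As the last comment notes, the formal charges sum to the overall charge of the species, which is a quick consistency check on any hand-drawn structure.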
Resonance
For some molecules and ions, it is difficult to determine which lone pairs should be moved to form double or triple bonds, and two or more different resonance structures may be written for the same molecule or ion. In such cases it is usual to write all of them with two-way arrows in between. This is sometimes the case when multiple atoms of the same type surround the central atom, and is especially common for polyatomic ions.
When this situation occurs, the molecule's Lewis structure is said to be a resonance structure, and the molecule exists as a resonance hybrid. Each of the different possibilities is superimposed on the others, and the molecule is considered to have a Lewis structure equivalent to some combination of these states.
The nitrate ion (NO3−), for instance, must form a double bond between nitrogen and one of the oxygens to satisfy the octet rule for nitrogen. However, because the molecule is symmetrical, it does not matter which of the oxygens forms the double bond. In this case, there are three possible resonance structures. Expressing resonance when drawing Lewis structures may be done either by drawing each of the possible resonance forms and placing double-headed arrows between them or by using dashed lines to represent the partial bonds (although the latter is a good representation of the resonance hybrid which is not, formally speaking, a Lewis structure).
When comparing resonance structures for the same molecule, usually those with the fewest formal charges contribute more to the overall resonance hybrid. When formal charges are necessary, resonance structures that have negative charges on the more electronegative elements and positive charges on the less electronegative elements are favored.
Single bonds can also be moved in the same way to create resonance structures for hypervalent molecules such as sulfur hexafluoride, which is the correct description according to quantum chemical calculations instead of the common expanded octet model.
The resonance structure should not be interpreted to indicate that the molecule switches between forms, but that the molecule acts as the average of multiple forms.
Example
The formula of the nitrite ion is NO2−.
Nitrogen is the least electronegative atom of the two, so it is the central atom by multiple criteria.
Count valence electrons. Nitrogen has 5 valence electrons; each oxygen has 6, for a total of (6 × 2) + 5 = 17. The ion has a charge of −1, which indicates an extra electron, so the total number of electrons is 18.
Connect the atoms by single bonds. Each oxygen must be bonded to the nitrogen, which uses four electrons—two in each bond.
Place lone pairs. The 14 remaining electrons should initially be placed as 7 lone pairs. Each oxygen may take a maximum of 3 lone pairs, giving each oxygen 8 electrons including the bonding pair. The seventh lone pair must be placed on the nitrogen atom.
Satisfy the octet rule. Both oxygen atoms currently have 8 electrons assigned to them. The nitrogen atom has only 6 electrons assigned to it. One of the lone pairs on an oxygen atom must form a double bond, but either atom will work equally well. Therefore, there is a resonance structure.
Tie up loose ends. Two Lewis structures must be drawn: Each structure has one of the two oxygen atoms double-bonded to the nitrogen atom. The second oxygen atom in each structure will be single-bonded to the nitrogen atom. Place brackets around each structure, and add the charge (−) to the upper right outside the brackets. Draw a double-headed arrow between the two resonance forms.
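The bookkeeping of this example can be checked mechanically. The following sketch (an added illustration; the tuple encoding of atoms is an assumption made for the example) verifies that, in one resonance form of NO2−, every atom ends up with a full octet and the formal charges sum to the −1 charge of the ion:

```python
# One resonance form, O=N-O(-): (symbol, valence electrons, lone pairs, bond orders).
form_a = [("N", 5, 1, [2, 1]),  # nitrogen: one lone pair, one double and one single bond
          ("O", 6, 2, [2]),     # double-bonded oxygen: two lone pairs
          ("O", 6, 3, [1])]     # single-bonded oxygen: three lone pairs

total_charge = 0
for symbol, valence, lone_pairs, bonds in form_a:
    shared = 2 * sum(bonds)                      # two electrons per unit of bond order
    octet = 2 * lone_pairs + shared              # electrons counted around this atom
    fc = valence - 2 * lone_pairs - shared // 2  # formal charge, FC = V - N - B/2
    total_charge += fc
    print(symbol, "electrons:", octet, "formal charge:", fc)

print("total charge:", total_charge)  # -1, matching the charge of the nitrite ion
```

The mirror-image resonance form gives the same totals with the two oxygens exchanged, which is why the two structures contribute equally to the resonance hybrid.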
Alternative formations
Chemical structures may be written in more compact forms, particularly when showing organic molecules. In condensed structural formulas, many or even all of the covalent bonds may be left out, with subscripts indicating the number of identical groups attached to a particular atom.
Another shorthand structural diagram is the skeletal formula (also known as a bond-line formula or carbon skeleton diagram). In a skeletal formula, carbon atoms are not signified by the symbol C but by the vertices of the lines. Hydrogen atoms bonded to carbon are not shown—they can be inferred by counting the number of bonds to a particular carbon atom—each carbon is assumed to have four bonds in total, so any bonds not shown are, by implication, to hydrogen atoms.
Other diagrams may be more complex than Lewis structures, showing bonds in 3D using various forms such as space-filling diagrams.
Usage and limitations
Despite their simplicity and development in the early twentieth century, when understanding of chemical bonding was still rudimentary, Lewis structures capture many of the key features of the electronic structure of a range of molecular systems, including those of relevance to chemical reactivity. Thus, they continue to enjoy widespread use by chemists and chemistry educators. This is especially true in the field of organic chemistry, where the traditional valence-bond model of bonding still dominates, and mechanisms are often understood in terms of curve-arrow notation superimposed upon skeletal formulae, which are shorthand versions of Lewis structures. Due to the greater variety of bonding schemes encountered in inorganic and organometallic chemistry, many of the molecules encountered require the use of fully delocalized molecular orbitals to adequately describe their bonding, making Lewis structures comparatively less important (although they are still common).
There are simple and archetypal molecular systems for which a Lewis description, at least in unmodified form, is misleading or inaccurate. Notably, the naive drawing of Lewis structures for molecules known experimentally to contain unpaired electrons (e.g., O2, NO, and ClO2) leads to incorrect inferences of bond orders, bond lengths, and/or magnetic properties. A simple Lewis model also does not account for the phenomenon of aromaticity. For instance, Lewis structures do not offer an explanation for why cyclic C6H6 (benzene) experiences special stabilization beyond normal delocalization effects, while C4H4 (cyclobutadiene) actually experiences a special destabilization. Molecular orbital theory provides the most straightforward explanation for these phenomena.
Supergravity
https://en.wikipedia.org/wiki/Supergravity

In theoretical physics, supergravity (supergravity theory; SUGRA for short) is a modern field theory that combines the principles of supersymmetry and general relativity; this is in contrast to non-gravitational supersymmetric theories such as the Minimal Supersymmetric Standard Model. Supergravity is the gauge theory of local supersymmetry. Since the supersymmetry (SUSY) generators form, together with the Poincaré algebra, a superalgebra called the super-Poincaré algebra, supersymmetry as a gauge theory makes gravity arise in a natural way.
Gravitons
Like all covariant approaches to quantum gravity, supergravity contains a spin-2 field whose quantum is the graviton. Supersymmetry requires the graviton field to have a superpartner. This field has spin 3/2 and its quantum is the gravitino. The number of gravitino fields is equal to the number of supersymmetries.
History
Gauge supersymmetry
The first theory of local supersymmetry was proposed by Dick Arnowitt and Pran Nath in 1975 and was called gauge supersymmetry.
Supergravity
The first model of 4-dimensional supergravity (without this denotation) was formulated by Dmitri Vasilievich Volkov and Vyacheslav A. Soroka in 1973, emphasizing the importance of spontaneous supersymmetry breaking for the possibility of a realistic model. The minimal version of 4-dimensional supergravity (with unbroken local supersymmetry) was constructed in detail in 1976 by Dan Freedman, Sergio Ferrara and Peter van Nieuwenhuizen. In 2019 the three were awarded a special Breakthrough Prize in Fundamental Physics for the discovery. The key issue of whether or not the spin-3/2 field is consistently coupled was resolved in the nearly simultaneous paper by Deser and Zumino, which independently proposed the minimal 4-dimensional model. It was quickly generalized to many different theories in various numbers of dimensions and involving additional (N) supersymmetries. Supergravity theories with N > 1 are usually referred to as extended supergravity (SUEGRA). Some supergravity theories were shown to be related to certain higher-dimensional supergravity theories via dimensional reduction (e.g. N = 1, 11-dimensional supergravity is dimensionally reduced on T7 to 4-dimensional, ungauged, N = 8 supergravity). The resulting theories were sometimes referred to as Kaluza–Klein theories, after Kaluza and Klein, who constructed a 5-dimensional gravitational theory in 1919 whose 4-dimensional non-massive modes, when the theory is dimensionally reduced on a circle, describe electromagnetism coupled to gravity.
mSUGRA
mSUGRA means minimal SUper GRAvity. The construction of a realistic model of particle interactions within the N = 1 supergravity framework, where supersymmetry (SUSY) breaks by a super Higgs mechanism, was carried out by Ali Chamseddine, Richard Arnowitt and Pran Nath in 1982. Collectively now known as minimal supergravity Grand Unification Theories (mSUGRA GUT), gravity mediates the breaking of SUSY through the existence of a hidden sector. mSUGRA naturally generates the soft SUSY breaking terms as a consequence of the super Higgs effect. Radiative breaking of electroweak symmetry through Renormalization Group Equations (RGEs) follows as an immediate consequence.
Due to its predictive power, requiring only four input parameters and a sign to determine the low energy phenomenology from the scale of Grand Unification, it is a widely investigated model of particle physics.
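In the standard parametrization (conventional notation, not specific to this article), the mSUGRA inputs at the unification scale are

$m_0, \quad m_{1/2}, \quad A_0, \quad \tan\beta, \quad \operatorname{sgn}(\mu),$

where $m_0$ is the universal scalar mass, $m_{1/2}$ the universal gaugino mass, $A_0$ the universal trilinear coupling, $\tan\beta$ the ratio of the two Higgs vacuum expectation values, and $\mu$ the Higgs mixing parameter whose magnitude is fixed by electroweak symmetry breaking.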
11D: the maximal SUGRA
One of these supergravities, the 11-dimensional theory, generated considerable excitement as the first potential candidate for the theory of everything. This excitement was built on four pillars, two of which have now been largely discredited:
Werner Nahm showed that 11 is the largest number of dimensions consistent with a single graviton; more dimensions produce particles with spins greater than 2 (see the spinor counting sketched below). However, if two of these dimensions are time-like, these problems are avoided in 12 dimensions, as Itzhak Bars emphasizes.
In 1981 Ed Witten showed that 11 is the smallest number of dimensions big enough to contain the gauge groups of the Standard Model, namely SU(3) for the strong interactions and SU(2) times U(1) for the electroweak interactions. However, many techniques exist to embed the Standard Model gauge group in supergravity in any number of dimensions, such as the obligatory gauge symmetry in type I and heterotic string theories, the gauge symmetry obtained in type II string theory by compactification on certain Calabi–Yau manifolds, and the gauge symmetries engineered by D-branes.
In 1978 Eugène Cremmer, Bernard Julia and Joël Scherk (CJS) found the classical action for an 11-dimensional supergravity theory. This remains today the only known classical 11-dimensional theory with local supersymmetry and no fields of spin higher than two. Other known 11-dimensional theories, while quantum-mechanically inequivalent, reduce to the CJS theory when one imposes the classical equations of motion. However, in the mid-1980s Bernard de Wit and Hermann Nicolai found an alternate theory, D = 11 Supergravity with Local SU(8) Invariance. While not manifestly Lorentz-invariant, it is in many ways superior, because it dimensionally reduces to the 4-dimensional theory without recourse to the classical equations of motion.
In 1980 Peter Freund and M. A. Rubin showed that compactification from 11 dimensions preserving all the SUSY generators could occur in two ways, leaving only 4 or 7 macroscopic dimensions with the others compact; the noncompact dimensions have to form an anti-de Sitter space. There are many possible compactifications, but the Freund–Rubin compactification is distinguished by its invariance under all of the supersymmetry transformations, which preserves the action.
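Schematically (the standard ansatz, with signs and normalizations varying by author), the Freund–Rubin mechanism takes the four-form field strength of 11-dimensional supergravity proportional to the volume form of a four-dimensional factor,

$F_{\mu\nu\rho\sigma} = 3m\,\epsilon_{\mu\nu\rho\sigma}, \qquad F = 0 \text{ along the remaining directions},$

and the Einstein equations then force the geometry to be a product AdS$_4 \times X^7$ with $X^7$ a compact Einstein space (the round $S^7$ in the maximally supersymmetric case); placing the flux on the compact factor instead gives AdS$_7 \times S^4$.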
Finally, the first two results each appeared to establish 11 dimensions, the third result appeared to specify the theory, and the last result explained why the observed universe appears to be four-dimensional.
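The counting behind Nahm's bound (a standard argument, quoting conventional real dimensions of minimal spinors) is that an interacting theory with finitely many fields and a single graviton admits at most 32 supercharges, and

$\dim_{\mathbb{R}}(\text{minimal spinor}) = 32 \text{ in signature } (10,1), \qquad 64 \text{ in signature } (11,1),$

so the bound fails above eleven dimensions with one time; in signature (10,2), however, a Majorana–Weyl condition again allows 32 components, which is the loophole exploited in the two-time proposal.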
Many of the details of the theory were fleshed out by Peter van Nieuwenhuizen, Sergio Ferrara and Daniel Z. Freedman.
The end of the SUGRA era
The initial excitement over 11-dimensional supergravity soon waned, as various failings were discovered, and attempts to repair the model failed as well. Problems included:
The compact manifolds which were known at the time and which contained the standard model were not compatible with supersymmetry, and could not hold quarks or leptons. One suggestion was to replace the compact dimensions with the 7-sphere, with the symmetry group SO(8), or the squashed 7-sphere, with symmetry group SO(5) times SU(2).
Until recently, the physical neutrinos seen in experiments were believed to be massless, and appeared to be left-handed, a phenomenon referred to as the chirality of the Standard Model. It was very difficult to construct a chiral fermion from a compactification — the compactified manifold needed to have singularities, but physics near singularities did not begin to be understood until the advent of orbifold conformal field theories in the late 1980s.
Supergravity models generically result in an unrealistically large cosmological constant in four dimensions; that constant is difficult to remove, so removing it requires fine-tuning. This is still a problem today.
Quantization of the theory led to quantum field theory gauge anomalies rendering the theory inconsistent. In the intervening years physicists have learned how to cancel these anomalies.
Some of these difficulties could be avoided by moving to a 10-dimensional theory involving superstrings. However, by moving to 10 dimensions one loses the sense of uniqueness of the 11-dimensional theory.
The core breakthrough for the 10-dimensional theory, known as the first superstring revolution, was a demonstration by Michael B. Green, John H. Schwarz and David Gross that there are only three supergravity models in 10 dimensions which have gauge symmetries and in which all of the gauge and gravitational anomalies cancel. These were theories built on the groups SO(32) and E8 × E8, the direct product of two copies of E8. Today we know that, using D-branes for example, gauge symmetries can be introduced in other 10-dimensional theories as well.
The second superstring revolution
Initial excitement about the 10-dimensional theories, and the string theories that provide their quantum completion, died by the end of the 1980s. There were too many Calabi–Yaus to compactify on, many more than Yau had estimated, as he admitted in December 2005 at the 23rd International Solvay Conference in Physics. None quite gave the standard model, but it seemed as though one could get close with enough effort in many distinct ways. Moreover, no one understood the theory beyond the regime of applicability of string perturbation theory.
There was a comparatively quiet period at the beginning of the 1990s; however, several important tools were developed. For example, it became apparent that the various superstring theories were related by "string dualities", some of which relate weak string-coupling (perturbative) physics in one model to strong string-coupling (non-perturbative) physics in another.
Then the second superstring revolution occurred. Joseph Polchinski realized that obscure string theory objects, called D-branes, which he had discovered six years earlier, are stringy versions of the p-branes known in supergravity theories. These p-branes are not captured by string perturbation theory; thanks to supersymmetry, however, p-branes in supergravity could be understood well beyond the limits of string perturbation theory.
Armed with this new nonperturbative tool, Edward Witten and many others could show all of the perturbative string theories as descriptions of different states in a single theory that Witten named M-theory. Furthermore, he argued that in M-theory's long wavelength limit, i.e. when the quantum wavelength associated with objects in the theory appears much larger than the size of the 11th dimension, the theory is described by the 11-dimensional supergravity that had fallen out of favor with the first superstring revolution 10 years earlier, together with its 2- and 5-branes.
Supergravity thus comes full circle, providing a common framework for understanding features of string theories, M-theory, and their compactifications to lower spacetime dimensions.
Relation to superstrings
The term "low energy limits" labels some 10-dimensional supergravity theories. These arise as the massless, tree-level approximation of string theories. True effective field theories of string theories, rather than truncations, are rarely available. Due to string dualities, the conjectured 11-dimensional M-theory is required to have 11-dimensional supergravity as a "low energy limit". However, this doesn't necessarily mean that string theory/M-theory is the only possible UV completion of supergravity; supergravity research is useful independent of those relations.
4D N = 1 SUGRA
Before we move on to SUGRA proper, let's recapitulate some important details about general relativity. We have a 4D differentiable manifold M with a Spin(3,1) principal bundle over it. This principal bundle represents the local Lorentz symmetry. In addition, we have a vector bundle T over the manifold with the fiber having four real dimensions and transforming as a vector under Spin(3,1).
We have an invertible linear map from the tangent bundle TM to T. This map is the vierbein. The local Lorentz symmetry has a gauge connection associated with it, the spin connection.
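In this language (standard conventions, with $\eta$ the Minkowski metric on the fiber of T), the vierbein $e_\mu{}^{\hat{a}}$ encodes the spacetime metric and the spin connection $\omega_\mu{}^{\hat{a}\hat{b}}$ defines the covariant derivative of spinors:

$g_{\mu\nu} = e_\mu{}^{\hat{a}}\, e_\nu{}^{\hat{b}}\, \eta_{\hat{a}\hat{b}}, \qquad D_\mu \psi = \partial_\mu \psi + \tfrac{1}{4}\, \omega_\mu{}^{\hat{a}\hat{b}}\, \gamma_{\hat{a}\hat{b}}\, \psi,$

where $\gamma_{\hat{a}\hat{b}} = \tfrac{1}{2}[\gamma_{\hat{a}}, \gamma_{\hat{b}}]$.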
The following discussion will be in superspace notation, as opposed to the component notation, which isn't manifestly covariant under SUSY. There are actually many different versions of SUGRA out there which are inequivalent in the sense that their actions and constraints upon the torsion tensor are different, but ultimately equivalent in that we can always perform a field redefinition of the supervierbeins and spin connection to get from one version to another.
In 4D N=1 SUGRA, we have a 4|4 real differentiable supermanifold M, i.e. we have 4 real bosonic dimensions and 4 real fermionic dimensions. As in the nonsupersymmetric case, we have a Spin(3,1) principal bundle over M. We have an R4|4 vector bundle T over M. The fiber of T transforms under the local Lorentz group as follows; the four real bosonic dimensions transform as a vector and the four real fermionic dimensions transform as a Majorana spinor. This Majorana spinor can be reexpressed as a complex left-handed Weyl spinor and its complex conjugate right-handed Weyl spinor (they're not independent of each other). We also have a spin connection as before.
We will use the following conventions; the spatial (both bosonic and fermionic) indices will be indicated by M, N, ... . The bosonic spatial indices will be indicated by μ, ν, ..., the left-handed Weyl spatial indices by α, β, ..., and the right-handed Weyl spatial indices by $\dot{\alpha}$, $\dot{\beta}$, ... . The indices for the fiber of T will follow a similar notation, except that they will be hatted like this: $\hat{M}$, $\hat{\mu}$, $\hat{\alpha}$, $\hat{\dot{\alpha}}$. See van der Waerden notation for more details. The supervierbein is denoted by $e_M{}^{\hat{N}}$, and the spin connection by $\omega_{M\hat{N}}{}^{\hat{P}}$. The inverse supervierbein is denoted by $E_{\hat{N}}{}^{M}$.
The supervierbein and spin connection are real in the sense that they satisfy the reality conditions

$e_M{}^{\hat{N}}(x, \theta, \bar{\theta})^* = e_{M^*}{}^{\hat{N}^*}(x, \theta, \bar{\theta}), \qquad \omega_{M\hat{N}}{}^{\hat{P}}(x, \theta, \bar{\theta})^* = \omega_{M^*\hat{N}^*}{}^{\hat{P}^*}(x, \theta, \bar{\theta}),$

where $\mu^* = \mu$, $\alpha^* = \dot{\alpha}$, and $\dot{\alpha}^* = \alpha$.
The covariant derivative is defined as

$\mathcal{D}_{\hat{M}} f = E_{\hat{M}}{}^{N} \left( \partial_N f + \omega_N[f] \right),$

where $\omega_N[f]$ denotes the action of the spin connection on f.
The covariant exterior derivative as defined over supermanifolds needs to be super graded. This means that every time we interchange two fermionic indices, we pick up a +1 sign factor, instead of -1.
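For instance (the standard super-graded convention), two coordinate differentials obey

$dz^M \wedge dz^N = -(-1)^{|M||N|}\, dz^N \wedge dz^M,$

where |M| is 0 for a bosonic index and 1 for a fermionic index, so two fermionic differentials commute under the wedge product instead of anticommuting.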
The presence or absence of R symmetries is optional, but if R-symmetry exists, the integrand over the full superspace has to have an R-charge of 0 and the integrand over chiral superspace has to have an R-charge of 2.
A chiral superfield X is a superfield which satisfies $\bar{\mathcal{D}}_{\hat{\dot{\alpha}}} X = 0$. In order for this constraint to be consistent, we require the integrability conditions that $\{\bar{\mathcal{D}}_{\hat{\dot{\alpha}}}, \bar{\mathcal{D}}_{\hat{\dot{\beta}}}\} = c_{\hat{\dot{\alpha}}\hat{\dot{\beta}}}{}^{\hat{\dot{\gamma}}}\, \bar{\mathcal{D}}_{\hat{\dot{\gamma}}}$ for some coefficients c.
Unlike nonSUSY GR, the torsion has to be nonzero, at least with respect to the fermionic directions. Already, even in flat superspace, $T_{\hat{\alpha}\hat{\dot{\beta}}}{}^{\hat{\mu}} = 2i\,\sigma^{\hat{\mu}}_{\hat{\alpha}\hat{\dot{\beta}}} \neq 0$.
In one version of SUGRA (but certainly not the only one), we have the following constraints upon the torsion tensor:

$T_{\hat{\underline{\alpha}}\hat{\underline{\beta}}}{}^{\hat{\underline{\gamma}}} = 0, \qquad T_{\hat{\alpha}\hat{\dot{\beta}}}{}^{\hat{\mu}} = 2i\,\sigma^{\hat{\mu}}_{\hat{\alpha}\hat{\dot{\beta}}}, \qquad T_{\hat{\alpha}\hat{\beta}}{}^{\hat{\mu}} = T_{\hat{\dot{\alpha}}\hat{\dot{\beta}}}{}^{\hat{\mu}} = 0, \qquad T_{\hat{\underline{\alpha}}\hat{\mu}}{}^{\hat{\nu}} = 0, \qquad T_{\hat{\mu}\hat{\nu}}{}^{\hat{\rho}} = 0.$

Here, $\underline{\alpha}$ is a shorthand notation to mean the index runs over either the left or right Weyl spinors.
The superdeterminant of the supervierbein, $\left|e\right|$, gives us the volume factor for M. Equivalently, we have the volume 4|4-superform.
If we complexify the superdiffeomorphisms, there is a gauge in which the antichiral components of the supervierbein are brought to a standard form, eliminating the dependence on the antichiral fermionic coordinates. The resulting chiral superspace has the coordinates x and Θ.
R is a scalar valued chiral superfield derivable from the supervielbeins and spin connection. If f is any superfield, $\left(\bar{\mathcal{D}}^2 - 8R\right) f$ is always a chiral superfield.
The action for a SUGRA theory with chiral superfields X is given by

$S = \int d^4x\, d^2\Theta\, 2\mathcal{E} \left[ \frac{3}{8} \left( \bar{\mathcal{D}}^2 - 8R \right) e^{-K(\bar{X}, X)/3} + W(X) \right] + \text{c.c.},$

where K is the Kähler potential, W is the superpotential, and $2\mathcal{E}$ is the chiral volume factor.
Unlike the case for flat superspace, adding a constant to either the Kähler potential or the superpotential is now physical. A constant shift to the Kähler potential changes the effective Planck mass, while a constant shift to the superpotential changes the effective cosmological constant. As the effective Planck mass now depends upon the value of the chiral superfield X, we need to rescale the supervierbeins (a field redefinition) to get a constant Planck mass. This is called the Einstein frame.
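The effect of such shifts can be read off from the F-term scalar potential of 4D N = 1 supergravity (the standard formula, written here in reduced Planck units):

$V = e^{K} \left( K^{i\bar{j}}\, D_i W\, \overline{D_j W} - 3\, |W|^2 \right), \qquad D_i W = \partial_i W + (\partial_i K)\, W.$

A constant added to W feeds into the $-3|W|^2$ term and shifts the vacuum energy, while a constant added to K rescales the potential by an overall factor of $e^K$; in rigid supersymmetry, by contrast, both shifts drop out.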
N = 8 supergravity in 4 dimensions
N = 8 supergravity is the most symmetric quantum field theory which involves gravity and a finite number of fields. It can be found from a dimensional reduction of 11D supergravity by making the size of seven of the dimensions go to zero. It has 8 supersymmetries, which is the most any gravitational theory can have, since there are 8 half-steps between spin 2 and spin −2. (The graviton, a spin-2 particle, carries the highest spin in this theory.) More supersymmetries would mean the particles would have superpartners with spins higher than 2. The only consistent theories with spins higher than 2 involve an infinite number of particles (such as string theory and higher-spin theories). Stephen Hawking in his A Brief History of Time speculated that this theory could be the Theory of Everything. However, in later years this was abandoned in favour of string theory. There has been renewed interest in the 21st century with the possibility that this theory may be finite.
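The helicity counting behind this statement is standard: starting from helicity +2 and acting with the 8 supercharges, each of which lowers helicity by 1/2, the N = 8 multiplet contains

$\binom{8}{0} = 1 \text{ graviton}, \quad \binom{8}{1} = 8 \text{ gravitini}, \quad \binom{8}{2} = 28 \text{ vectors}, \quad \binom{8}{3} = 56 \text{ fermions}, \quad \binom{8}{4} = 70 \text{ scalars},$

together with the opposite helicities, for $2^8 = 256$ states in all; a ninth supersymmetry would force states beyond helicity 2.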
Higher-dimensional SUGRA
Higher-dimensional SUGRA is the higher-dimensional, supersymmetric generalization of general relativity. Supergravity can be formulated in any number of dimensions up to eleven. Higher-dimensional SUGRA focuses upon supergravity in greater than four dimensions.
The supercharges occur in spinors, and the number of supercharges in a spinor depends on the dimension and the signature of spacetime. Thus the limit on the number of supercharges cannot be satisfied in a spacetime of arbitrary dimension. Some theoretical examples in which this is satisfied are:
12-dimensional two-time theory
11-dimensional maximal supergravity
10-dimensional supergravity theories
Type IIA supergravity: N = (1, 1)
Type IIB supergravity: N = (2, 0)
Type I supergravity: N = (1, 0)
9d supergravity theories
Maximal 9d supergravity from 10d
T-duality
N = 1 Gauged supergravity
The supergravity theories that have attracted the most interest contain no spins higher than two. This means, in particular, that they do not contain any fields that transform as symmetric tensors of rank higher than two under Lorentz transformations. The consistency of interacting higher spin field theories is, however, presently a field of very active interest.
| Physical sciences | Particle physics: General | Physics |
14260687 | https://en.wikipedia.org/wiki/Mobile%20operating%20system | Mobile operating system | A mobile operating system is an operating system used for smartphones, tablets, smartwatches, smartglasses, or other non-laptop personal mobile computing devices. While computers such as typical/mobile laptops are "mobile", the operating systems used on them are usually not considered mobile, as they were originally designed for desktop computers that historically did not have or need specific mobile features. This "fine line" distinguishing mobile from other forms has become blurred in recent years, as newer devices have become smaller and more mobile, unlike the hardware of the past. Key developments blurring this line are the introduction of tablet computers, light laptops, and the hybridization of the two in 2-in-1 PCs.
Mobile operating systems combine features of a desktop computer operating system with other features useful for mobile or handheld use, usually including a wireless inbuilt modem and SIM tray for telephone and data connection. In Q1 2018, over 123 million smartphones were sold (the most ever recorded), with 60.2% running Android and 20.9% running iOS. Sales in 2012 were 1.56 billion; sales in 2023 were 1.43 billion, with 53.32% being Android. Android alone has more sales than the popular desktop operating system Microsoft Windows, and smartphone use (even without tablets) outnumbers desktop use.
Mobile devices, with mobile communications abilities (for example, smartphones), contain two mobile operating systems. The main user-facing software platform is supplemented by a second low-level proprietary real-time operating system which operates the radio and other hardware. Research has shown that these low-level systems may contain a range of security vulnerabilities permitting malicious base stations to gain high levels of control over the mobile device.
Mobile operating systems have had the most use of any operating system since 2017 (measured by web use).
Timeline
Mobile operating system milestones mirror the development of mobile phones, PDAs, and smartphones:
Pre-1990
1990–2010 – Mobile phones use embedded systems to control operation.
1993–1999
1993
April – PenPoint OS by GO Corp. becomes available on the AT&T EO Personal Communicator.
August – Apple launches Newton OS running on their Newton series of portable computers.
1994
March – Magic Cap OS by General Magic is first introduced on the Sony Magic Link PDA.
August – The first smartphone, the IBM Simon, has a touchscreen, email, and PDA features.
1996
March – The Palm Pilot 1000 personal digital assistant is introduced with the Palm OS mobile operating system.
August – Nokia releases the Nokia 9000 Communicator running an integrated system based on the PEN/GEOS 3.0 OS from Geoworks.
1997 – EPOC32 first appears on the Psion Series 5 PDA. Release 6 of EPOC32 will later be renamed to Symbian OS.
1998 – Symbian Ltd. is formed as a joint venture by Psion, Ericsson, Motorola, and Nokia, Psion's EPOC32 OS becomes Symbian's EPOC operating system, and is later renamed to Symbian OS. Symbian's OS was used by those companies and several other major mobile phone brands, but especially Nokia.
1999
June – Qualcomm's pdQ becomes the first smartphone with Palm OS.
October – Nokia S40 Platform is officially introduced along with the Nokia 7110, the first phone with T9 predictive text input and a Wireless Application Protocol (WAP) browser for accessing specially formatted Internet data.
2000s
2000 – The Ericsson R380 is released with EPOC32 Release 5, marking the first use on a phone of what would become known as Symbian OS (as of Release 6).
2001
June – Nokia's Symbian Series 80 platform is first released on the Nokia 9210 Communicator. This is the first phone running an OS branded as Symbian, and the first phone using that OS that allows user installation of additional software.
September – Qualcomm's Binary Runtime Environment for Wireless (BREW) platform on their REX real-time operating system (RTOS) is first released on the Kyocera QCP-3035.
2002
March
BlackBerry releases its first smartphone, running Java 2 Micro Edition (J2ME).
UIQ is first released, at v2.0, on Symbian OS, and becomes available later in the year on the Sony Ericsson P800, the successor to the Ericsson R380.
June
Microsoft's first Windows CE (Pocket PC) smartphones are introduced.
Nokia's Symbian Series 60 (S60) platform is released with the Nokia 7650, Nokia's first phone with a camera and Multimedia Messaging Service (MMS). S60 would form the basis of the OS on most of Nokia's smartphones until 2011, when they adopted Microsoft's Windows Phone 7. S60 was also used on some phones from Samsung and others, and later by Sony Ericsson after the consolidation of some Symbian UI variants in 2008.
October – The Danger Hiptop (T-Mobile Sidekick in U.S.) is first released by Danger, Inc., running DangerOS.
2003 – Motorola introduces the first Linux-based cellphone, the Motorola A760, based on the MontaVista Linux distribution.
2005
May – Microsoft announces Windows Mobile 5.0.
November – Nokia introduces Maemo OS on the first, small Internet tablet, the N770, with a 4.13" screen.
2007
January – Apple's iPhone with iOS (named "iPhone OS" for its first three releases) is introduced as a "widescreen iPod", "mobile phone", and "Internet communicator".
February – Microsoft announces Windows Mobile 6.0.
May – Palm announces the Palm Foleo, a "Mobile Companion" device similar to a subnotebook computer, running a modified Linux kernel and relying on a companion Palm Treo smartphone to send and retrieve mail, as well as provide data connectivity when away from WiFi. Palm canceled Foleo development on September 4, 2007, after facing public criticism.
June – The first iPhone is released in the United States.
November – Open Handset Alliance (OHA) is established, led by Google with 34 members (HTC, Sony, Dell, Intel, Motorola, Samsung, LG, etc.)
2008
February – LiMo Foundation announces the first phones running the LiMo mobile Linux distribution, from Motorola, NEC, Panasonic Mobile, and Samsung, released later in the year. The LiMo Foundation later became the Tizen Association and LiMo was subsumed by Tizen.
June – Nokia becomes the sole owner of Symbian Ltd. The Symbian Foundation was then formed to co-ordinate the future development of the Symbian platform among the corporations using it, in a manner similar to the Open Handset Alliance with Android. Nokia remained the major contributor to Symbian's code.
July – Apple releases iPhone OS 2 with the iPhone 3G, making available Apple's App Store.
October – OHA releases Android (based on Linux kernel) 1.0 with the HTC Dream (T-Mobile G1) as the first Android phone.
November – Symbian^1, the Symbian Foundation's touch-specific S60-based platform (equivalent to S60 5th edition) is first released on Nokia's first touchscreen Symbian phone, the Nokia 5800 XpressMusic, with a resistive screen and a stylus. Symbian^1 being derived from S60 meant that support for UIQ disappeared and no further devices using UIQ were released.
2009
January
Intel announces Moblin 2, specifically created for netbooks that run the company's Atom processor. In April 2009 Intel turned Moblin over to the Linux Foundation.
Palm introduces webOS with the Palm Pre (released in June). The new OS is not backward-compatible with their previous Palm OS.
February
Palm announces that no further devices with Palm OS are going to be released by the company. (The last was the Palm Centro, released October 14, 2007.)
Microsoft announces Windows Mobile 6.5, an "unwanted stopgap" update to Windows Mobile 6.1 intended to bridge the gap between version 6.1 and the then yet-to-be released Windows Mobile 7 (later canceled in favor of Windows Phone 7). The first devices running it appeared in late October 2009.
May – DangerOS 5.0 becomes available, based on NetBSD.
June – Apple releases iPhone OS 3 with the iPhone 3GS.
November – Nokia releases the Nokia N900, its first and only smartphone running the Maemo OS intended for "handheld computers...with voice capability", while stating that they remain focused on Symbian S60 as their smartphone OS. (Nokia had previously released three Mobile Internet devices running Maemo, without cellular network connectivity.)
2010s
2010
February
MeeGo is announced, a mobile Linux distribution merging Maemo from Nokia and Moblin from Intel and Linux Foundation, to be hosted by Linux Foundation. MeeGo is not backward-compatible with any previous operating system.
Samsung introduces the Bada OS and shows the first Bada smartphone, the Samsung S8500. It was later released in May 2010.
April
Apple releases the iPad (first generation) with iPhone OS 3.2. This is the first version of the OS to support tablet computers. For its next major version (4.0) iPhone OS will be renamed iOS.
HP acquires Palm in order to use webOS in multiple new products, including smartphones, tablets, and printers, later stating their intent to use it as the universal platform for all their devices.
May – The Microsoft Kin phone line, with KIN OS (based on Windows CE and a "close cousin" to Windows Phone), becomes available.
June – Apple releases iOS 4, renamed from iPhone OS, with the iPhone 4.
July – Microsoft Kin phones and KIN OS are discontinued.
September
Apple releases a variant of iOS powering the new 2nd generation Apple TV.
Symbian^3 is first released on the Nokia N8. This would be Nokia's last flagship device running Symbian (though not their last Symbian phone), before switching to Windows Phone 7 for future flagship phones.
The Danger Hiptop line and DangerOS are discontinued as a result of Microsoft's acquisition of Danger, Inc. in 2008.
November
Nokia assumes full control over Symbian as the Symbian Foundation disintegrates.
Windows Phone OS is released on Windows Phone 7 phones by HTC, LG, Samsung, and Dell. The new OS is not backward-compatible with the prior Windows Mobile OS.
2011
February
Android 3.0 (Honeycomb), the first version to officially support tablet computers, is released on the Motorola Xoom.
Nokia abandons the Symbian OS and announces that it would use Microsoft's Windows Phone 7 as its primary smartphone platform, while Symbian would be gradually wound down.
April – BlackBerry Tablet OS, based on QNX Neutrino is released on the BlackBerry PlayBook.
July
Mozilla announces their Boot to Gecko project (later named Firefox OS) to develop an OS for handheld devices emphasizing standards-based Web technologies, similar to webOS.
webOS 3.0, the first version to support tablet computers, is released on the HP TouchPad.
August – HP announces that webOS device development and production lines would be halted. The last HP webOS version, 3.0.5, is released on January 12, 2012.
September
MeeGo is introduced with the limited-release Nokia N9, Nokia's first and only consumer device to use the OS. (A small number of the Nokia N950, a MeeGo phone available only to developers, were released in mid-2011.)
After Nokia's abandonment of MeeGo, Intel and the Linux Foundation announce a partnership with Samsung to launch Tizen, shifting their focus from MeeGo (Intel and Linux Foundation) and Bada (Samsung) during 2011 and 2012.
October
Apple releases iOS 5 with the iPhone 4S, integrating the Siri voice assistant.
The Mer project is announced, based on an ultra-portable core for building products, composed of Linux, HTML5, QML, and JavaScript, which is derived from the MeeGo codebase.
November – Fire OS, a fork of the Android operating system, is released by Amazon.com on the Kindle Fire tablet.
2012
May – Nokia releases the Nokia 808 PureView, later confirmed (in January 2013) to be the last Symbian smartphone. This phone was followed by a single last Symbian software update, "Nokia Belle, Feature Pack 2", later in 2012.
July
Finnish start-up Jolla, formed by former Nokia employees, announces that MeeGo's community-driven successor Mer would be the basis of their new Sailfish smartphone OS.
Mozilla announces that the project formerly named Boot to Gecko (which is built atop an Android Linux kernel using Android drivers and services; however it uses no Java-like code of Android) is now Firefox OS (since discontinued) and has several handset OEMs on board.
August – Samsung announces they will not ship further phones using their Bada OS, instead focusing on Windows Phone 8 and Android.
September – Apple releases iOS 6 with the iPhone 5.
2013
January – BlackBerry releases their new operating system for smartphones, BlackBerry 10, with their Q10 and Z10 smartphones. BlackBerry 10 is not backward-compatible with the BlackBerry OS used on their previous smartphones.
February – HP sells webOS to LG.
September – Apple releases iOS 7 with the iPhone 5S and iPhone 5C.
October
Canonical announces Ubuntu Touch, a version of the Linux distribution expressly designed for smartphones. The OS is built on the Android Linux kernel, using Android drivers and services, but does not use any of the Java-like code of Android.
Google releases Android KitKat 4.4.
November – Jolla releases Sailfish OS on the Jolla smartphone.
2014
February
Microsoft releases Windows Phone 8.1
Nokia introduces their Nokia X platform OS as an Android 4.1.2 Jelly Bean fork on the Nokia X family of smartphones. Similar to Amazon.com's Fire OS, it replaces Google's apps and services with ones from Nokia (such as HERE Maps, Nokia Xpress and MixRadio, and Nokia's own app store) and Microsoft (such as Skype and Outlook), with a user interface that mimics the Windows Phone UI. After the acquisition of Nokia's devices unit, Microsoft announced in July 2014 that no more Nokia X smartphones would be introduced, marking the end of the platform just a few months later.
August – The Samsung SM-Z9005 Z is the first phone released running Tizen, with v2.2.1 of the OS.
September
Apple releases iOS 8 with the iPhone 6 and 6 Plus.
BlackBerry releases BlackBerry 10 version 10.3 with integration with the Amazon Appstore
November – Google releases Android 5.0 "Lollipop"
2015
February – Google releases Android 5.1 "Lollipop".
April
LG releases the LG Watch Urbane LTE smartwatch running "LG Wearable Platform OS" based on webOS. This is a version of their Android Wear OS-based LG Watch Urbane, with added LTE connectivity.
watchOS, based on iOS, is released by Apple with the Apple Watch.
September
Apple releases iOS 9 with the iPhone 6S and 6S Plus, iPad Pro, and iPad Mini 4, plus watchOS 2. tvOS 9 is also made distinct from iOS, with its own App Store, launching with Apple TV 4th generation.
Google releases Android 6.0 "Marshmallow".
October – BlackBerry announces that there are no plans to release new APIs and software development kits for BlackBerry 10, and future updates would focus on security and privacy enhancements only.
November – Microsoft releases Windows 10 Mobile.
2016
February – Microsoft releases the Lumia 650, their last Windows 10 Mobile phone before discontinuing all mobile hardware production the following year.
July – The BlackBerry Classic, the last device to date running a BlackBerry OS, is discontinued. While BlackBerry Limited claimed to still be committed to the BlackBerry 10 operating system, they have shipped only Android devices since releasing the BlackBerry Priv, their first Android smartphone, in November 2015.
August
Google posts the Fuchsia source code on GitHub.
Google releases Android 7.0 "Nougat".
September – Apple releases iOS 10 with the iPhone 7 and 7 Plus, and watchOS 3 with the Apple Watch Series 1 and 2.
November
Tizen releases Tizen 3.0.
BlackBerry releases BlackBerry 10 version 10.3.3.
2017
April
Development of Ubuntu Touch is transferred from Canonical Ltd. to the UBports Foundation
Samsung officially launches Android-based Samsung Experience custom firmware starting with version 8.1 on Samsung Galaxy S8.
May
Samsung announces Tizen 4.0 at Tizen Developer Conference 2017.
August
Google releases Android 8.0 "Oreo".
September
Apple releases iOS 11 with the iPhone 8 and 8 Plus and iPhone X, and watchOS 4 with the Apple Watch Series 3.
October
Microsoft announces that Windows 10 Mobile development is going into maintenance mode only, ending the release of any new features or functionality due to lack of market penetration and resultant lack of interest from app developers, and releases the final major update to it, the "Fall Creators Update."
Cherry Mobile releases CherryOS, based on Android.
2018
February
Samsung releases Samsung Experience 9.0 based on Android "Oreo" 8.0 globally to Samsung Galaxy S8 and S8+.
March
Google and partners officially launch Android Go (based on Android "Oreo" 8.1 but tailored for low-end devices) with the Nokia 1, Alcatel 1X, ZTE Tempo Go, General Mobile 8 Go, Micromax Bharat Go and Lava Z50.
Google releases Android "9" as a developer preview.
April
Microsoft releases Windows 10 Version 1803 "April 2018 Update".
May
Huawei releases LiteOS version 2.1.
August
Google releases Android 9.0 "Pie".
UBports releases Ubuntu Touch OTA-14, rebasing the OS on Canonical's long-term support release Ubuntu 16.04 LTS "Xenial Xerus".
Xiaomi officially introduces MIUI for POCO for their Poco series smartphones.
Samsung officially introduces Tizen 4.0 with the release of Samsung Galaxy Watch series.
September
Apple releases iOS 12 with the iPhone XS and XS Max, and watchOS 5 with Apple Watch Series 4.
Huawei releases EMUI 9.0.
October
Microsoft releases Windows 10 Version 1809 "October 2018 Update".
November
Samsung announces the One UI as the latest version of the Samsung Experience UI.
Amazon releases Fire OS 6 to supported Fire HD devices.
2019
January
Microsoft announces that support for Windows 10 Mobile would end on December 10, 2019, and that Windows 10 Mobile users should migrate to iOS or Android phones.
June
Apple announces iOS 13, watchOS 6, and iPadOS as a distinct variant of iOS.
August
Huawei officially announces HarmonyOS
September
Apple releases iOS 13 with the iPhone 11 series, watchOS 6 with Apple Watch Series 5, and iPadOS with the 7th generation iPad.
Google releases Android 10.
The Librem 5, the first phone running PureOS, is released.
October
Samsung announces One UI 2.0 as the latest version of their Galaxy smartphone and smartwatch UI.
November
Microsoft releases the Windows 10 November 2019 Update.
Current software platforms
These operating systems often run atop baseband or other real-time operating systems that handle hardware aspects of the phone.
Android
Android (based on the modified Linux kernel) is a mobile operating system developed by Open Handset Alliance. The base system is open-source (and only the kernel copyleft), but the apps and drivers which provide functionality are increasingly becoming closed-source. Besides having the largest installed base worldwide on smartphones, it is also the most popular operating system for general purpose computers (a category that includes desktop computers and mobile devices), even though Android is not a popular operating system for regular (desktop) personal computers (PCs). Although the Android operating system is free and open-source software, in devices sold, much of the software bundled with it (including Google apps and vendor-installed software) is proprietary software and closed-source.
Android's releases before 2.0 (1.0, 1.5, 1.6) were used exclusively on mobile phones. Android 2.x releases were mostly used for mobile phones but also some tablets. Android 3.0 was a tablet-oriented release and does not officially run on mobile phones. Both phone and tablet compatibility were merged with Android 4.0. The current Android version is Android 14, released on October 4, 2023.
Android One
Android One, a successor to Google Nexus, is a software experience that runs on the unmodified Android operating system. Unlike most of the "stock" Androids running on the market, the Android One User Interface (UI) closely resembles the Google Pixel UI, due to Android One being a software experience developed by Google and distributed to partners such as Nokia Mobile (HMD) and Xiaomi. Thus, the UI is intended to be as clean as possible. Original equipment manufacturer (OEM) partners may tweak or add additional apps such as cameras to the firmware, but most of the apps are handled proprietarily by Google. Operating system updates are handled by Google and internally tested by OEMs before being distributed via an OTA update to end users.
Current Android One version list
Android One versions follow those of the Android Open Source Project (AOSP), starting from Android 5.0 "Lollipop"
BharOS
BharOS is a mobile operating system in India. It is an Indian government-funded project to develop a free and open-source operating system (OS) for use in government and public systems.
BlackBerry Secure
BlackBerry Secure is an operating system developed by BlackBerry, based on the Android Open Source Project (AOSP). BlackBerry officially announced the name for their Android-based front-end touch interface in August 2017, before which BlackBerry Secure was running on BlackBerry brand devices such as the BlackBerry Priv, DTEK 50/60 and BlackBerry KeyOne. BlackBerry plans to license BlackBerry Secure to other OEMs.
Current BlackBerry Secure version list
BlackBerry Secure version 1.x – based on Android "Marshmallow" 6.x and "Nougat" 7.x
CalyxOS
CalyxOS is an operating system for smartphones based on Android with mostly free and open-source software. It is produced by the Calyx Institute as part of its mission to "defend online privacy, security and accessibility."
ColorOS
ColorOS is a custom front-end touch interface based on the Android Open Source Project (AOSP) and developed by OPPO Electronics Corp. In 2016, OPPO officially released ColorOS with every OPPO and Realme device and released an official ROM for the OnePlus One. Future Realme devices will have their own version of ColorOS.
Current ColorOS version list
ColorOS 1.x – based on Android "Jelly Bean" 4.2.x and "KitKat" 4.4
ColorOS 2.x – based on Android "KitKat" 4.4 and "Lollipop" 5
ColorOS 3.x – based on Android "Lollipop" 5, "Marshmallow" 6, and "Nougat" 7
ColorOS 5.x – based on Android "Oreo" 8
ColorOS 6.x – based on Android "Pie" 9
ColorOS 7.x – based on Android 10
ColorOS 11.x – based on Android 11
ColorOS 12.x – based on Android 11 and 12
ColorOS 13.x – based on Android 13
ColorOS 14.x – based on Android 14
ColorOS 15.x – based on Android 15
CopperheadOS
CopperheadOS is a security-hardened version of Android.
DivestOS
DivestOS is a soft fork of LineageOS. It features monthly updates, a focus on free and open-source software, deblobbing, security and privacy hardening, and the F-Droid app store.
EMUI
Huawei EMUI is the front-end touch interface developed by Huawei Technologies Co. Ltd. and its sub-brand Honor, based on Google's Android Open Source Project (AOSP). EMUI is preinstalled on most Huawei and Honor devices. While it was based on the open-source Android operating system, it consists of closed-source proprietary software. Since the US sanctions, it has been a fork of Android similar to Fire OS rather than a compatible implementation.
In mainland China, and internationally since 2020 due to U.S. sanctions, EMUI devices use Huawei Mobile Services such as Huawei AppGallery instead of Google Mobile Services. Aside from being based on Android, Huawei also bundles the HarmonyOS microkernel inside Android in the latest EMUI update, where it handles other processes including security authentication such as fingerprint authentication.
/e/
/e/ is an operating system forked from the source code of LineageOS (based on Android). /e/ targets Android smartphones and uses MicroG as a replacement for Google Play Services. /e/OS is not completely open source software, because it comes with the proprietary Magic Earth 'Maps' app.
Fire OS
Amazon Fire OS is a mobile operating system forked from Android and produced by Amazon for its Fire range of tablets, Echo and Echo Dot, and other content delivery devices like Fire TV (previously for their Fire Phone). Fire OS primarily centers on content consumption, with a customized user interface and heavy ties to content available from Amazon's own storefronts and services.
Current Fire OS version list
Fire OS 1.x
Fire OS 2.x
Fire OS 3.x
Fire OS 4.x
Fire OS 5.x
Fire OS 6.x
Fire OS 7.x
Flyme OS
Flyme OS is an operating system developed by Meizu Technology Co., Ltd., based on the open-source Android Open Source Project (AOSP). Flyme OS is mainly installed on Meizu smartphones such as the MX series; however, it also has official ROM support for a few other Android devices.
Current Flyme OS version list
Flyme OS 1.x.x – based on Android "Ice Cream Sandwich" 4.0.3, initial release
Flyme OS 2.x.x – based on Android "Jelly Bean" 4.1.x – 4.2.x
Flyme OS 3.x.x – based on Android "Jelly Bean" 4.3.x
Flyme OS 4.x.x – based on Android "KitKat" 4.4.x
Flyme OS 5.x.x – based on Android "Lollipop" 5.0.x – 5.1.x
Flyme OS 6.x.x – based on Android "Nougat" 7.x, "Marshmallow" 6.0.x and "Lollipop" 5.0.x – 5.1.x for old devices
Flyme OS 7.x.x – based on Android "Pie" 9, "Oreo" 8.x and "Nougat" 7.x
Flyme OS 8.x.x – based on Android 10, "Pie" 9, "Oreo" 8.x and "Nougat" 7.x
Flyme OS 9.x.x – based on Android 11 and 10
Flyme OS 10.x.x – based on Android 13
Flyme AIOS (11.x.x) – based on Android 14
FuntouchOS
FuntouchOS is a custom user interface developed by Vivo that is based on the Android Open Source Project. FuntouchOS 10.5 had a redesigned UI that resembled stock Android.
Current FuntouchOS version list
FuntouchOS 2.x – based on Android "KitKat" 4.4, Android "Lollipop" 5 and Android "Marshmallow" 6, initial release
FuntouchOS 3.x – based on Android "Marshmallow" 6 and Android "Nougat" 7
FuntouchOS 4.x – based on Android "Oreo" 8
FuntouchOS 9.x – based on Android "Pie" 9
FuntouchOS 10.x – based on Android "Pie" 9 and Android 10
FuntouchOS 10.5 – based on Android 10 and Android 11, redesigned UI
FuntouchOS 11.x – based on Android 10 and Android 11
FuntouchOS 12.x – based on Android 11 and Android 12
FuntouchOS 13 – based on Android 13
FuntouchOS 14 – based on Android 14
FuntouchOS 15 – based on Android 15
iQOO UI
iQOO UI was a custom user interface based on Vivo's FuntouchOS. The UI largely resembled its predecessor, with a customized layer on top of FuntouchOS. It was installed on iQOO smartphones sold in China and was later succeeded by OriginOS.
GrapheneOS
GrapheneOS is a variant of Android for Pixel hardware.
Hello UI
Hello UI (formerly called My UI and My UX) is a custom Android UI developed by Motorola for their devices. It used to look like the stock Android user experience up until My UI 3.x.
Current Hello UI version list
My UX 1.x – based on Android 10, initial release
My UI 2.x – based on Android 11
My UI 3.x – based on Android 12
My UI 4.x – based on Android 12
My UI 5.x – based on Android 13
Hello UI – based on Android 14
HiOS
HiOS is an Android-based operating system developed by Hong Kong mobile phone manufacturer Tecno Mobile, a subsidiary of Transsion Holdings, exclusively for their smartphones. HiOS allows for a wide range of user customization without requiring the user to root the device. The operating system is also bundled with utility applications that allow users to free up memory, freeze applications, and limit applications' access to data, among others. HiOS comes with features like Launcher, Private Safe, Split Screen and Lockscreen Notification.
Current HiOS version list
HiOS 1.x – based on Android "Marshmallow" 6
HiOS 2.x – based on Android "Nougat" 7
HiOS 3.x – based on Android "Nougat" 7
HiOS 4.x – based on Android "Oreo" 8
HiOS 5.x – based on Android "Pie" 9
HiOS 6.x – based on Android 10
HiOS 7.x – based on Android 10
HiOS 7.6.x – based on Android 11
HiOS 8.x – based on Android 11
HTC Sense
HTC Sense is a software suite developed by HTC, used primarily on the company's Android-based devices. Serving as a successor to HTC's TouchFLO 3D software for Windows Mobile, Sense modifies many aspects of the Android user experience, incorporating added features (such as an altered home screen and keyboard), widgets, HTC-developed applications, and redesigned applications. The first device with Sense, the HTC Hero, was released in 2009.
HyperOS
Xiaomi HyperOS or HyperOS (formerly called MIUI), developed by the Chinese electronics company Xiaomi, is a mobile operating system based on the Android Open Source Project (AOSP). It is mostly found on Xiaomi smartphones and tablets such as the Xiaomi (formerly Mi) and Redmi series. However, MIUI also had official ROM support for a few other Android devices. Although HyperOS is based on AOSP, which is open-source, it consists of closed-source proprietary software.
MIUI for POCO
MIUI for POCO was a specific version of MIUI developed for the Xiaomi sub-brand (currently an independent brand) POCO. The overall experience of the "skin" was similar to that of standard MIUI, except that early releases of MIUI for POCO had an app drawer and allowed third-party Android icon customization; the later MIUI for POCO shared all the common experience of standard MIUI, apart from the icons and the POCO Launcher in place of the stock MIUI Launcher. In 2024 MIUI for POCO was replaced by Xiaomi HyperOS.
Indus OS
Indus OS is a custom mobile operating system based on the Android Open Source Project (AOSP). It is developed by the Indus OS team based in India. As of 2018, Indus OS was available on Micromax, Intex, Karbonn, and other Indian smartphone brands.
Current Indus OS version list
Firstouch OS (based on Android "Lollipop" 5.0)
Indus OS 2.0 (based on Android "Marshmallow" 6.0)
Indus OS 3.0 (based on Android "Nougat" 7.0.1)
LG UX
LG UX (formerly Optimus UI) was a front-end touch interface developed by LG Electronics and partners, featuring a full touch user interface. It was not an operating system. LG UX was used internally by LG for sophisticated feature phones and tablet computers, and was not available for licensing by external parties.
Optimus UI 2, based on Android 4.1.2, has been released on the Optimus K II and the Optimus Neo 3. It features a more refined user interface compared to the prior version based on Android 4.1.1, along with new functionalities such as voice shutter and quick memo.
LineageOS
Lineage Android Distribution is a custom mobile operating system based on the Android Open Source Project (AOSP). It serves as the successor to the highly popular custom ROM, CyanogenMod, from which it was forked in December 2016 when Cyanogen Inc. announced it was discontinuing development and shut down the infrastructure behind the project. Since Cyanogen Inc. retained the rights to the Cyanogen name, the project rebranded its fork as LineageOS.
Similar to CyanogenMod, it does not include any proprietary apps unless the user installs them. It allows Android users who can no longer obtain update support from their manufacturer to continue updating their OS to the latest version based on official releases from Google AOSP, with heavy theme customization.
MagicOS
"MagicOS" (formerly known as Magic UI and Magic Live) is a front-end touch interface developed by Honor as a subsidiary of Huawei Technologies Co. Ltd before Honor became an independent company.
Magic UI is based on Huawei EMUI, which is based on the Android Open Source Project (AOSP). The overall user interface looks almost identical to EMUI, even after the separation. While it was based on the open-source Android operating system, it consists of closed-source proprietary software.
Due to sanctions imposed by the US on Huawei, new devices released by both Huawei and Honor are no longer allowed to include Google Mobile Services. To allow Honor to regain access to Google services, Huawei sold off Honor to become an independent company, thereby allowing them to pre-install Google Mobile Services on their latest devices.
Magic UI 1.x – based on EMUI 8 with Android "Oreo" 8 (Initial released)
Magic UI 2.x – based on EMUI 9 with Android "Pie" 9 (Minor UI update)
Magic UI 3.x – based on EMUI 10 with Android 10 (Minor UI update)
Magic UI 4.x – based on EMUI 11 with Android 10 and Android 11 (Minor UI update)
Magic UI 5.x – based on EMUI 11 with Android 10 and Android 11 (Minor UI update)
Magic UI 6.x – based on EMUI 12 with Android 12 (Major UI redesigned)
Magic OS 7.x – based on EMUI 12 with Android 13 (Minor UI redesigned)
Magic OS 8.x – based on Android 14 (Minor UI redesigned)
MyOS
MyOS (formerly called MiFavor) is a custom Android UI developed by ZTE for their flagship smartphones and nubia smartphones. MyOS is based on the Android Open Source Project (AOSP). This is a redesign from their previous custom Android UI, MiFavor.
Current MyOS version list
MiFavor 1.x – based on Android "KitKat" 4.4.x, initial release
MiFavor 2.x – based on Android "Lollipop" 5.0.x – 5.1.x, redesigned UI
MiFavor 3.x – based on Android "Marshmallow" 6.x, redesigned UI
MiFavor 4.x – based on Android "Nougat" 7.x, redesigned UI
MiFavor 5.x – based on Android "Oreo" 8.x, redesigned UI
MiFavor 9.x – based on Android "Pie" 9.0, redesigned UI
MiFavor 10.x – based on Android 10, redesigned UI
MyOS 11.x – based on Android 11, initial release migrate from MiFavor
MyOS 12.x – based on Android 12, redesigned UI
MyOS 13.x – based on Android 13
MyOS 14.x – based on Android 14
Nothing OS
Nothing OS is a custom Android UI developed by Nothing for their Nothing Phone (1). Nothing OS's interface design is nearly identical to the stock Android and Pixel UI experience, aside from its custom font and widgets, which are based on a dot design.
Current Nothing OS version list
Nothing OS 1 – based on Android 12, initial release
Nothing OS 1.5 – based on Android 13
Nothing OS 2 – based on Android 13, minor UI redesigned
Nothing OS 2.5-2.6 – based on Android 14
Nothing OS 3.0 – based on Android 15
nubia UI
nubia UI was a custom Android UI developed by ZTE and nubia for their smartphones. nubia UI was based on the Android Open Source Project (AOSP).
Current nubia UI version list
nubia UI 6.x – based on Android 8 "Oreo"
nubia UI 7.x – based on Android 9 "Pie"
nubia UI 8.x – based on Android 10
nubia UI 9.x – based on Android 11
One UI
One UI (formerly called TouchWiz and Samsung Experience) is a front-end touch interface developed by Samsung Electronics in 2008 with partners, featuring a full touch user interface. It is not a true operating system, but a user experience. Samsung Experience is used internally by Samsung for smartphones, feature phones and tablet computers, and is not available for licensing by external parties. The Android version of Samsung Experience also came with Samsung-made apps preloaded until the Galaxy S6, which removed all Samsung pre-loaded apps except Samsung Galaxy Store (formerly Galaxy Apps) to save storage space following the removal of its microSD slot. With the release of the Samsung Galaxy S8 and S8+, Samsung Experience 8.1 was preinstalled with a new function known as Samsung DeX. Similar in concept to Microsoft Continuum, Samsung DeX allows high-end Galaxy devices such as the S8/S8+ or Note 8 to connect to a docking station, which extends the device to desktop-like functionality with a keyboard, mouse, and monitor. Samsung also announced "Linux on Galaxy", which allows users to run a standard Linux distribution on the DeX platform.
Previous Samsung Android UI version list
TouchWiz 3.x (based on Android 2.1 "Éclair" and Android 2.2 "Froyo") (Initial release for Android UI)
TouchWiz 4.x (based on Android 2.3 "Gingerbread" and Android 3.0 "Honeycomb") (Minor UI update)
TouchWiz Nature UX (based on Android 4.0 "Ice Cream Sandwich") (Minor UI update)
TouchWiz Nature UX 2.x (based on Android 4.2 "Jellybean") (Minor UI update)
TouchWiz Nature UX 3.x (based on Android 4.4 "KitKat") (Minor UI update)
TouchWiz Nature UX 4.x (based on Android 5 "Lollipop") (Minor UI update)
TouchWiz Nature UX 5.x (based on Android 5 "Lollipop") (Major UI update)
TouchWiz Nature UX 6.x (based on Android 6 "Marshmallow") (Minor UI update)
TouchWiz Grace UX (based on Android 6 "Marshmallow") (Major UI update)
Samsung Experience 8.x (based on Android 7 "Nougat") (Initial release migrate from TouchWiz)
Samsung Experience 9.x (based on Android 8 "Oreo") (Minor update)
Samsung Experience 10.x (based on Android 9 "Pie") (Minor and last update before the One UI redesign)
Current One UI version list
One UI 1.x (based on Android 9 "Pie") (Initial release)
One UI 2.x (based on Android 10) (Minor UI update)
One UI 3.x (based on Android 11) (Minor UI update)
One UI 4.x (based on Android 12) (Minor UI update)
One UI 5.x (based on Android 13) (Minor UI update)
One UI 6.x (based on Android 14) (Major UI update)
Origin OS
Origin OS is a custom user interface developed by Vivo that is based on Android. It is a redesigned skin of Funtouch OS. It is currently only available in China but may someday be released globally.
Current Origin OS version list
Origin OS 1.0 – based on Android 10 and Android 11 (initial release)
Origin OS Ocean – based on Android 12
Origin OS HD – based on Android 12 (only used in Vivo Pad)
Origin OS 3 – based on Android 13
Origin OS 4 – based on Android 14
Origin OS 5 – based on Android 15
OxygenOS
OxygenOS is based on the open-source Android Open Source Project (AOSP) and is developed by OnePlus to replace Cyanogen OS on OnePlus devices such as the OnePlus One. It is preinstalled on the OnePlus 2, OnePlus X, OnePlus 3, OnePlus 3T, OnePlus 5, OnePlus 5T, and OnePlus 6. As stated by OnePlus, OxygenOS is focused on stabilizing and maintaining stock Android functionality like that found on Nexus devices. It consists mainly of Google apps and minor UI customization to maintain the sleekness of stock Android.
Current OxygenOS version list
Oxygen OS 1.0.x – based on Android 5.0.x "Lollipop" (initial release)
Oxygen OS 2.0.x – based on Android 5.1.x "Lollipop" (overall maintenance update)
Oxygen OS 3.0.x – based on Android 6.0 "Marshmallow" (major Android update)
Oxygen OS 3.1.x – based on Android 6.0.1 "Marshmallow" (minor maintenance update)
Oxygen OS 3.2.x – based on Android 6.0.1 "Marshmallow" (major Android update)
Oxygen OS 4.x.x – based on Android 7.x "Nougat" (major Android update)
Oxygen OS 5.x.x – based on Android 8.x "Oreo" (major Android update)
Oxygen OS 9.x.x – based on Android 9 "Pie" (major Android update)
Oxygen OS 10.x.x – based on Android 10 (major Android update)
Oxygen OS 11.0.x-11.2.x – based on Android 11 (major Android update)
Oxygen OS 11.3.x – based on ColorOS – based on Android 11 (minor update)
Oxygen OS 12.x.x – based on ColorOS 12.x – based on Android 12 (major Android update)
Oxygen OS 13.x.x – based on ColorOS 13.x – based on Android 13 (major Android update)
Oxygen OS 14.x.x – based on ColorOS 14.x – based on Android 14 (major Android update)
Oxygen OS 15.x.x – based on ColorOS 15.x – based on Android 15 (major Android update)
Pixel UI (Pixel Launcher)
Google Pixel UI or Pixel Launcher is developed by Google and based on the open-source Android system. Unlike Nexus phones, where Google shipped with stock Android, the UI that came with first-generation Pixel phones was slightly modified. As part of the Google Pixel software, the Pixel UI and its home launcher are closed-source and proprietary, so it is only available on Pixel family devices. However, third-party mods allow non-Pixel smartphones to install Pixel Launcher with Google Now feed integration.
Current Google Pixel Launcher version list
Pixel Launcher – "7.1.1" (based on Android 7.x "Nougat") (Initial release)
Pixel Launcher – "8.1.0" (based on Android 8.x "Oreo") (Minor UI update)
Pixel Launcher – "9.0" (based on Android 9 "Pie") (Major UI update)
Pixel Launcher – "10.0" (based on Android 10) (Moderate UI update that support themes)
Pixel Launcher – "11.0" (based on Android 11) (Minor UI update)
Pixel Launcher – "12.0" (based on Android 12) (Major UI update)
Pixel Launcher – "13.0" (based on Android 13) (Minor UI update)
Pixel Launcher – "14.0" (based on Android 14) (Minor UI update)
Pixel Launcher – "15.0" (based on Android 15) (Minor UI update)
realme UI
realme UI is a mobile operating system developed by Realme based on OPPO's ColorOS, which is itself based on the Android Open Source Project (AOSP). The UI mostly resembles its predecessor, but with a custom layer on top of ColorOS to match Realme's target audience.
Current realme UI version list
realme UI 1.0 – based on ColorOS 7.0 – Android 10 – Initial Release
realme UI 2.0 / R Edition – based on ColorOS 11.0 – Android 11
realme UI 3.0 / S Edition – based on ColorOS 12.0 – Android 12
realme UI 4.0 / T Edition – based on ColorOS 13.0 – Android 13
realme UI 5.0 – based on ColorOS 14.0 – Android 14
realme UI 6.0 – based on ColorOS 15.0 – Android 15
realme UI R edition
realme UI R edition is a custom Android skin that Realme developed for their lower-end "C" and Narzo series device lines. It is based on Android Go, so the overall experience is tuned down to allow for a smoother experience on budget Realme devices.
Red Magic OS
Red Magic OS is a mobile operating system developed by ZTE and Nubia for their Red Magic devices.
Current Red Magic OS version list
Red Magic OS 1.x – based on Android 8 "Oreo", initial release
Red Magic OS 2.x – based on Android 9 "Pie", redesigned UI
Red Magic OS 3.x – based on Android 10, redesigned UI
Red Magic OS 4.x – based on Android 11, redesigned UI
Red Magic OS 5.x – based on Android 12, redesigned UI
Red Magic OS 6.x – based on Android 13, redesigned UI
Red Magic OS 9.x – based on Android 14, redesigned UI
Replicant OS
Replicant is a custom mobile operating system based on Android, with all proprietary drivers and closed-source software removed.
TCL UI
TCL UI is a custom user interface developed by TCL Technology for their in-house smartphone series. The OS is based on the Android Open Source Project (AOSP).
Current TCL UI version list
TCL UI 1.x – Based on Android 9 "Pie" and Android 10 – Initial Release
TCL UI 2.x – Based on Android 10 – Minor UI upgrade
TCL UI 3.x – Based on Android 11 – Minor UI upgrade
TCL UI 4.x – Based on Android 12 – Minor UI upgrade
TCL UI 5.x – Based on Android 13 – Minor UI upgrade
TCL UI 7.x – Based on Android 14 – Minor UI upgrade
VOS
VOS is a custom Android UI developed by BQ Aquaris and Vsmart.
Current VOS version list:
VOS 1.x – based on Android "Nougat" 7.1, "Oreo" 8
VOS 2.x – based on Android "Pie" 9
VOS 3.x – based on Android 10
VOS 4.x – based on Android 11
XOS
XOS (formerly known as XUI) is an Android-based operating system developed by Hong Kong mobile phone manufacturer Infinix Mobile, a subsidiary of Transsion Holdings, exclusively for their smartphones. XOS allows for a wide range of user customization without requiring the device to be rooted. The operating system comes with utility applications that allow users to protect their privacy, improve speed, and enhance their experience. XOS includes features such as XTheme, Scan to Recharge, Split Screen and XManager.
Current XOS version list:
XUI 1.x – based on Android "Lollipop" 5, initial release
XOS 2.x – based on Android "Marshmallow" 6 and "Nougat" 7
XOS 3.x – based on Android "Nougat" 7 and "Oreo" 8
XOS 4.x – based on Android "Oreo" 8
XOS 5.x – based on Android "Pie" 9
XOS 6.x – based on Android 10
XOS 7.x – based on Android 10
XOS 7.6.x – based on Android 11
XOS 10.x – based on Android 11, redesigned UI
XOS 10.6.x – based on Android 12, latest update
Xperia UI
Sony Xperia UI (formerly known as Sony Ericsson Timescape UI) was the front-end UI developed by Sony Mobile (formerly Sony Ericsson) in 2010 for their Android-based Sony Xperia series. Sony Xperia UI mostly consisted of Sony's own applications such as Sony Music (formerly known as the Walkman music player), Albums and Video Player. During its time as Timescape UI, the UI differed from the standard Android UI: instead of a traditional app dock at the bottom, shortcuts were located at the four corners of the home screen, while the middle of the screen held the widget. However, recent UI developments more closely resemble those of stock Android.
Current Xperia UI version list:
Timescape version 1 – based on Android "Eclair" 2.0/2.1, initial release
Timescape version 2 – based on Android "Gingerbread" 2.3.x, redesigned UI
Xperia UI version 3 – based on Android "Gingerbread" and "Ice Cream Sandwich" 2.3.x and 4.0.x, redesigned UI
Xperia UI version 4 – based on Android "Jelly Bean" 4.2.x – 4.3.x, redesigned UI
Xperia UI version 5 – based on Android "KitKat" 4.4.x, redesigned UI
Xperia UI version 6 – based on Android "Lollipop" 5.0.x – 5.1.x, redesigned UI
Xperia UI version 7 – based on Android "Marshmallow" 6.0.x, redesigned UI
Xperia UI version 8 – based on Android "Nougat" 7.x, redesigned UI
Xperia UI version 9 – based on Android "Oreo" 8.x, redesigned UI
ZenUI
ZenUI is a front-end touch interface developed by ASUS with partners, featuring a full touch user interface. ZenUI is used by ASUS for its Android phones and tablet computers, and is not available for licensing by external parties. ZenUI also comes preloaded with ASUS-made apps like ZenLink (PC Link, Share Link, Party Link & Remote Link).
Current ZenUI version list:
ZenUI 1.0 – based on Android "Jelly Bean" and "KitKat" 4.3.x and 4.4.x, initial release
ZenUI 2.0 – based on Android "Lollipop" 5.0.x – 5.1.x, redesigned UI
ZenUI 3.0 – based on Android "Marshmallow" 6.0.x, redesigned UI
ZenUI 4.0 – based on Android "Nougat" 7.x, redesigned UI
ZenUI 5.0 – based on Android "Oreo" 8.x, redesigned UI
ZenUI 6.0 – based on Android "Pie" 9, redesigned UI
ZenUI 7.0 – based on Android 10, redesigned UI
ZenUI 8.0 – based on Android 11, minor UI upgrade
ZUI
ZUI is a custom operating system originally developed by Lenovo subsidiary ZUK Mobile for their smartphones. After ZUK Mobile was shut down, Lenovo took over as the main developer of ZUI. The operating system is based on the Android Open Source Project (AOSP).
Current ZUI version list:
ZUI 1.x – Initial Release
ZUI 2.x
ZUI 3.x
ZUI 4.x
ZUI 10.x – Based on Android 9 "Pie"
ZUI 11.x – Based on Android 9 "Pie" and Android 10
ZUI 12.x – Based on Android 11
ZUI 13.x – Based on Android 11
Wear OS
Wear OS (also known simply as Wear and formerly Android Wear) is a version of Google's Android operating system designed for smartwatches and other wearables. By pairing with mobile phones running Android version 6.0 or newer, or iOS version 10.0 or newer with limited support from Google's pairing application, Wear OS integrates Google Assistant technology and mobile notifications into a smartwatch form factor.
In May 2021 at Google I/O, Google announced a major update to the platform, internally known as Wear OS 3.0. It incorporates a new visual design inspired by Android 12, along with Fitbit exercise-tracking features. Google also announced a partnership with Samsung Electronics, which is collaborating with Google to unify its Tizen-based smartwatch platform with Wear OS and has committed to using Wear OS on its future smartwatch products. The underlying codebase was also upgraded to Android 11. Wear OS 3.0 will be available to Wear OS devices running the Qualcomm Snapdragon Wear 4100 system on chip, and will be an opt-in upgrade requiring a factory reset to install.
Current Wear OS version list:
Android Wear 4.4w (Based on Android 4.4 "KitKat") – (Initial release)
Android Wear 1.0 – 1.3 (Based on Android 5.0 "Lollipop") – (Minor update)
Android Wear 1.4 (Based on Android 6.0 "Marshmallow") – (Minor update)
Android Wear 2.0 – 2.6 (7.1.1W2) (Based on Android 7.1 "Nougat") – (Minor update)
Android Wear 2.6 (7.1.1W3, 8.0.0 W1) – 2.9 (7.1.1W6, 8.0.0W4) (Based on Android 8.0 "Oreo") – (Minor update)
Wear OS 1.0 (Based on Android 8.0 "Oreo") – (Renamed and Minor update)
Wear OS 2.0 (Based on Android 8.0 "Oreo") – (Minor update)
Wear OS 2.2 (Based on Android 9 "Pie") – (Minor update)
Wear OS 3.x (Based on Android 11) – (Major UI and system update)
One UI Watch
One UI Watch is the user interface Samsung developed for their Wear OS-based smartwatches, officially announced after Google and Samsung confirmed they would unify their respective wearable operating systems (Google Wear OS 2.0 and Samsung Tizen) into Wear OS 3.0.
Current One UI Watch version list:
One UI Watch 3.0 (Based on Wear OS 3.0 – Android 11) (Initial release)
One UI Watch 4.5 (Based on Wear OS 3.5 – Android 11) (Minor update)
One UI Watch 5.0 (Based on Wear OS 4.0 – Android 13) (Minor update)
One UI Watch 6.0 (Based on Wear OS 5.0 – Android 14) (Minor update)
ChromeOS
ChromeOS is an operating system designed by Google that is based on the Linux kernel and uses the Google Chrome web browser as its principal user interface; as a result, it primarily supports web applications. Google announced the project in July 2009, conceiving it as an operating system in which both applications and user data reside in the cloud.
Due to the increasing popularity of 2-in-1 PCs, most recent Chromebooks are introduced with touch-screen capability. Android applications started to become available for the operating system in 2014, and in 2016 access to Android apps in the entire Google Play Store was introduced on supported ChromeOS devices. With the support of Android applications, some Chromebook devices are positioned as tablets instead of notebooks.
ChromeOS is only available pre-installed on hardware from Google manufacturing partners. An open source equivalent, ChromiumOS, can be compiled from downloaded source code. Early on, Google provided design goals for ChromeOS, but has not otherwise released a technical description.
Sailfish OS
Sailfish OS is developed by Jolla. Its middleware stack core, which comes from Mer, is open source under the GNU General Public License (GPL). Due to Jolla's business model, its alliances with various partners, and the intentional design of the OS internals, Sailfish can adopt third-party software in several layers, and such components can be proprietary under many kinds of licences; Jolla's own UI, for example, is proprietary software (closed source). Users can, however, replace these with open-source components, e.g. the Nemo UI instead of Jolla's UI.
After Nokia abandoned the MeeGo project in 2011, most of the MeeGo team left Nokia and established Jolla as a company to pursue MeeGo and Mer business opportunities. The Mer standard allows the OS to be launched on any hardware with a Mer-compatible kernel. In 2012, Sailfish OS, a Linux distribution based on MeeGo and using middleware from the Mer core stack, was launched for public use. The first device, the Jolla smartphone, was unveiled on May 20, 2013. In 2015, the Jolla Tablet was launched, and the BRICS countries declared it an officially supported OS there. Jolla started licensing Sailfish OS 2.0 to third parties. Some devices sold are updateable to Sailfish 2.0 with no limits.
Nemo Mobile is a community-driven OS, similar to Sailfish but attempting to replace its proprietary components, such as the user interface.
SteamOS
SteamOS is a Linux distribution developed by Valve. It incorporates Valve's popular namesake Steam video game storefront and is the primary operating system for Steam Machines and the Steam Deck. SteamOS is open source with some closed source components.
SteamOS was originally built to support streaming of video games from one personal computer to the one running SteamOS within the same network, although the operating system can support standalone systems and was intended to be used as part of Valve's Steam Machine platform. SteamOS versions 1.0, released in December 2013, and 2.0 were based on the Debian distribution of Linux with GNOME desktop. With SteamOS, Valve encouraged developers to incorporate Linux compatibility into their releases to better support Linux gaming options.
In February 2022, Valve released the handheld gaming computer Steam Deck running SteamOS 3.0. SteamOS 3 is based on the Arch Linux distribution with KDE Plasma 5.
Tizen
Tizen (based on the Linux kernel) is a mobile operating system hosted by Linux Foundation, together with support from the Tizen Association, guided by a Technical Steering Group composed of Intel and Samsung.
Tizen is an operating system for devices including smartphones, tablets and In-Vehicle Infotainment (IVI) devices; however, it currently focuses mainly on wearables and smart TVs. It is an open-source system (though the SDK was closed-source and proprietary) that aims to offer a consistent user experience across devices. Tizen's main components are the Linux kernel and the WebKit runtime. According to Intel, Tizen "combines the best of LiMo and MeeGo." HTML5 apps are emphasized, with MeeGo encouraging its members to transition to Tizen, stating that the "future belongs to HTML5-based applications, outside of a relatively small percentage of apps, and we are firmly convinced that our investment needs to shift toward HTML5." Tizen targets a variety of platforms such as handsets, touch PCs, smart TVs and in-vehicle entertainment. On May 17, 2013, Tizen released version 2.1, code-named Nectarine.
While Tizen itself was open source, most of the UX and UI layer developed by Samsung was closed-source and proprietary, such as the TouchWiz UI on the Samsung Z series smartphones and One UI on their Galaxy Watch wearable line.
Some refrigerators also use Tizen, even though they are not very mobile.
Samsung has revealed plans to discontinue the Tizen operating system by the end of 2025, marking a complete halt in support for the smartwatch OS. The company ceased using Tizen OS with its Galaxy Watch4 release, favoring a hybrid OS developed with Google.
KaiOS
KaiOS is developed by KaiOS Technologies. It is based on Firefox OS/Boot to Gecko. Unlike most mobile operating systems, which focus on smartphones, KaiOS was developed mainly for feature phones, giving them access to more advanced technologies usually found on smartphones, such as app stores and Wi-Fi/4G capabilities.
It is a mix of closed-source and open-source components. Firefox OS/B2G was released under the permissive MPL 2.0, but KaiOS does not redistribute itself under the same license, so KaiOS is presumably proprietary, although much of its source code is published. It is not entirely proprietary, as it uses the copyleft GPL Linux kernel also used in Android.
Smart Feature OS
Smart Feature OS is a custom version of KaiOS developed and solely used by HMD Global for their KaiOS-based line of Nokia feature phones. The main differences between stock KaiOS and Smart Feature OS are aesthetic, such as icons, widgets, a custom Nokia ringtone and notification tone.
Fully open-source, entirely permissive licenses
Fuchsia
Fuchsia is a capability-based, real-time operating system (RTOS) currently being developed by Google. It was first discovered as a mysterious code post on GitHub in August 2016, without any official announcement. In contrast to prior Google-developed operating systems such as ChromeOS and Android, which are based on Linux kernels, Fuchsia is based on a new microkernel called "Zircon", derived from "Little Kernel", a small operating system intended for embedded systems. This allows it to remove Linux and the copyleft GPL under which the Linux kernel is licensed; Fuchsia is licensed under the permissive BSD 3-clause, Apache 2.0, and MIT licenses. Upon inspection, media outlets noted that the code post on GitHub suggested Fuchsia's capability to run on universal devices, from embedded systems to smartphones, tablets and personal computers. In May 2017, Fuchsia was updated with a user interface, along with a developer writing that the project was not merely experimental, prompting media speculation about Google's intentions with the operating system, including the possibility of it replacing Android.
LiteOS
LiteOS is a lightweight open-source real-time operating system which is part of Huawei's "1+2+1" Internet of Things solution, similar to Google Android Things and Samsung Tizen. It is released under the permissive BSD 3-clause license. Huawei LiteOS features a lightweight, low-power, fast-response design with multi-sensor collaboration and multi-protocol connectivity, enabling IoT terminals to quickly access the network. Huawei intends LiteOS to make intelligent hardware development easier and thereby accelerate the interconnection of all things. LiteOS has been introduced to the consumer market with the Huawei Watch GT series and the Honor Magic Watch series from Huawei's sub-brand.
OpenHarmony
OpenHarmony is an open-source version of HarmonyOS developed and donated by Huawei to the OpenAtom Foundation. It supports devices running a mini system with memory as small as 128 KB, or running a standard system with memory greater than 128 MB. The open-source HarmonyOS is based on the Huawei LiteOS kernel, and on the Linux kernel for standard systems. OpenHarmony LiteOS Cortex-A delivers a small-sized, low-power, high-performance experience and builds a unified, open ecosystem for developers. In addition, it provides rich kernel mechanisms, a more comprehensive Portable Operating System Interface (POSIX), and a unified driver framework, the Hardware Driver Foundation (HDF), which offers unified access for device developers and a friendly development experience for application developers.
Fully open-source, mixed copyleft and permissive licenses
Fedora Mobility
Fedora Mobility is a mobile operating system under development by the Fedora Project, which is porting Fedora to run on portable devices such as phones and tablets.
LuneOS
LuneOS is a modern reimplementation of the Palm/HP webOS interface.
Manjaro ARM
Manjaro ARM is a mobile operating system with the Plasma Mobile desktop environment; it runs as the default operating system on the PinePhone, an ARM-based smartphone released by Pine64.
Mobian
Mobian is a mobile version of Debian focused on the PinePhone, with Librem support planned.
Plasma Mobile
Plasma Mobile is a Plasma variant for smartphones. Plasma Mobile runs on Wayland and is compatible with Ubuntu Touch applications, PureOS applications, and eventually Android applications via KDE's Shashlik project (also sponsored by Blue Systems) or Anbox. It is under the copyleft GPLv2 license.
The Necuno phone uses Plasma Mobile. It is entirely open source and thus does not have a cellular modem, so it must make calls over VoIP, like a pocket computer.
postmarketOS
postmarketOS is based on the Alpine Linux distribution. It is intended to run on older phone hardware and is in alpha.
PureOS
PureOS is a Debian GNU/Linux derivative using only free software meeting the Debian Free Software Guidelines, mainly the copyleft GPL. PureOS is endorsed by Free Software Foundation as one of the freedom-respecting operating systems. It is developed by Purism, and was already in use on Purism's laptops before it was used on the Librem 5 smartphone. Purism, in partnership with GNOME and KDE, aims to separate the CPU from the baseband processor and include hardware kill switches for the phone's Wi-Fi, Bluetooth, camera, microphone, and baseband processor, and provide both GNOME and KDE Plasma Mobile as options for the desktop environment.
Ubuntu Touch
Ubuntu Touch is an open-source (GPL) mobile version of the Ubuntu operating system originally developed in 2013 by Canonical Ltd. and continued by the non-profit UBports Foundation in 2017. Ubuntu Touch can run on a pure GNU/Linux base on phones with the required drivers, such as the Librem 5 and the PinePhone. To enable hardware that was originally shipped with Android, Ubuntu Touch makes use of the Android Linux kernel, using Android drivers and services via an LXC container, but does not use any of the Java-like code of Android. As of February 2022, Ubuntu Touch is available on 78 phones and tablets. The UBports Installer serves as an easy-to-use tool to allow inexperienced users to install the operating system on third-party devices without damaging their hardware.
Closed source
iOS
iOS (formerly named iPhone OS) was created by Apple Inc. It has the second largest installed base worldwide on smartphones, but the largest profits, due to aggressive price competition between Android-based manufacturers. It is closed-source and proprietary, and is built on the open source Darwin operating system. The iPhone, iPod Touch, iPad, and second and third-generation Apple TV all use iOS, which is derived from macOS.
Native third-party applications were not officially supported until the release of iPhone OS 2.0 on July 11, 2008. Before this, "jailbreaking" allowed third-party applications to be installed. In recent years, the jailbreaking scene has changed drastically due to Apple's continued efforts to secure their operating system and prevent unauthorized modifications. Currently, jailbreaks of recent iterations of iOS are only semi-untethered, which requires a device to be re-jailbroken at every boot, and exploits for jailbreaks are becoming increasingly hard to find and use.
Currently all iOS devices are developed by Apple and manufactured by Foxconn or another of Apple's partners.
iPadOS
iPadOS is a tablet operating system created and developed by Apple Inc. specifically for their iPad line of tablet computers. It was announced at the company's 2019 Worldwide Developers Conference (WWDC), as a derivation from iOS but with a greater emphasis put on multitasking. It was released on September 24, 2019.
watchOS
watchOS is the operating system of the Apple Watch, developed by Apple Inc. It is based on the iOS operating system and has many similar features. It was released on April 24, 2015, along with the Apple Watch, the only device that runs watchOS. It is currently the most widely used wearable operating system. It features focus on convenience, such as being able to place phone calls and send texts, and health, such as fitness and heart rate tracking.
The most current version of the watchOS operating system is watchOS 10.
Kindle firmware
Kindle firmware is a mobile operating system specifically designed for Amazon Kindle e-readers. It is based on a custom Linux kernel, but it is mostly closed-source and proprietary.
HarmonyOS
HarmonyOS is a distributed operating system developed by Huawei that was specifically designed for smartphones, tablets, TVs, smartwatches and other smart devices made by Huawei. It is based on a proprietary multi-kernel design with a Linux kernel subsystem. It was initially launched on August 9, 2019, for smart screen TVs and released officially for smartphones on June 2, 2021. On August 4, 2023, Huawei announced its full-stack HarmonyOS NEXT, which will replace the current multi-kernel stack containing the Linux kernel subsystem and APK app support, leaving only native HarmonyOS apps usable. On January 18, 2024, a Galaxy Edition version was announced for the next version of HarmonyOS.
Nintendo Switch system software
The Nintendo Switch system software (also known by its codename Horizon) is an updatable firmware and operating system used by the Nintendo Switch hybrid video game console/tablet and Nintendo Switch Lite handheld game console. It is based on a proprietary microkernel. The UI includes a HOME screen, consisting of the top bar, the screenshot viewer ("Album"), and shortcuts to the Nintendo eShop, News, and Settings.
The system itself is based on the Nintendo 3DS system software; additionally, the networking stack in the Switch OS is derived at least in part from FreeBSD code, while the Stagefright multimedia framework is derived from Android code.
PlayStation Vita system software
The PlayStation Vita system software is the official firmware and operating system for the PlayStation Vita and PlayStation TV video game consoles. It uses LiveArea as its graphical shell. The PlayStation Vita system software has one optional add-on component, the PlayStation Mobile Runtime Package. The system is built on a Unix base derived from FreeBSD and NetBSD.
Windows 10
Windows 10 (not to be confused with Windows 10 Mobile; see below) is a personal computer operating system developed and released by Microsoft as part of the Windows NT family of operating systems. It was released on July 29, 2015, and many editions and versions have been released since then. It was designed to run across multiple Microsoft product families, such as PCs and tablets. The Windows user interface was revised to handle transitions between a mouse-oriented interface and a touchscreen-optimized interface based on the available input devices, particularly on 2-in-1 PCs.
Windows 10 also introduced universal apps, expanding on Metro-style apps; these apps can be designed to run across multiple Microsoft product families with nearly identical code, including PCs, tablets, smartphones, embedded systems, Xbox One, Surface Hub and Mixed Reality.
Windows 11
Windows 11 is a major version of the Windows NT operating system developed by Microsoft that was announced on June 24, 2021, and is the successor to Windows 10, which was released in 2015. Windows 11 was released on October 5, 2021, as a free upgrade via Windows Update for eligible devices running Windows 10.
Microsoft promoted that Windows 11 would have improved performance and ease of use over Windows 10; it features major changes to the Windows shell influenced by the canceled Windows 10X, including a redesigned Start menu, the replacement of its "live tiles" with a separate "Widgets" panel on the taskbar, the ability to create tiled sets of windows that can be minimized and restored from the taskbar as a group, and new gaming technologies inherited from the Xbox Series X and Series S, such as Auto HDR and DirectStorage, on compatible hardware. Internet Explorer is fully replaced by the Blink layout engine-based Microsoft Edge, while Microsoft Teams is integrated into the Windows shell. Microsoft also announced plans to offer support for Android apps to run on Windows 11, with support for the Amazon Appstore and manually installed packages. On March 5, 2024, Microsoft announced that Android app support would be deprecated on March 5, 2025.
Similar to Windows 10, it was designed to run across multiple Microsoft product families, such as PCs and tablets. The Windows user interface was further revised to combine elements of the mouse-oriented and touchscreen-optimized interfaces into a hybrid UI that pairs touch capabilities with a traditional desktop UI.
Minor proprietary operating systems
Other than the major operating systems, some companies, such as Huami (Amazfit), Huawei, realme, TCL, and Xiaomi, have developed their own proprietary RTOSes specifically for their smartbands and smartwatches. These are designed for power efficiency and lower battery consumption and are not based on any other operating system.
Proprietary Amazfit OS
An operating system primarily designed for the Amazfit Bip series; Huami is currently developing the operating system to run on other smartwatches as well.
Huawei/Honor Band Operating System
Huawei Band Operating System is an operating system specifically designed and developed by Huawei for their fitness trackers, including smartbands from Honor.
Lenovo RTOS
Proprietary OS developed by Lenovo for their fitness trackers and smartwatches.
realme Wearable Operating System
A proprietary operating system designed to run on realme smartbands and smartwatches.
TCL Wearable Real Time Operating System
A proprietary RTOS powering TCL and Alcatel branded smartbands and smartwatches.
Xiaomi Mi Band Operating System
Proprietary RTOS that is developed by Huami for the Xiaomi Mi Band series.
Discontinued software platforms
Open source
CyanogenMod
CyanogenMod was a custom mobile operating system based on the Android Open Source Project (AOSP), co-developed by the CyanogenMod community as a custom ROM. The OS did not include any proprietary apps unless the user installed them. Due to its open-source nature, CyanogenMod allowed Android users who could no longer obtain update support from their manufacturer to continue updating their OS version to the latest one, based on official releases from Google AOSP, with heavy theme customization. The last version of the OS was CyanogenMod 13, which was based on Android 6.0 "Marshmallow".
On December 24, 2016, CyanogenMod announced on their blog that they would no longer be releasing any CyanogenMod updates. All development moved to LineageOS.
Cyanogen OS
Cyanogen OS was based on CyanogenMod and maintained by Cyanogen Inc.; however, it included proprietary apps, and it was only available for commercial use.
Firefox OS
Firefox OS (formerly known as "Boot to Gecko", or "B2G" for short) was developed by Mozilla. It was an open-source mobile operating system released under the Mozilla Public License, built on the Android Linux kernel and using Android drivers, but it did not use any Java-like code of Android.
According to Ars Technica, "Mozilla says that B2G is motivated by a desire to demonstrate that the standards-based open Web has the potential to be a competitive alternative to the existing single-vendor application development stacks offered by the dominant mobile operating systems." In September 2016, Mozilla announced that work on Firefox OS had ceased and that all B2G-related code would be removed from mozilla-central.
MeeGo/Maemo/Moblin
MeeGo was from the non-profit organization The Linux Foundation. It was open source and GPL-licensed. At the 2010 Mobile World Congress in Barcelona, Nokia and Intel unveiled MeeGo, a mobile operating system that combined Moblin and Maemo to create an open-source experience for users across all devices. In 2011, Nokia announced that it would no longer pursue MeeGo in favor of Windows Phone. Nokia announced the Nokia N9 on June 21, 2011, at the Nokia Connection event in Singapore. LG announced its support for the platform. Maemo was a platform developed by Nokia for smartphones and Internet tablets. It was open source and GPL-licensed, based on Debian GNU/Linux, and drew much of its graphical user interface (GUI), frameworks, and libraries from the GNOME project. It used the Matchbox window manager and the GTK-based Hildon as its GUI and application framework.
webOS
webOS is an open-source mobile operating system running on the Linux kernel, initially developed by Palm, which launched it with the Palm Pre. After Palm was acquired by HP, two phones (the Veer and the Pre 3) and a tablet (the TouchPad) running webOS were introduced in 2011. On August 18, 2011, HP announced that webOS hardware would be discontinued but that it would continue to support and update the webOS software and develop the webOS ecosystem. HP released webOS as open source under the name Open webOS and planned to update it with additional features. On February 25, 2013, HP announced the sale of webOS to LG Electronics, which used the operating system for its "smart" or Internet-connected TVs. However, HP retained the patents underlying webOS and the cloud-based services such as the App Catalog.
Closed source
Bada
Bada (stylized as bada; Korean: 바다) was an operating system for mobile devices such as smartphones and tablet computers, developed by Samsung Electronics. Its name is derived from "바다 (bada)", meaning "ocean" or "sea" in Korean. It targeted mid- to high-end smartphones. To foster adoption of Bada OS, Samsung has reportedly considered, since 2011, releasing the source code under an open-source license and expanding device support to include smart TVs. Samsung announced in June 2012 its intention to merge Bada into the Tizen project, but would meanwhile use its own Bada operating system, in parallel with Google Android OS and Microsoft Windows Phone, for its smartphones. All Bada-powered devices are branded under the Wave name, but not all of Samsung's Android-powered devices are branded under the name Galaxy.
On February 25, 2013, Samsung announced that it would stop developing Bada and move development to Tizen instead. Bug reporting was finally terminated in April 2014.
BlackBerry OS
In 1999, Research In Motion released its first BlackBerry devices, providing secure real-time push-email communications on wireless devices. Services such as BlackBerry Messenger provide the integration of all communications into a single inbox. In September 2012, RIM announced that the 200 millionth BlackBerry smartphone was shipped. As of September 2014, there were around 46 million active BlackBerry service subscribers. In the early 2010s, RIM underwent a platform transition, changing its company name to BlackBerry Limited and making new devices using a new operating system named "BlackBerry 10".
BlackBerry 10
BlackBerry 10 (based on the QNX OS) is from BlackBerry. As a smartphone OS, it is closed-source and proprietary, and only runs on phones and tablets manufactured by BlackBerry.
One of the dominant platforms in the world in the late 2000s, its global market share was reduced significantly by the mid-2010s. In late 2016, BlackBerry announced that it would continue to support the OS, with a promise to release version 10.3.3. BlackBerry 10 would not, however, receive any further major updates, as BlackBerry and its partners focused more on their Android-based development.
Nintendo 3DS system software
The Nintendo 3DS system software is the updatable operating system used by the Nintendo 3DS.
Symbian
Symbian platform was developed by Nokia for some models of smartphones. It was proprietary software; it was, however, also used by Ericsson (Sony Ericsson), Sendo and BenQ. The operating system was discontinued in 2012, although a slimmed-down version for basic phones was still developed until July 2014. Microsoft officially shelved the platform in favor of Windows Phone after its acquisition of Nokia.
Palm OS
Palm OS/Garnet OS was from Access Co. It was closed-source and proprietary. webOS was introduced by Palm in January 2009 as the successor to Palm OS, with Web 2.0 technologies, open architecture and multitasking abilities.
Microsoft
Windows Mobile
Windows Mobile was a family of proprietary operating systems from Microsoft aimed at business and enterprise users, based on Windows CE and originally developed for Pocket PC (PDA) devices. In 2010 it was replaced with the consumer-focused Windows Phone.
Versions of Windows Mobile came in multiple editions, like "Pocket PC Premium", "Pocket PC Professional", "Pocket PC Phone", and "Smartphone" (Windows Mobile 2003) or "Professional", "Standard", and "Classic" (Windows Mobile 6.0). Some editions were touchscreen-only and some were keyboard-only, although there were cases where device vendors managed to graft support for one onto an edition targeted at the other. Cellular phone features were also only supported by some editions. Microsoft started work on a version of Windows Mobile that would combine all features together, but it was aborted, and instead they focused on developing the non-backward-compatible, touchscreen-only Windows Phone 7.
Windows Phone
Windows Phone is a proprietary mobile operating system developed by Microsoft for smartphones as the successor to Windows Mobile and Zune. Windows Phone features a new touchscreen-oriented user interface derived from the Metro design language. Windows Phone was replaced by Windows 10 Mobile in 2015.
Windows 10 Mobile
Windows 10 Mobile (formerly called Windows Phone) was developed by Microsoft. It was closed-source and proprietary.
Unveiled on February 15, 2010, Windows Phone included a user interface inspired by Microsoft's Metro Design Language. It was integrated with Microsoft services such as OneDrive and Office, Xbox Music, Xbox Video, Xbox Live games, and Bing, but also integrated with many other non-Microsoft services such as Facebook and Google accounts. Windows Phone devices were made primarily by Microsoft Mobile/Nokia, and also by HTC and Samsung.
On January 21, 2015, Microsoft announced that the Windows Phone brand would be phased out and replaced with Windows 10 Mobile, bringing tighter integration and unification with its PC counterpart Windows 10, and providing a platform for smartphones and tablets with screen sizes under 8 inches.
On October 8, 2017, Microsoft officially announced that they would no longer push any major updates to Windows 10 Mobile. The operating system was put in maintenance mode, where Microsoft would push bug fixes and general improvements only. Windows 10 Mobile would not receive any new feature updates.
On January 18, 2019, Microsoft announced that support for Windows 10 Mobile would end on December 10, 2019, with no further security updates released after then, and that Windows 10 Mobile users should migrate to iOS or Android phones.
The released versions of Windows 10 Mobile were:
Windows 10 Mobile – Version 1511 (November Update "Threshold") – major UI update
Windows 10 Mobile – Version 1607 (Anniversary Update "Redstone 1")
Windows 10 Mobile – Version 1703 (Creators Update "Redstone 2")
Windows 10 Mobile – Version 1709 (Fall Creators Update)
Windows 8
Windows 8 is a major release of the Windows NT operating system developed by Microsoft. It was released to manufacturing on August 1, 2012, and was made available for download via MSDN and TechNet on August 15, 2012. It made its first retail appearance on October 26, 2012, nearly three months after its initial release.
Windows 8 introduced major changes to the operating system's platform and user interface with the intention of improving its user experience on tablets, where Windows competed with mobile operating systems such as Android and iOS. In particular, these changes included a touch-optimized Windows shell and start screen based on Microsoft's Metro design language, integration with online services, the Windows Store, and a new keyboard shortcut for screenshots. Many of these features were adapted from Windows Phone. Windows 8 also added support for USB 3.0, Advanced Format, near-field communication, and cloud computing, as well as a new lock screen with a clock and notifications. Additional security features, including built-in antivirus software, integration with Microsoft SmartScreen phishing filtering, and support for Secure Boot on supported devices, were introduced. It was the first Windows version to support the ARM architecture, under the Windows RT branding. CPUs without PAE, SSE2 and NX are not supported in this version.
Windows 8.1
Windows 8.1 is a release of the Windows NT operating system developed by Microsoft. It was released to manufacturing on August 27, 2013, and broadly released for retail sale on October 17, 2013, about a year after the retail release of its predecessor; it was succeeded by Windows 10 on July 29, 2015. Windows 8.1 was made available for download via MSDN and TechNet and as a free upgrade for retail copies of Windows 8 and for Windows RT users via the Windows Store. A server version, Windows Server 2012 R2, was released on October 18, 2013.
Windows 8.1 aimed to address complaints of Windows 8 users and reviewers on launch. Enhancements include an improved Start screen, additional snap views, additional bundled apps, tighter OneDrive (formerly SkyDrive) integration, Internet Explorer 11 (IE11), a Bing-powered unified search system, restoration of a visible Start button on the taskbar, and the ability to restore the previous behavior of opening the user's desktop on login instead of the Start screen.
Market share
Usage
In 2006, Android and iOS did not exist, and only 64 million smartphones were sold. In 2018 Q1, 383.5 million smartphones were sold, with global market share at 85.9% for Android and 14.1% for iOS. Only 131,000 smartphones running other operating systems were sold, constituting 0.03% of sales.
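As a quick arithmetic check on the share quoted for other operating systems, the percentage follows directly from the unit counts; a minimal Python sketch using the figures in the preceding paragraph:

    # Share of Q1 2018 smartphone sales running an OS other than Android or iOS.
    total_units = 383.5e6   # total smartphones sold in Q1 2018
    other_units = 131_000   # units running other operating systems
    other_share = other_units / total_units * 100
    print(f"Other OS share: {other_share:.2f}%")  # prints 0.03%, matching the text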
According to StatCounter web use statistics (a proxy for all use), smartphones (alone, without tablets) have majority use globally, with desktop computers used much less (and Android, in particular, more popular than Windows). Use varies by continent, however, with smartphones far more popular in the biggest continent, Asia, and the desktop still more popular in some others, though not in North America.
While the desktop is still popular in many countries (overall down to 44.9% in the first quarter of 2017), smartphones are more popular even in many developed countries (or are about to be in more). A few countries on every continent are desktop-minority; many European countries (and some in South America, a few, e.g. Haiti, in North America, and most in Asia and Africa) are smartphone-majority, with Poland and Turkey highest at 57.68% and 62.33%, respectively. In Ireland, smartphone use at 45.55% outnumbers desktop use, and mobile as a whole gains majority when the 9.12% tablet share is included. Spain is also slightly desktop-minority.
The range of measured mobile web use varies greatly by country, and a StatCounter press release recognizes "India among world leaders in use of mobile to surf the internet" (of the big countries), where the share is around (or over) 80% and desktop is at 19.56%, with Russia trailing at 17.8% mobile use (and desktop the rest).
Smartphones (alone, without tablets) first gained majority in December 2016 (desktop-majority was lost the month before), and it was not a Christmas-time fluke: while the share stayed close to majority afterwards, smartphone majority happened again in March 2017.
In the week from November 7–13, 2016, smartphones alone (without tablets) overtook desktop for the first time (for a short, non-full-month period). Mobile-majority applies to countries such as Paraguay in South America, Poland in Europe and Turkey, and most of Asia and Africa. Some of the world is still desktop-majority, e.g. the United States at 54.89% (but not on all days). However, in some territories of the United States, such as Puerto Rico, desktop is well under majority, with Windows under 30%, overtaken by Android.
On October 22, 2016 (and subsequent weekends), mobile showed majority. Since October 27, the desktop has not shown majority, not even on weekdays. Smartphones alone showed majority from December 23 to the end of the year, with the share topping out at 58.22% on Christmas Day. Adding tablets to the then mobile-majority share of smartphones gives a 63.22% majority. While that was an unusually high top, a similarly high share also occurred on Monday, April 17, 2017, with the smartphone share slightly lower and the tablet share slightly higher, combining to 62.88%.
The world has since turned desktop-minority, with desktop at about 49% use for the previous month; mobile alone was not ranked higher, and the tablet share had to be added to it to exceed the desktop share. Mobile (smartphones) now has full majority, outnumbering desktop/laptop computers by a safe margin even without counting tablets.
By operating system
| Technology | Computer hardware | null |
231504 | https://en.wikipedia.org/wiki/Glaciology | Glaciology | Glaciology is the scientific study of glaciers, or, more generally, ice and natural phenomena that involve ice.
Glaciology is an interdisciplinary Earth science that integrates geophysics, geology, physical geography, geomorphology, climatology, meteorology, hydrology, biology, and ecology. The impact of glaciers on people includes the fields of human geography and anthropology. The discoveries of water ice on the Moon, Mars, Europa and Pluto add an extraterrestrial component to the field, which is referred to as "astroglaciology".
Overview
A glacier is an extended mass of ice formed from snow falling and accumulating over a long period of time; glaciers move very slowly, either descending from high mountains, as in valley glaciers, or moving outward from centers of accumulation, as in continental glaciers.
Areas of study within glaciology include glacial history and the reconstruction of past glaciation. A glaciologist is a person who studies glaciers. A glacial geologist studies glacial deposits and glacial erosive features on the landscape. Glaciology and glacial geology are key areas of polar research.
Types
Glaciers can be identified by their geometry and the relationship to the surrounding topography. There are two general categories of glaciation which glaciologists distinguish: alpine glaciation, accumulations or "rivers of ice" confined to valleys; and continental glaciation, unrestricted accumulations which once covered much of the northern continents.
Alpine – ice flows down the valleys of mountainous areas and forms a tongue of ice moving towards the plains below. Alpine glaciers tend to make topography more rugged by adding to and improving the scale of existing features. Such features include bowl-shaped depressions called cirques and arêtes, which are ridges formed where the rims of two cirques meet.
Continental – an ice sheet, found today only in high latitudes (Greenland/Antarctica), thousands of square kilometers in area and thousands of meters thick. These tend to smooth out the landscape.
Zones of glaciers
Accumulation zone – where the formation of ice is faster than its removal.
Ablation (or wastage) zone – where the sum of melting, calving, and evaporation (sublimation) is greater than the amount of snow added each year.
Glacier equilibrium line and ELA
The glacier equilibrium line is the line separating the glacial accumulation area above from the ablation area below.
The equilibrium line altitude (ELA) and its change over the years is a key indicator of the health of a glacier. Long-term monitoring of the ELA may be used as an indication of climate change.
Movement
When a glacier is experiencing an accumulation input by precipitation (snow or refreezing rain) that exceeds the output by ablation, the glacier shows a positive glacier mass balance and will advance. Conversely, if the loss of volume (from evaporation, sublimation, melting, and calving) exceeds the accumulation, the glacier shows a negative glacier mass balance and the glacier will melt back. During times in which the volume input to the glacier by precipitation is equivalent to the ice volume lost from calving, evaporation, and melting, the glacier has a steady-state condition.
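Since advance, retreat, and steady state are determined purely by the sign of the annual mass balance just described, the rule can be written as a short function. A minimal Python sketch; the function name and example volumes are illustrative assumptions:

    def glacier_state(accumulation, ablation):
        """Classify a glacier from its annual mass balance (same volume units)."""
        balance = accumulation - ablation  # positive: net gain; negative: net loss
        if balance > 0:
            return "advancing (positive mass balance)"
        if balance < 0:
            return "retreating (negative mass balance)"
        return "steady state"

    # Example: 1.2e6 m^3 of ice gained versus 0.9e6 m^3 lost to ablation.
    print(glacier_state(1.2e6, 0.9e6))  # advancing (positive mass balance)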
Some glaciers show periods in which they advance at an extreme rate, typically 100 times faster than what is considered normal; such a glacier is referred to as a surging glacier. Surge periods may occur at intervals of 10 to 15 years, e.g. on Svalbard. This is caused mainly by a long-lasting accumulation period on subpolar glaciers frozen to the ground in the accumulation area. When the stress due to the additional volume in the accumulation area increases, the pressure melting point of the ice at the base may be reached; the basal glacier ice melts, and the glacier surges on a film of meltwater.
Rate of movement
The movement of glaciers is usually slow; their velocity varies from a few centimeters to a few meters per day. The rate of movement depends upon the factors listed below (a rough quantitative sketch follows the list):
Temperature of the ice. A polar glacier shows cold ice with temperatures well below the freezing point from its surface to its base. It is frozen to its bed. A temperate glacier is at a melting point temperature throughout the year, from its surface to its base. This allows the glacier to slide on a thin layer of meltwater. Most glaciers in alpine regions are temperate glaciers.
Gradient of the slope.
Thickness of the glacier.
Subglacial water dynamics.
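To see how slope and thickness enter quantitatively, Glen's flow law, a standard glaciological relation not introduced in the text above, gives a rough deformation velocity; the thickness, slope, and flow parameters in this Python sketch are illustrative assumptions:

    import math

    rho = 917.0    # ice density, kg/m^3
    g = 9.81       # gravitational acceleration, m/s^2
    A = 2.4e-24    # flow-rate factor for temperate ice, Pa^-3 s^-1
    n = 3          # Glen's flow-law exponent

    H = 200.0                # ice thickness, m (assumed)
    alpha = math.radians(5)  # surface slope (assumed)

    tau_b = rho * g * H * math.sin(alpha)   # basal shear stress, Pa
    u = 2 * A / (n + 1) * tau_b**n * H      # surface velocity from internal deformation, m/s
    print(f"{u * 86400 * 100:.1f} cm/day")  # ~8 cm/day, within the range quoted above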
Glacial terminology
Ablation – Wastage of the glacier through sublimation, ice melting and iceberg calving.
Ablation zone – Area of a glacier in which the annual loss of ice through ablation exceeds the annual gain from precipitation.
Arête – An acute ridge of rock where two cirques meet.
Bergschrund – Crevasse formed near the head of a glacier, where the mass of ice has rotated, sheared and torn itself apart in the manner of a geological fault.
Cirque, corrie or cwm – Bowl-shaped depression excavated by the source of a glacier.
Creep – Adjustment to stress at a molecular level.
Flow – Movement (of ice) in a constant direction.
Fracture – Brittle failure (breaking of ice) under the stress raised when movement is too rapid to be accommodated by creep. It happens, for example, as the central part of a glacier moves faster than the edges.
Glacial landform – Collective name for the morphologic structures in/on/under/around a glacier.
Moraine – Accumulated debris that has been carried by a glacier and deposited at its sides (lateral moraine) or at its foot (terminal moraine).
Névé – Area at the top of a glacier (often a cirque) where snow accumulates and feeds the glacier.
Nunatak/Rognon/Glacial island – Visible peak of a mountain otherwise covered by a glacier.
Horn – Spire of rock, also known as a pyramidal peak, formed by the headward erosion of three or more cirques around a single mountain. It is an extreme case of an arête.
Plucking/Quarrying – Where the adhesion of the ice to the rock is stronger than the cohesion of the rock, part of the rock leaves with the flowing ice.
Tarn – A post-glacial lake in a cirque.
Tunnel valley – The tunnel formed by hydraulic erosion of ice and rock below an ice sheet margin; the tunnel valley is what remains of it in the underlying rock when the ice sheet has melted.
Glacial deposits
Stratified
Outwash sand/gravel – From the front of glaciers, found on a plain.
Kettles – Where a block of stagnant ice melts, it leaves a depression or pit.
Eskers – Steep-sided ridges of gravel/sand, possibly caused by streams running under stagnant ice.
Kames – Stratified drift that builds up low, steep hills.
Varves – Alternating thin sedimentary beds (coarse and fine) of a proglacial lake. Summer conditions deposit more and coarser material; those of the winter, less and finer.
Unstratified
Till – Unsorted material (glacial flour to boulders) deposited by receding/advancing glaciers, forming moraines and drumlins.
Moraines – (Terminal) material deposited at the end; (ground) material deposited as the glacier melts; (lateral) material deposited along the sides.
Drumlins – Smooth elongated hills composed of till.
Ribbed moraines – Large subglacial elongated hills transverse to former ice flow.
| Physical sciences | Glaciology | Earth science |
231630 | https://en.wikipedia.org/wiki/Plasmon | Plasmon | In physics, a plasmon is a quantum of plasma oscillation. Just as light (an optical oscillation) consists of photons, the plasma oscillation consists of plasmons. The plasmon can be considered as a quasiparticle since it arises from the quantization of plasma oscillations, just like phonons are quantizations of mechanical vibrations. Thus, plasmons are collective (a discrete number) oscillations of the free electron gas density. For example, at optical frequencies, plasmons can couple with a photon to create another quasiparticle called a plasmon polariton.
The field of study and manipulation of plasmons is called plasmonics.
Derivation
The plasmon was initially proposed in 1952 by David Pines and David Bohm and was shown to arise from a Hamiltonian for the long-range electron-electron correlations.
Since plasmons are the quantization of classical plasma oscillations, most of their properties can be derived directly from Maxwell's equations.
Explanation
Plasmons can be described in the classical picture as an oscillation of electron density with respect to the fixed positive ions in a metal. To visualize a plasma oscillation, imagine a cube of metal placed in an external electric field pointing to the right. Electrons will move to the left side (uncovering positive ions on the right side) until they cancel the field inside the metal. If the electric field is removed, the electrons move to the right, repelled by each other and attracted to the positive ions left bare on the right side. They oscillate back and forth at the plasma frequency until the energy is lost in some kind of resistance or damping. Plasmons are a quantization of this kind of oscillation.
Role
Plasmons play a huge role in the optical properties of metals and semiconductors. Frequencies of light below the plasma frequency are reflected by a material because the electrons in the material screen the electric field of the light. Light of frequencies above the plasma frequency is transmitted by a material because the electrons in the material cannot respond fast enough to screen it. In most metals, the plasma frequency is in the ultraviolet, making them shiny (reflective) in the visible range. Some metals, such as copper and gold, have electronic interband transitions in the visible range, whereby specific light energies (colors) are absorbed, yielding their distinct color. In semiconductors, the valence electron plasmon frequency is usually in the deep ultraviolet, which is why they too are reflective, while their electronic interband transitions are in the visible range, whereby specific light energies (colors) are absorbed, yielding their distinct color. It has been shown that the plasmon frequency may occur in the mid-infrared and near-infrared region when semiconductors are in the form of nanoparticles with heavy doping.
The plasmon energy can often be estimated in the free electron model as
E_p = \hbar \omega_p = \hbar \sqrt{\frac{n e^2}{m \varepsilon_0}}
where n is the conduction electron density, e is the elementary charge, m is the electron mass, \varepsilon_0 the permittivity of free space, \hbar the reduced Planck constant and \omega_p the plasmon frequency.
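For a concrete number, the estimate above can be evaluated directly; a minimal Python sketch, where the electron density is a typical copper-like value assumed for illustration rather than taken from the text:

    import math

    e = 1.602176634e-19      # elementary charge, C
    m = 9.1093837015e-31     # electron mass, kg
    eps0 = 8.8541878128e-12  # permittivity of free space, F/m
    hbar = 1.054571817e-34   # reduced Planck constant, J*s

    n = 8.5e28               # conduction electron density, m^-3 (copper-like; assumed)

    omega_p = math.sqrt(n * e**2 / (m * eps0))  # plasmon (angular) frequency, rad/s
    E_p_eV = hbar * omega_p / e                 # plasmon energy converted to eV
    print(f"omega_p = {omega_p:.2e} rad/s, E_p = {E_p_eV:.1f} eV")  # ~1.6e16 rad/s, ~10.8 eV

An energy of roughly 10 eV lies in the ultraviolet, consistent with the reflectivity argument in the previous section.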
Surface plasmons
Surface plasmons are those plasmons that are confined to surfaces and that interact strongly with light, resulting in a polariton. They occur at the interface of a material with a positive real part of its relative permittivity, i.e. dielectric constant (e.g. vacuum, air, glass and other dielectrics), and a material whose real part of the permittivity is negative at the given frequency of light, typically a metal or a heavily doped semiconductor. In addition to the opposite sign of the real part of the permittivity, the magnitude of the real part of the permittivity in the negative-permittivity region should typically be larger than the magnitude of the permittivity in the positive-permittivity region; otherwise the light is not bound to the surface (i.e. the surface plasmons do not exist), as shown in the famous book by Heinz Raether. At visible wavelengths of light, e.g. the 632.8 nm wavelength provided by a He-Ne laser, interfaces supporting surface plasmons are often formed by metals like silver or gold (negative real part of the permittivity) in contact with dielectrics such as air or silicon dioxide. The particular choice of materials can have a drastic effect on the degree of light confinement and the propagation distance, due to losses. Surface plasmons can also exist on interfaces other than flat surfaces, such as particles, or rectangular strips, v-grooves, cylinders, and other structures. Many structures have been investigated due to the capability of surface plasmons to confine light below the diffraction limit of light. One simple structure that was investigated was a multilayer system of copper and nickel. Mladenovic et al. report the use of the multilayers as if they were one plasmonic material. Oxidation of the copper layers is prevented by the addition of the nickel layers. Copper offers an easy path to the integration of plasmonics because it is, along with nickel, the most common choice for metallic plating. Up to 40 percent transmission can be achieved at normal incidence with the multilayer system, depending on the thickness ratio of copper to nickel. The use of already popular metals in a multilayer structure therefore proves to be a solution for plasmonic integration.
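The bound-mode condition above (a negative real metal permittivity whose magnitude exceeds that of the dielectric) can be checked numerically alongside the standard surface plasmon dispersion relation k_spp = (2π/λ)·sqrt(ε_m ε_d / (ε_m + ε_d)). A minimal Python sketch; the gold-like permittivity at 632.8 nm is an illustrative assumption, not measured data:

    import cmath

    def spp_wavevector(eps_metal, eps_diel, wavelength):
        """Complex SPP wavevector at a metal-dielectric interface (SI units)."""
        # Existence condition for a bound surface mode, as discussed by Raether:
        if not (eps_metal.real < 0 and abs(eps_metal.real) > eps_diel.real):
            raise ValueError("no bound surface plasmon for these permittivities")
        k0 = 2 * cmath.pi / wavelength  # free-space wavevector
        return k0 * cmath.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))

    # Gold-like permittivity against air at the He-Ne wavelength mentioned above.
    k = spp_wavevector(eps_metal=-11.6 + 1.2j, eps_diel=1.0 + 0j, wavelength=632.8e-9)
    print(k)  # the imaginary part of the wavevector sets the propagation loss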
Surface plasmons can play a role in surface-enhanced Raman spectroscopy and in explaining anomalies in diffraction from metal gratings (Wood's anomaly), among other things. Surface plasmon resonance is used by biochemists to study the mechanisms and kinetics of ligands binding to receptors (i.e. a substrate binding to an enzyme). Multi-parametric surface plasmon resonance can be used not only to measure molecular interactions but also nanolayer properties or structural changes in the adsorbed molecules, polymer layers or graphene, for instance.
Surface plasmons may also be observed in the X-ray emission spectra of metals. A dispersion relation for surface plasmons in the X-ray emission spectra of metals has been derived (Harsh and Agarwal).
More recently, surface plasmons have been used to control colors of materials. This is possible since controlling the particle's shape and size determines the types of surface plasmons that can couple into it and propagate across it. This, in turn, controls the interaction of light with the surface. These effects are illustrated by the historic stained glass which adorns medieval cathedrals. Some stained glass colors are produced by metal nanoparticles of a fixed size which interact with the optical field to give the glass a vibrant red color. In modern science, these effects have been engineered for both visible light and microwave radiation. Much research takes place first in the microwave range because, at this wavelength, material surfaces and samples can be produced mechanically, as the patterns tend to be on the order of a few centimeters. Producing optical-range surface plasmon effects involves making surfaces with features smaller than 400 nm. This is much more difficult and has only recently become possible to do in any reliable or available way.
Recently, graphene has also been shown to accommodate surface plasmons, observed via near-field infrared optical microscopy techniques and infrared spectroscopy. Potential applications of graphene plasmonics mainly address the terahertz to mid-infrared frequencies, in devices such as optical modulators, photodetectors and biosensors.
Possible applications
The position and intensity of plasmon absorption and emission peaks are affected by molecular adsorption, which can be used in molecular sensors. For example, a fully operational device detecting casein in milk has been prototyped, based on detecting a change in absorption of a gold layer. Localized surface plasmons of metal nanoparticles can be used for sensing different types of molecules, proteins, etc.
Plasmons are being considered as a means of transmitting information on computer chips, since plasmons can support much higher frequencies (into the 100 THz range, whereas conventional wires become very lossy in the tens of GHz). However, for plasmon-based electronics to be practical, a plasmon-based amplifier analogous to the transistor, called a plasmonstor, needs to be created.
Plasmons have also been proposed as a means of high-resolution lithography and microscopy due to their extremely small wavelengths; both of these applications have seen successful demonstrations in the lab environment.
Finally, surface plasmons have the unique capacity to confine light to very small dimensions, which could enable many new applications.
Surface plasmons are very sensitive to the properties of the materials on which they propagate. This has led to their use in measuring the thickness of monolayers on colloid films, such as screening and quantifying protein binding events. Companies such as Biacore have commercialized instruments that operate on these principles. Optical surface plasmons are being investigated by L'Oréal and others with a view to improving makeup.
In 2009, a Korean research team found a way to greatly improve organic light-emitting diode efficiency with the use of plasmons.
A group of European researchers led by IMEC began work to improve solar cell efficiencies and costs through incorporation of metallic nanostructures (using plasmonic effects) that can enhance absorption of light into different types of solar cells: crystalline silicon (c-Si), high-performance III-V, organic, and dye-sensitized. However, for plasmonic photovoltaic devices to function optimally, ultra-thin transparent conducting oxides are necessary.
Full color holograms using plasmonics have been demonstrated.
Plasmon-soliton
Plasmon-soliton mathematically refers to a hybrid solution of a nonlinear amplitude equation, e.g. for a metal-nonlinear medium, that accounts for both the plasmon mode and a solitary solution. A soliplasmon resonance, on the other hand, is considered a quasiparticle combining the surface plasmon mode with a spatial soliton as a result of a resonant interaction. To achieve one-dimensional solitary propagation in a plasmonic waveguide, the surface plasmons should remain localized at the interface while the lateral distribution of the field envelope remains unchanged.
A graphene-based waveguide is a suitable platform for supporting hybrid plasmon-solitons due to its large effective area and huge nonlinearity. For example, the propagation of solitary waves in a graphene-dielectric heterostructure may appear in the form of higher-order solitons or discrete solitons resulting from the competition between diffraction and nonlinearity.
| Physical sciences | Basics_2 | Physics |
231728 | https://en.wikipedia.org/wiki/Humpback%20whale | Humpback whale | The humpback whale (Megaptera novaeangliae) is a species of baleen whale. It is a rorqual (a member of the family Balaenopteridae) and is the only species in the genus Megaptera. Adults range in length from and weigh up to . The humpback has a distinctive body shape, with long pectoral fins and tubercles on its head. It is known for breaching and other distinctive surface behaviors, making it popular with whale watchers. Males produce a complex song that typically lasts from 4 to 33 minutes.
Found in oceans and seas around the world, humpback whales typically migrate up to each year. They feed in polar waters and migrate to tropical or subtropical waters to breed and give birth. Their diet consists mostly of krill and small fish, and they usually use bubbles to catch prey. They are polygynandrous breeders, with both sexes having multiple partners. Orcas are the main natural predators of humpback whales. The bodies of humpbacks host barnacles and whale lice.
Like other large whales, the humpback was a target for the whaling industry. Humans once hunted the species to the brink of extinction: its population fell to around 5,000 by the 1960s. Numbers have partially recovered to some 135,000 animals worldwide, but entanglement in fishing gear, collisions with ships, and noise pollution continue to affect the species.
Taxonomy
The humpback was first identified as by Mathurin Jacques Brisson in his Regnum Animale of 1756. In 1781, Georg Heinrich Borowski described the species, converting Brisson's name to its Latin equivalent, Balaena novaeangliae. In 1804, Bernard Germain de Lacépède renamed it B. jubartes. In 1846, John Edward Gray created the genus Megaptera, classifying the humpback as Megaptera longipinna, but in 1932, Remington Kellogg reverted the species name to use Borowski's novaeangliae. The common name is derived from the curving of the whales' backs when diving. The genus name, Megaptera, from the Ancient Greek mega- ("giant") and ptera ("wing"), refers to their large front flippers. The species name means "New Englander" and was probably given by Brisson due to regular sightings of humpbacks off the coast of New England.
Humpback whales are rorquals, members of the family Balaenopteridae, which includes the blue, fin, Bryde's, sei, and minke whales. A 2018 genomic analysis estimated that rorquals diverged from other baleen whales in the late Miocene, between 10.5 and 7.5 million years ago. The humpback and fin whales were found to be sister taxa (see the phylogenetic tree below). There is reference to a humpback–blue whale hybrid in the South Pacific, attributed to marine biologist Michael Poole.
Modern humpback whale populations originated in the southern hemisphere around 880,000 years ago and colonized the northern hemisphere 200,000 to 50,000 years ago. A 2014 genetic study suggested that the separate populations in the North Atlantic, North Pacific, and Southern Oceans have had limited gene flow and are distinct enough to be subspecies, with the scientific names of M. n. novaeangliae, M. n. kuzira, and M. n. australis, respectively. A non-migratory population in the Arabian Sea has been isolated for 70,000 years.
Characteristics
The adult humpback whale is generally long, though individuals up to long have been recorded. Females are usually longer than males. The species can reach body masses of . Calves are born at around long with a mass of . The species has a bulky body with a thin rostrum and proportionally long flippers, each around one-third of its body length. It has a short dorsal fin that varies from nearly nonexistent to somewhat long and curved. Like other rorquals, the humpback has grooves between the tip of the lower jaw and the navel. The grooves are relatively few in number in this species, ranging from 14 to 35. The upper jaw is lined with baleen plates, which number 540–800 in total and are black in color.
The dorsal or upper side of the animal is generally black; the ventral or underside has various levels of black and white coloration. Whales in the southern hemisphere tend to have more white pigmentation. The flippers can vary from all-white to white only on the undersurface. Some individuals may be all white, notably Migaloo, who is a true albino. The varying color patterns and scars on the tail flukes distinguish individual animals. The end of the genital slit of the female is marked by a round feature, known as the hemispherical lobe, which visually distinguishes males and females.
Unique among large whales, humpbacks have bumps or tubercles on the head and front edge of the flippers; the tail fluke has a jagged trailing edge. The tubercles on the head are thick at the base and protrude up to . They are mostly hollow in the center, often containing at least one fragile hair that erupts from the skin and is thick. The tubercles develop early in gestation and may have a sensory function, as they are rich in nerves. Sensory nerve cells in the skin are adapted to withstand the high water pressure of diving.
In one study, a humpback whale brain measured long and wide at the tips of the temporal lobes, and weighed around . The humpback's brain has a complexity similar to that of the brains of smaller whales and dolphins. Computer models of the middle ear suggest that the humpback can hear at frequencies between 15 Hz and 3 kHz "when stimulated at the tympanic membrane", and between 200 Hz and 9 kHz "if stimulated at the thinner region of the tympanic bone adjacent to the tympanic membrane". These ranges are consistent with their vocalization ranges. As in all cetaceans, the respiratory tract of the humpback whale is connected to the blowholes and not to the mouth, although the species appears to be able to unlock the epiglottis and larynx and move them towards the oral cavity, allowing humpbacks to blow bubbles from their mouths. The vocal folds of the humpback are more horizontally positioned than those of land mammals, which allows them to produce underwater calls. These calls are amplified by a laryngeal sac.
Behavior and ecology
Humpback whale groups, aside from mothers and calves, typically last for days or weeks at the most. They are normally sighted in small groups though large aggregations form during feeding and among males competing for females. Humpbacks may interact with other cetacean species, such as right whales, fin whales, and bottlenose dolphins. Humpbacks are highly active at the surface, performing aerial behaviors such as breaching, surface slapping with the tail fluke (lobtailing) and flippers, and peduncle throws, which involve the tail crashing sideways on the surface. These may be forms of play and communication and/or for removing parasites. The species is a slower swimmer than other rorquals, cruising at . When threatened, a humpback may speed up to . Their proportionally long pectoral fins give them great propulsion and allow them to swim in any direction independently of the movements of the tail fluke. Humpbacks are able to flap and rotate their flippers in a manner similar to California sea lions.
Humpbacks rest at the surface with their bodies lying horizontally. They frequent shallow seamounts, commonly exploring depths of up to 80 meters (260 feet) and occasionally venturing into deep dives reaching up to 616 meters (2,020 feet). These deeper descents are believed to serve various purposes, including navigational guidance, communication with fellow humpback whales, and facilitation of feeding activities. Dives typically do not exceed five minutes during the summer but are normally 15–20 minutes during the winter. As it dives, a humpback typically raises its tail fluke, exposing the underside. Humpbacks have been observed to produce oral "bubble clouds" when near another individual, possibly in the context of "aggression, mate attraction, or play". Humpbacks may also use bubble clouds as "smoke screens" to escape from predators.
Feeding
Humpback whales feed from spring to fall. They are generalist feeders, their main food items being krill, copepods, other plankton and small schooling fish. The most common krill species eaten in the southern hemisphere is the Antarctic krill. Further north, the northern krill and various species of Euphausia and Thysanoessa are consumed. Fish prey include herring, capelin, sand lances and Atlantic mackerel. Like other rorquals, humpbacks are "gulp feeders", swallowing prey in bulk, while right whales and bowhead whales are skimmers. The whale increases its mouth gape by expanding the grooves. Water is pushed out through the baleen.
In the southern hemisphere, humpbacks have been recorded foraging in large compact gatherings numbering up to 200 individuals. A study undertaken in May 2009 found a super-aggregation of krill in Wilhelmina Bay, with a large number of humpback whales feeding on them. The researchers counted a density of 5.1 whales per square kilometer. Smaller and less dense aggregations of krill and whales were also found in Andvord Bay to the south. The krill and the whales are abundant in late autumn along the western Antarctic Peninsula, particularly in Wilhelmina Bay, where the whales seem to be eating as much as possible in preparation for the winter.
Humpbacks typically hunt their prey with bubble-nets, which is considered to be a form of tool use. A group swims in a shrinking circle while blowing air from their blowholes, capturing prey above in a cylinder of bubbles. They may dive up to while performing this technique. Bubble-netting comes in two main forms: upward spirals and double loops. Upward spirals involve the whales blowing air from their blowholes continuously as they circle towards the surface, creating a spiral of bubbles. Double loops consist of a deep, long loop of bubbles that herds the prey, followed by slapping the surface and then a smaller loop that prepares the final capture. Combinations of spiraling and looping have been recorded. After the humpbacks create the "nets", the whales swim into them with their mouths gaping and ready to swallow. Bubble-net feeding has also been observed in solitary humpbacks, which can consume more food per mouthful without tiring, particularly with low-density prey patches.
Using network-based diffusion analysis, one study argued that whales learned lobtailing from other whales in the group over 27 years in response to a change in primary prey. The tubercles on the flippers delay stall at high angles of attack, which both maximizes lift and minimizes drag (see tubercle effect). This, along with the shape of the flippers, allows the whales to make the abrupt turns necessary during bubble-feeding.
At Stellwagen Bank off the coast of Massachusetts, humpback whales have been recorded foraging at the seafloor for sand lances. This involves the whales flushing out the fish by brushing their jaws against the bottom.
Courtship and reproduction
Mating takes place during the winter months, which is when females reach estrus and males reach peak testosterone and sperm levels. Humpback whales are polygynandrous, with both sexes having multiple partners. Males frequently trail both lone females and cow–calf pairs. These males are known as "escorts"; the male that is closest to the female is known as the "principal escort", and fights off the other suitors, known as "challengers". Other males, called "secondary escorts", trail farther behind and are not directly involved in the conflict. Agonistic behavior between males consists of tail slashing, ramming, and head-butting. Males have also been observed engaging in copulation with each other.
Gestation in the species lasts 11.5 months, and females reproduce every two years. Fetuses start out with teeth and develop their baleen during the last months of their gestation. Humpback whale births have rarely been observed by humans. One birth witnessed off Madagascar occurred within four minutes. Mothers typically give birth in mid-winter, usually to a single calf. Before birth, a mother whale will move to shallower waters near the coast, which reduces her chances of being harassed by escort males. It is common for the mother to help her newborn calf reach the surface. Young start out with furled dorsal fins, which straighten and stiffen as they get older. Calves with furled fins spend more time traveling and surfacing to breathe; calves with straighter fins can hold their breath longer and can rest and circle around at the surface more. Older calves are away from their mothers more than younger calves. Calves suckle for up to a year but can eat adult food at six months. Humpbacks are sexually mature at 5–10 years, depending on the population. Humpback whales possibly live for over 50 years.
Vocalizations
Male humpback whales produce complex songs during the winter breeding season. These vocals range in frequency between 100 Hz and 4 kHz, with harmonics reaching up to 24 kHz or more, and can travel at least . Males may sing for between 4 and 33 minutes, depending on the region. In Hawaii, humpback whales have been recorded vocalizing for as long as seven hours. Songs are divided into layers: "subunits", "units", "subphrases", "phrases" and "themes". A subunit refers to the discontinuities or inflections of a sound while full units are individual sounds, similar to musical notes. A succession of units creates a subphrase, and a collection of subphrases make up a phrase. Similar-sounding phrases are repeated in a series grouped into themes, and multiple themes create a song.
The function of these songs has been debated, but they may have multiple purposes. There is little evidence to suggest that songs establish dominance among males. However, there have been observations of non-singing males disrupting singers, possibly in aggression. Those who join singers are males who were not previously singing. Females do not appear to approach singers that are alone, but may be drawn to gatherings of singing males, much like a lek mating system. Another possibility is that songs bring in foreign whales to populate the breeding grounds. It has also been suggested that humpback whale songs have echolocating properties and may serve to locate other whales. A 2023 study found that as humpback whale numbers have recovered from whaling, singing has become less common.
Whale songs are similar among males in a specific area. Males may alter their songs over time, and others in contact with them copy these changes. They have been shown in some cases to spread "horizontally" between neighboring populations throughout successive breeding seasons. In the northern hemisphere, songs change more gradually while southern hemisphere songs go through cyclical "revolutions".
Humpback whales are reported to make other vocalizations. "Snorts" are quick low-frequency sounds commonly heard among animals in groups consisting of a mother–calf pair and one or more male escort groups. These likely function in mediating interactions within these groups. "Grumbles" are also low in frequency but last longer and are more often made by groups with one or more adult males. They appear to signal body size and may serve to establish social status. "Thwops" and "wops" are frequency modulated vocals, and may serve as contact calls both within and between groups. High-pitched "cries" and "violins" and modulated "shrieks" are normally heard in groups with two or more males and are associated with competition. Humpback whales produce short, low-frequency "grunts" and short, modulated "barks" when joining new groups.
Predation
Visible scars indicate that orcas prey upon juvenile humpbacks. A 2014 study in Western Australia observed that when available in large numbers, young humpbacks can be attacked and sometimes killed by orcas. Moreover, mothers and (possibly related) adults escort calves to deter such predation. The suggestion is that when humpbacks suffered near-extinction during the whaling era, orcas turned to other prey but are now resuming their former practice. There is also evidence that humpback whales will defend against or mob killer whales that are attacking either humpback calves or juveniles as well as members of other species, including seals. The humpback's protection of other species may be unintentional, a "spillover" of mobbing behavior intended to protect members of its species. The powerful flippers of humpback whales, often infested with large, sharp barnacles, are formidable weapons against orcas. When threatened, they will thrash their flippers and tails, keeping the orcas at bay.
The great white shark is another confirmed predator of the humpback whale. In 2020, marine biologists Dines and Gennari et al. published a documented incident of a pair of great white sharks, striking within an hour of each other, attacking and killing a live adult humpback whale. A second incident of great white sharks killing a humpback whale was documented off the coast of South Africa. The shark recorded instigating the attack was a female nicknamed "Helen". Working alone, the shark attacked an emaciated and entangled humpback whale, biting the whale's tail to cripple and bleed it before drowning the whale by biting onto its head and pulling it underwater.
Infestations
Humpback whales often have barnacles living on their skin; the most common being the acorn barnacle species Coronula diadema and Coronula reginae, which in turn are sites for attachment for goose barnacle species like Conchoderma auritum and Conchoderma virgatum. They are most abundant at the lower jaw tip, along the middle ventral groove, near the genital slit and between the bumps on the flippers. C. reginae digs deep into the skin, while attachments by C. diadema are more superficial. The size of the latter species provides more sites for attachment for other barnacles. Barnacles are considered to be epibionts rather than parasites as they do not feed on the whales, though they can affect their swimming by increasing drag.
The whale louse species Cyamus boopis is specialized for feeding on humpback whales and is the only species in its family found on them. Internal parasites of humpbacks include protozoans of the genus Entamoeba, tapeworms of the family Diphyllobothriidae and roundworms of the infraorder Ascaridomorpha.
Range
Humpback whales are found in marine waters worldwide, except for some areas at the equator and High Arctic and some enclosed seas. The furthest north they have been recorded is at 81°N around northern Franz Josef Land. They are usually coastal and tend to congregate in waters within continental shelves. Their winter breeding grounds are located around the equator; their summer feeding areas are found in colder waters, including near the polar ice caps. Humpbacks go on vast migrations between their feeding and breeding areas, often crossing the open ocean. The species has been recorded traveling up to in one direction. An isolated, non-migratory population feeds and breeds in the northern Indian Ocean, mainly in the Arabian Sea around Oman. This population has also been recorded in the Gulf of Aden, the Persian Gulf, and off the coasts of Pakistan and India.
In the North Atlantic, there are two separate wintering populations, one in the West Indies, from Cuba to northern Venezuela, and the other in the Cape Verde Islands and northwest Africa. During summer, West Indies humpbacks congregate off New England, eastern Canada, and western Greenland, while the Cape Verde population gathers around Iceland and Norway. There is some overlap in the summer ranges of these populations, and West Indies humpbacks have been documented feeding further east. Whale visits to the Gulf of Mexico have been infrequent but have occurred historically. They were considered to be uncommon in the Mediterranean Sea, but increased sightings, including re-sightings, indicate that more whales may colonize or recolonize it in the future.
The North Pacific has at least four breeding populations: off Mexico (including Baja California and the Revillagigedos Islands), Central America, the Hawaiian Islands, and both Okinawa and the Philippines. The Mexican population forages from the Aleutian Islands to California. During the summer, Central American humpbacks are found only off Oregon and California. In contrast, Hawaiian humpbacks have a wide feeding range but most travel to southeast Alaska and northern British Columbia. The wintering grounds of the Okinawa/Philippines population are mainly around the Russian Far East. There is some evidence for a fifth population somewhere in the northwestern Pacific. These whales are recorded to feed off the Aleutians with a breeding area somewhere south of the Bonin Islands.
Southern Hemisphere
In the Southern Hemisphere, humpback whales are divided into seven breeding stocks, some of which are further divided into sub-structures. These include the southeastern Pacific (stock G), southwestern Atlantic (stock A), southeastern Atlantic (stock B), southwestern Indian Ocean (stock C), southeastern Indian Ocean (stock D), southwestern Pacific (stock E), and the Oceania stock (stocks E–F). Stock G breeds in tropical and subtropical waters off the west coast of Central and South America and forages along the west coast of the Antarctic Peninsula, the South Orkney Islands and, to a lesser extent, Tierra del Fuego in southern Chile. Stock A winters off Brazil and migrates to summer grounds around South Georgia and the South Sandwich Islands. Some stock A individuals have also been recorded off the western Antarctic Peninsula, suggesting an increased blurring of the boundaries between the feeding areas of stocks A and G.
Stock B breeds on the west coast of Africa and is further divided into B1 and B2 subpopulations, the former ranging from the Gulf of Guinea to Angola and the latter ranging from Angola to western South Africa. Stock B whales have been recorded foraging in waters to the southwest of the continent, mainly around Bouvet Island. Comparison of songs between those at Cape Lopez and Abrolhos Archipelago indicates that trans-Atlantic mixings between stock A and stock B whales occur. Stock C whales winter around southeastern Africa and surrounding waters. This stock is further divided into C1, C2, C3, and C4 subpopulations; C1 occurs around Mozambique and eastern South Africa, C2 around the Comoro Islands, C3 off the southern and eastern coast of Madagascar and C4 around the Mascarene Islands. The feeding range of this population is likely between 5°W and 60°E and south of 50°S. There may be overlap in the feeding areas of stocks B and C.
Stock D whales breed off the western coast of Australia, and forage in the southern region of the Kerguelen Plateau. Stock E is divided into E1, E2, and E3 stocks. E1 whales have a breeding range off eastern Australia and Tasmania; their main feeding range is close to Antarctica, mainly within 130°E and 170°W. The Oceania stock is divided into the New Caledonia (E2), Tonga (E3), Cook Islands (F1) and French Polynesia (F2) subpopulations. This stock's feeding grounds mainly range from around the Ross Sea to the Antarctic Peninsula.
Human relations
Whaling
Humpback whales were hunted as early as the late 16th century. They were often the first species to be harvested in an area due to their coastal distribution. North Pacific kills alone are estimated at 28,000 during the 20th century. In the same period, over 200,000 humpbacks were taken in the Southern Hemisphere. North Atlantic populations dropped to as low as 700 individuals. In 1946, the International Whaling Commission (IWC) was founded to oversee the industry. It imposed hunting regulations and created hunting seasons. To prevent extinction, the IWC banned commercial humpback whaling in 1966. By then, the global population had been reduced to around 5,000. The Soviet Union deliberately under-recorded its catches; the Soviets reported catching 2,820 between 1947 and 1972, but the true number was over 48,000.
As of 2004, hunting was restricted to a few animals each year off the Caribbean island of Bequia in Saint Vincent and the Grenadines. The take is not believed to threaten the local population. Japan had planned to kill 50 humpbacks in the 2007/08 season under its JARPA II research program. The announcement sparked global protests. After a visit to Tokyo by the IWC chair asking the Japanese for their co-operation in sorting out the differences between pro- and anti-whaling nations on the commission, the Japanese whaling fleet agreed to take no humpback whales during the two years it would take to reach a formal agreement. In 2010, the IWC authorized Greenland's native population to hunt a few humpback whales for the following three years.
Whale-watching
Much of the growth of commercial whale watching was built on the humpback whale. The species' highly active surface behaviors and tendency to become accustomed to boats have made them easy to observe, particularly for photographers. In 1975, humpback whale tours were established in New England and Hawaii. This business brings in a revenue of $20 million per year for Hawaii's economy. While Hawaiian tours have tended to be commercial, New England and California whale watching tours have introduced educational components.
Conservation status
As of 2018, the IUCN Red List lists the humpback whale as least-concern, with a worldwide population of around 135,000 whales, of which around 84,000 are mature individuals, and an increasing population trend. Regional estimates are around 13,000 in the North Atlantic, 21,000 in the North Pacific, and 80,000 in the southern hemisphere. For the isolated population in the Arabian Sea, only around 80 individuals remain, and this population is considered to be endangered. In most areas, humpback whale populations have recovered from historic whaling, particularly in the North Pacific. Such recoveries have led to the downlisting of the species' threatened status in the United States, Canada, and Australia. In Costa Rica, Ballena Marine National Park was established for humpback protection.
Humpbacks still face various other man-made threats, including entanglement by fishing gear, vessel collisions, human-caused noise and traffic disturbance, coastal habitat destruction, and climate change. Like other cetaceans, humpbacks can be injured by excessive noise. Two humpback whales have been found dead near repeated oceanic sub-bottom blasting sites, with traumatic injuries and fractures in the ears. Saxitoxin, a paralytic shellfish toxin ingested via contaminated mackerel, has been implicated in humpback whale deaths. While oil ingestion is a risk for whales, a 2019 study found that oil did not foul baleen and instead was easily rinsed by flowing water.
Whale researchers along the Atlantic Coast report that there have been more stranded whales with signs of vessel strikes and fishing gear entanglement in recent years than ever before. NOAA recorded 88 stranded humpback whales between January 2016 and February 2019. This is more than double the number of whales stranded between 2013 and 2016. Because of the increase in stranded whales, NOAA declared an unusual mortality event in April 2017. Virginia Beach Aquarium's stranding response coordinator, Alexander Costidis, concluded that the two causes of these unusual mortality events were vessel interactions and entanglements.
| Biology and health sciences | Cetaceans | null |
231826 | https://en.wikipedia.org/wiki/Seismometer | Seismometer | A seismometer is an instrument that responds to ground displacement and shaking such as caused by quakes, volcanic eruptions, and explosions. They are usually combined with a timing device and a recording device to form a seismograph. The output of such a device—formerly recorded on paper (see picture) or film, now recorded and processed digitally—is a seismogram. Such data is used to locate and characterize earthquakes, and to study the internal structure of Earth.
Basic principles
A simple seismometer, sensitive to up-down motions of the Earth, is like a weight hanging from a spring, both suspended from a frame that moves along with any motion detected. The relative motion between the weight (called the mass) and the frame provides a measurement of the vertical ground motion. A rotating drum is attached to the frame and a pen is attached to the weight, thus recording any ground motion in a seismogram.
Any movement from the ground moves the frame. The mass tends not to move because of its inertia, and by measuring the movement between the frame and the mass, the motion of the ground can be determined.
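As a rough illustration of this relative-motion principle, the sketch below simulates the displacement of a spring-suspended mass relative to its frame under an assumed sinusoidal ground acceleration; the natural frequency, damping ratio, and input signal are all hypothetical values chosen for illustration, not those of any real instrument.

```python
import numpy as np

# Minimal sketch, assumed parameters: the relative displacement x of a
# damped spring-mass system inside a frame that follows the ground obeys
#   x'' + 2*zeta*w0*x' + w0**2 * x = -u''(t),
# where u''(t) is the ground acceleration.
w0 = 2 * np.pi * 1.0       # 1 Hz natural frequency (assumed)
zeta = 0.7                 # damping ratio, near critical (assumed)
dt, n = 1e-3, 20000
t = np.arange(n) * dt
ground_acc = np.sin(2 * np.pi * 5.0 * t)   # assumed 5 Hz ground shaking

x, v = 0.0, 0.0
trace = np.empty(n)
for i in range(n):         # simple explicit Euler time stepping
    a = -ground_acc[i] - 2 * zeta * w0 * v - w0**2 * x
    v += a * dt
    x += v * dt
    trace[i] = x           # what the pen records: mass relative to frame
```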
Early seismometers used optical levers or mechanical linkages to amplify the small motions involved, recording on soot-covered paper or photographic paper. Modern instruments use electronics. In some systems, the mass is held nearly motionless relative to the frame by an electronic negative feedback loop. The motion of the mass relative to the frame is measured, and the feedback loop applies a magnetic or electrostatic force to keep the mass nearly motionless. The voltage needed to produce this force is the output of the seismometer, which is recorded digitally.
In other systems the weight is allowed to move, and its motion induces an electrical current in a coil attached to the mass as the coil moves through the magnetic field of a magnet attached to the frame. This design is often used in a geophone, which is used in exploration for oil and gas.
Seismic observatories usually have instruments measuring three axes: north-south (y-axis), east–west (x-axis), and vertical (z-axis). If only one axis is measured, it is usually the vertical because it is less noisy and gives better records of some seismic waves.
The foundation of a seismic station is critical. A professional station is sometimes mounted on bedrock. The best mountings may be in deep boreholes, which avoid thermal effects, ground noise and tilting from weather and tides. Other instruments are often mounted in insulated enclosures on small buried piers of unreinforced concrete. Reinforcing rods and aggregates would distort the pier as the temperature changes. A site is always surveyed for ground noise with a temporary installation before pouring the pier and laying conduit. Originally, European seismographs were placed in a particular area after a destructive earthquake. Today, they are spread to provide appropriate coverage (in the case of weak-motion seismology) or concentrated in high-risk regions (strong-motion seismology).
Nomenclature
The word derives from the Greek σεισμός, seismós, a shaking or quake, from the verb σείω, seíō, to shake; and μέτρον, métron, to measure, and was coined by David Milne-Home in 1841, to describe an instrument designed by Scottish physicist James David Forbes.
Seismograph is another Greek term from seismós and γράφω, gráphō, to draw. It is often used to mean seismometer, though it is more applicable to the older instruments in which the measuring and recording of ground motion were combined, than to modern systems, in which these functions are separated. Both types provide a continuous record of ground motion; this record distinguishes them from seismoscopes, which merely indicate that motion has occurred, perhaps with some simple measure of how large it was.
The technical discipline concerning such devices is called seismometry, a branch of seismology.
The concept of measuring the "shaking" of something means that the word "seismograph" might be used in a more general sense. For example, a monitoring station that tracks changes in electromagnetic noise affecting amateur radio waves presents an "RF seismograph". And helioseismology studies the "quakes" on the Sun.
History
The first seismometer was made in China during the 2nd century. It was invented by Zhang Heng, a Chinese mathematician and astronomer. The first Western description of the device comes from the French physicist and priest Jean de Hautefeuille in 1703. The modern seismometer was developed in the 19th century.
Seismometers were placed on the Moon starting in 1969 as part of the Apollo Lunar Surface Experiments Package. In December 2018, a seismometer was deployed on the planet Mars by the InSight lander, the first time a seismometer was placed onto the surface of another planet.
Ancient era
In Ancient Egypt, Amenhotep, son of Hapu, invented a precursor of the seismometer: vertical wooden poles connected by wooden gutters on a central axis, which filled a vessel with water until full in order to detect earthquakes.
In AD 132, Zhang Heng of China's Han dynasty is said to have invented the first seismoscope (by the definition above), which was called Houfeng Didong Yi (translated as, "instrument for measuring the seasonal winds and the movements of the Earth"). The description we have, from the History of the Later Han Dynasty, says that it was a large bronze vessel, about 2 meters in diameter; at eight points around the top were dragon's heads holding bronze balls. When there was an earthquake, one of the dragons' mouths would open and drop its ball into a bronze toad at the base, making a sound and supposedly showing the direction of the earthquake. On at least one occasion, probably at the time of a large earthquake in Gansu in AD 143, the seismoscope indicated an earthquake even though one was not felt. The available text says that inside the vessel was a central column that could move along eight tracks; this is thought to refer to a pendulum, though it is not known exactly how this was linked to a mechanism that would open only one dragon's mouth. The first earthquake recorded by this seismoscope was supposedly "somewhere in the east". Days later, a rider from the east reported this earthquake.
Early designs (1259–1839)
By the 13th century, seismographic devices existed in the Maragheh observatory (founded 1259) in Persia, though it is unclear whether these were constructed independently or based on the first seismoscope. French physicist and priest Jean de Hautefeuille described a seismoscope in 1703, which used a bowl filled with mercury which would spill into one of eight receivers equally spaced around the bowl, though there is no evidence that he actually constructed the device. A mercury seismoscope was constructed in 1784 or 1785 by Atanasio Cavalli, a copy of which can be found at the University Library in Bologna, and a further mercury seismoscope was constructed by Niccolò Cacciatore in 1818. James Lind also built a seismological tool of unknown design or efficacy (known as an earthquake machine) in the late 1790s.
Pendulum devices were being developed at the same time. Neapolitan naturalist Nicola Cirillo set up a network of pendulum earthquake detectors following the 1731 Puglia Earthquake, where the amplitude was detected using a protractor to measure the swinging motion. Benedictine monk Andrea Bina further developed this concept in 1751, having the pendulum create trace marks in sand under the mechanism, providing both magnitude and direction of motion. Neapolitan clockmaker Domenico Salsano produced a similar pendulum which recorded using a paintbrush in 1783, labelling it a geo-sismometro, possibly the first use of a similar word to seismometer. Naturalist Nicolo Zupo devised an instrument to detect electrical disturbances and earthquakes at the same time (1784).
The first moderately successful device for detecting the time of an earthquake was devised by Ascanio Filomarino in 1796, who improved upon Salsano's pendulum instrument, using a pencil to mark, and using a hair attached to the mechanism to inhibit the motion of a clock's balance wheel. This meant that the clock would only start once an earthquake took place, allowing determination of the time of incidence.
After an earthquake taking place on October 4, 1834, Luigi Pagani observed that the mercury seismoscope held at Bologna University had completely spilled over, and did not provide useful information. He therefore devised a portable device that used lead shot to detect the direction of an earthquake, where the lead fell into four bins arranged in a circle, to determine the quadrant of earthquake incidence. He completed the instrument in 1841.
Early Modern designs (1839–1880)
In response to a series of earthquakes near Comrie in Scotland in 1839, a committee was formed in the United Kingdom in order to produce better detection devices for earthquakes. The outcome of this was an inverted pendulum seismometer constructed by James David Forbes, first presented in a report by David Milne-Home in 1842, which recorded the measurements of seismic activity through the use of a pencil placed on paper above the pendulum. The designs provided did not prove effective, according to Milne's reports. It was Milne who coined the word seismometer in 1841, to describe this instrument. In 1843, the first horizontal pendulum was used in a seismometer, reported by Milne (though it is unclear if he was the original inventor). After these inventions, Robert Mallet published an 1848 paper where he suggested ideas for seismometer design, suggesting that such a device would need to register time, record amplitudes horizontally and vertically, and ascertain direction. His suggested design was funded, and construction was attempted, but his final design did not fulfill his expectations and suffered from the same problems as the Forbes design, being inaccurate and not self-recording.
Karl Kreil constructed a seismometer in Prague between 1848 and 1850, which used a point-suspended rigid cylindrical pendulum covered in paper, drawn upon by a fixed pencil. The cylinder was rotated every 24 hours, providing an approximate time for a given quake.
Luigi Palmieri, influenced by Mallet's 1848 paper, invented a seismometer in 1856 that could record the time of an earthquake. This device used metallic pendulums which closed an electric circuit with vibration, which then powered an electromagnet to stop a clock. Palmieri seismometers were widely distributed and used for a long time.
By 1872, a committee in the United Kingdom led by James Bryce expressed their dissatisfaction with the current available seismometers, still using the large 1842 Forbes device located in Comrie Parish Church, and requested a seismometer which was compact, easy to install and easy to read. In 1875 they settled on a large example of the Mallet device, consisting of an array of cylindrical pins of various sizes installed at right angles to each other on a sand bed, where larger earthquakes would knock down larger pins. This device was constructed in 'Earthquake House' near Comrie, which can be considered the world's first purpose-built seismological observatory. As of 2013, no earthquake has been large enough to cause any of the cylinders to fall in either the original device or replicas.
The first seismographs (1880-)
The first seismographs were invented in the 1870s and 1880s. The first seismograph was produced by Filippo Cecchi in around 1875. A seismoscope would trigger the device to begin recording, and then a recording surface would produce a graphical illustration of the tremors automatically (a seismogram). However, the instrument was not sensitive enough, and the first seismogram produced by the instrument was in 1887, by which time John Milne had already demonstrated his design in Japan.
In 1880, the first horizontal pendulum seismometer was developed by the team of John Milne, James Alfred Ewing and Thomas Gray, who worked as foreign-government advisors in Japan, from 1880 to 1895. Milne, Ewing and Gray, all having been hired by the Meiji Government in the previous five years to assist Japan's modernization efforts, founded the Seismological Society of Japan in response to an earthquake that took place on February 22, 1880, at Yokohama (the Yokohama earthquake). Two instruments were constructed by Ewing over the next year, one being a common-pendulum seismometer and the other being the first seismometer using a damped horizontal pendulum. The innovative recording system allowed for a continuous record, the first to do so. The first seismogram was recorded on 3 November 1880 on both of Ewing's instruments. Modern seismometers would eventually descend from these designs. Milne has been referred to as the 'Father of modern seismology' and his seismograph design has been called the first modern seismometer.
This provided the first effective measurement of horizontal motion. Gray would develop the first reliable method for recording vertical motion, which enabled the first effective 3-axis recordings.
An early special-purpose seismometer consisted of a large, stationary pendulum, with a stylus on the bottom. As the earth started to move, the heavy mass of the pendulum had the inertia to stay still within the frame. The result is that the stylus scratched a pattern corresponding with the Earth's movement. This type of strong-motion seismometer recorded upon a smoked glass (glass with carbon soot). While not sensitive enough to detect distant earthquakes, this instrument could indicate the direction of the pressure waves and thus help find the epicenter of a local quake. Such instruments were useful in the analysis of the 1906 San Francisco earthquake. Further analysis was performed in the 1980s, using these early recordings, enabling a more precise determination of the initial fault break location in Marin county and its subsequent progression, mostly to the south.
Later, professional suites of instruments for the worldwide standard seismographic network had one set of instruments tuned to oscillate at fifteen seconds, and the other at ninety seconds, each set measuring in three directions. Amateurs or observatories with limited means tuned their smaller, less sensitive instruments to ten seconds.
The basic damped horizontal pendulum seismometer swings like the gate of a fence. A heavy weight is mounted on the point of a long (from 10 cm to several meters) triangle, hinged at its vertical edge. As the ground moves, the weight stays unmoving, swinging the "gate" on the hinge.
The advantage of a horizontal pendulum is that it achieves very low frequencies of oscillation in a compact instrument. The "gate" is slightly tilted, so the weight tends to slowly return to a central position. The pendulum is adjusted (before the damping is installed) to oscillate once per three seconds, or once per thirty seconds. The general-purpose instruments of small stations or amateurs usually oscillate once per ten seconds. A pan of oil is placed under the arm, and a small sheet of metal mounted on the underside of the arm drags in the oil to damp oscillations. The level of oil, the position on the arm, and the angle and size of the sheet are adjusted until the damping is "critical", that is, just short of oscillating. The hinge is very low friction, often torsion wires, so the only friction is the internal friction of the wire. Small seismographs with low proof masses are placed in a vacuum to reduce disturbances from air currents.
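For reference, the "critical" condition being tuned here corresponds to a damping ratio of one. A minimal sketch with assumed spring and mass values:

```python
import math

# For a mass m on a spring of stiffness k with viscous damping c, the
# damping ratio is zeta = c / (2*sqrt(k*m)). zeta = 1 is "critical":
# the fastest return to center without oscillation. Values are assumed.
m, k = 0.5, 20.0                    # kg and N/m (illustrative)
c_critical = 2 * math.sqrt(k * m)   # damping coefficient giving zeta = 1
print("critical damping coefficient:", c_critical, "N*s/m")
```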
Zöllner described torsionally suspended horizontal pendulums as early as 1869, but developed them for gravimetry rather than seismometry.
Early seismometers had an arrangement of levers on jeweled bearings, to scratch smoked glass or paper. Later, mirrors reflected a light beam to a direct-recording plate or roll of photographic paper. Briefly, some designs returned to mechanical movements to save money. In mid-twentieth-century systems, the light was reflected to a pair of differential electronic photosensors called a photomultiplier. The voltage generated in the photomultiplier was used to drive galvanometers which had a small mirror mounted on the axis. The moving reflected light beam would strike the surface of the turning drum, which was covered with photo-sensitive paper. The expense of developing photo-sensitive paper caused many seismic observatories to switch to ink or thermal-sensitive paper.
After World War II, the seismometers developed by Milne, Ewing and Gray were adapted into the widely used Press-Ewing seismometer.
Modern instruments
Modern instruments use electronic sensors, amplifiers, and recording devices. Most are broadband, covering a wide range of frequencies. Some seismometers can measure motions with frequencies from 500 Hz to 0.00118 Hz (1/500 = 0.002 seconds per cycle, to 1/0.00118 = 850 seconds per cycle). The mechanical suspension for horizontal instruments remains the garden-gate described above. Vertical instruments use some kind of constant-force suspension, such as the LaCoste suspension. The LaCoste suspension uses a zero-length spring to provide a long period (high sensitivity). Some modern instruments use a "triaxial" or "Galperin" design, in which three identical motion sensors are set at the same angle to the vertical but 120 degrees apart on the horizontal. Vertical and horizontal motions can be computed from the outputs of the three sensors.
Seismometers unavoidably introduce some distortion into the signals they measure, but professionally designed systems have carefully characterized frequency transforms.
Modern sensitivities come in three broad ranges: geophones, 50 to 750 V/m; local geologic seismographs, about 1,500 V/m; and teleseismographs, used for world survey, about 20,000 V/m. Instruments come in three main varieties: short period, long period and broadband. The short- and long-period instruments measure velocity and are very sensitive; however, they 'clip' the signal or go off-scale for ground motion that is strong enough to be felt by people. A 24-bit analog-to-digital conversion channel is commonplace. Practical devices are linear to roughly one part per million.
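The 24-bit and part-per-million figures can be sanity-checked with simple arithmetic; the short sketch below relates them to decibels, with no assumptions beyond the stated bit depth:

```python
import math

# A 24-bit channel spans 2**24 counts, about 144 dB of dynamic range;
# one part per million of full scale is roughly 17 counts (about 120 dB).
counts = 2 ** 24
print("dynamic range:", 20 * math.log10(counts), "dB")   # ~144.5 dB
print("1 ppm of full scale:", counts * 1e-6, "counts")   # ~16.8 counts
```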
Delivered seismometers come with two styles of output: analog and digital. Analog seismographs require analog recording equipment, possibly including an analog-to-digital converter. The output of a digital seismograph can be simply input to a computer. It presents the data in a standard digital format (often "SE2" over Ethernet).
Teleseismometers
The modern broadband seismograph can record a very broad range of frequencies. It consists of a small "proof mass", confined by electrical forces, driven by sophisticated electronics. As the earth moves, the electronics attempt to hold the mass steady through a feedback circuit. The amount of force necessary to achieve this is then recorded.
In most designs the electronics holds a mass motionless relative to the frame. This device is called a "force balance accelerometer". It measures acceleration instead of velocity of ground movement. Basically, the distance between the mass and some part of the frame is measured very precisely, by a linear variable differential transformer. Some instruments use a linear variable differential capacitor.
That measurement is then amplified by electronic amplifiers attached to parts of an electronic negative feedback loop. One of the amplified currents from the negative feedback loop drives a coil very like a loudspeaker. The result is that the mass stays nearly motionless.
Most instruments measure the ground motion directly using the distance sensor. The voltage generated in a sense coil on the mass by the magnet directly measures the instantaneous velocity of the ground. The current to the drive coil provides a sensitive, accurate measurement of the force between the mass and the frame, thus directly measuring the ground's acceleration (using f = ma, where f = force, m = mass, and a = acceleration).
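A minimal sketch of that last relation, using a hypothetical motor constant and proof mass (real instruments publish calibrated constants instead):

```python
MOTOR_CONSTANT = 25.0   # feedback-coil force per ampere, N/A (assumed)
PROOF_MASS = 0.25       # kg (assumed)

def ground_acceleration(coil_current_amps: float) -> float:
    """Feedback force f = K*I balances m*a, so a = K*I / m."""
    return MOTOR_CONSTANT * coil_current_amps / PROOF_MASS
```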
One of the continuing problems with sensitive vertical seismographs is the buoyancy of their masses. The uneven changes in pressure caused by wind blowing on an open window can easily change the density of the air in a room enough to cause a vertical seismograph to show spurious signals. Therefore, most professional seismographs are sealed in rigid gas-tight enclosures. For example, this is why a common Streckeisen model has a thick glass base that must be glued to its pier without bubbles in the glue.
It might seem logical to make the heavy magnet serve as a mass, but that subjects the seismograph to errors when the Earth's magnetic field moves. This is also why a seismograph's moving parts are constructed from materials that interact minimally with magnetic fields. A seismograph is also sensitive to changes in temperature, so many instruments are constructed from low-expansion materials such as nonmagnetic invar.
The hinges on a seismograph are usually patented, and by the time the patent has expired, the design has been improved. The most successful public domain designs use thin foil hinges in a clamp.
Another issue is that the transfer function of a seismograph must be accurately characterized, so that its frequency response is known. This is often the crucial difference between professional and amateur instruments. Most are characterized on a variable frequency shaking table.
Strong-motion seismometers
Another type of seismometer is a digital strong-motion seismometer, or accelerograph. The data from such an instrument is essential to understand how an earthquake affects man-made structures, through earthquake engineering. The recordings of such instruments are crucial for the assessment of seismic hazard, through engineering seismology.
A strong-motion seismometer measures acceleration. This can be mathematically integrated later to give velocity and position. Strong-motion seismometers are not as sensitive to ground motions as teleseismic instruments but they stay on scale during the strongest seismic shaking.
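A minimal sketch of that integration step, using trapezoidal sums over a hypothetical acceleration record (real processing would also correct baseline drift before integrating):

```python
dt = 0.01                              # assumed 100 samples per second
acc = [0.0, 0.2, 0.5, 0.1, -0.3, 0.0]  # hypothetical accelerogram, m/s^2

vel, disp = [0.0], [0.0]
for i in range(1, len(acc)):           # trapezoidal integration, twice
    vel.append(vel[-1] + 0.5 * (acc[i] + acc[i - 1]) * dt)
    disp.append(disp[-1] + 0.5 * (vel[i] + vel[i - 1]) * dt)
print("velocity (m/s):", vel)
print("displacement (m):", disp)
```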
Strong motion sensors are used for intensity meter applications.
Other forms
Accelerographs and geophones are often heavy cylindrical magnets with a spring-mounted coil inside. As the case moves, the coil tends to stay stationary, so the magnetic field cuts the wires, inducing current in the output wires. They receive frequencies from several hundred hertz down to 1 Hz. Some have electronic damping, a low-budget way to get some of the performance of the closed-loop wide-band geologic seismographs.
Strain-beam accelerometers constructed as integrated circuits are too insensitive for geologic seismographs (2002), but are widely used in geophones.
Some other sensitive designs measure the current generated by the flow of a non-corrosive ionic fluid through an electret sponge or a conductive fluid through a magnetic field.
Interconnected seismometers
Seismometers spaced in a seismic array can also be used to precisely locate, in three dimensions, the source of an earthquake, using the time it takes for seismic waves to propagate away from the hypocenter, the initiating point of fault rupture ( | Technology | Measuring instruments | null |
231920 | https://en.wikipedia.org/wiki/Scheduling%20%28computing%29 | Scheduling (computing) | In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows.
The scheduling activity is carried out by a mechanism called a scheduler. Schedulers are often designed so as to keep all computer resources busy (as in load balancing), to allow multiple users to share system resources effectively, or to achieve a target quality-of-service.
Scheduling is fundamental to computation itself, and an intrinsic part of the execution model of a computer system; the concept of scheduling makes it possible to have computer multitasking with a single central processing unit (CPU).
Goals
A scheduler may aim at one or more goals, for example:
maximizing throughput (the total amount of work completed per time unit);
minimizing wait time (time from work becoming ready until the first point it begins execution);
minimizing latency or response time (time from work becoming ready until it is finished in case of batch activity, or until the system responds and hands the first output to the user in case of interactive activity);
maximizing fairness (equal CPU time to each process, or more generally appropriate times according to the priority and workload of each process).
In practice, these goals often conflict (e.g. throughput versus latency), thus a scheduler will implement a suitable compromise. Preference is given to any one of the concerns mentioned above, depending upon the user's needs and objectives.
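These metrics are easy to compute for a concrete schedule. The sketch below evaluates throughput, average wait time, and average turnaround for a hypothetical workload under non-preemptive first-come-first-served order; the job list is invented for illustration.

```python
# Each job is a hypothetical (arrival_time, burst_time) pair, already
# sorted by arrival; FCFS simply runs them in that order to completion.
jobs = [(0, 5), (1, 3), (2, 8), (3, 2)]

clock, waits, turnarounds = 0, [], []
for arrival, burst in jobs:
    clock = max(clock, arrival)        # CPU may sit idle until the job arrives
    waits.append(clock - arrival)      # ready -> first execution
    clock += burst                     # run to completion, no preemption
    turnarounds.append(clock - arrival)

print("throughput:", len(jobs) / clock, "jobs per time unit")
print("average wait:", sum(waits) / len(waits))
print("average turnaround:", sum(turnarounds) / len(turnarounds))
```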
In real-time environments, such as embedded systems for automatic control in industry (for example robotics), the scheduler also must ensure that processes can meet deadlines; this is crucial for keeping the system stable. Scheduled tasks can also be distributed to remote devices across a network and managed through an administrative back end.
Types of operating system schedulers
The scheduler is an operating system module that selects the next jobs to be admitted into the system and the next process to run. Operating systems may feature up to three distinct scheduler types: a long-term scheduler (also known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler, and a short-term scheduler. The names suggest the relative frequency with which their functions are performed.
Process scheduler
The process scheduler is a part of the operating system that decides which process runs at a certain point in time. It usually has the ability to pause a running process, move it to the back of the running queue and start a new process; such a scheduler is known as a preemptive scheduler, otherwise it is a cooperative scheduler.
We distinguish between long-term scheduling, medium-term scheduling, and short-term scheduling based on how often decisions must be made.
Long-term scheduling
The long-term scheduler, or admission scheduler, decides which jobs or processes are to be admitted to the ready queue (in main memory); that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates what processes are to run on a system, the degree of concurrency to be supported at any one time (whether many or few processes are to be executed concurrently), and how the split between I/O-intensive and CPU-intensive processes is to be handled. The long-term scheduler is responsible for controlling the degree of multiprogramming.
In general, most processes can be described as either I/O-bound or CPU-bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations. It is important that a long-term scheduler selects a good process mix of I/O-bound and CPU-bound processes. If all processes are I/O-bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. On the other hand, if all processes are CPU-bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced. The system with the best performance will thus have a combination of CPU-bound and I/O-bound processes. In modern operating systems, long-term scheduling also helps ensure that real-time processes get enough CPU time to finish their tasks.
Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers, and render farms. For example, in concurrent systems, coscheduling of interacting processes is often required to prevent them from blocking due to waiting on each other. In these cases, special-purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.
Some operating systems only allow new tasks to be added if the scheduler can verify that all real-time deadlines can still be met.
The specific heuristic algorithm used by an operating system to accept or reject new tasks is the admission control mechanism.
Medium-term scheduling
The medium-term scheduler temporarily removes processes from main memory and places them in secondary memory (such as a hard disk drive), or vice versa; this is commonly referred to as swapping out or swapping in (also, incorrectly, as paging out or paging in). The medium-term scheduler may decide to swap out a process that has not been active for some time, has a low priority, is page faulting frequently, or is taking up a large amount of memory, in order to free up main memory for other processes. It swaps the process back in later, when more memory is available or when the process has been unblocked and is no longer waiting for a resource.
In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the medium-term scheduler may actually perform the role of the long-term scheduler, by treating binaries as swapped-out processes upon their execution. In this way, when a segment of the binary is required it can be swapped in on demand, or lazy loaded, also called demand paging.
Short-term scheduling
The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) after a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers. A scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as voluntary or co-operative), in which case the scheduler is unable to force processes off the CPU.
A preemptive scheduler relies upon a programmable interval timer which invokes an interrupt handler that runs in kernel mode and implements the scheduling function.
Dispatcher
Another component that is involved in the CPU-scheduling function is the dispatcher, which is the module that gives control of the CPU to the process selected by the short-term scheduler. It receives control in kernel mode as the result of an interrupt or system call. The functions of a dispatcher involve the following:
Context switches, in which the dispatcher saves the state (also known as context) of the process or thread that was previously running; the dispatcher then loads the initial or previously saved state of the new process.
Switching to user mode.
Jumping to the proper location in the user program to resume that program, as indicated by its new state.
The dispatcher should be as fast as possible since it is invoked during every process switch. During the context switches, the processor is virtually idle for a fraction of time, thus unnecessary context switches should be avoided. The time it takes for the dispatcher to stop one process and start another is known as the dispatch latency.
Scheduling disciplines
A scheduling discipline (also called scheduling policy or scheduling algorithm) is an algorithm used for distributing resources among parties which simultaneously and asynchronously request them. Scheduling disciplines are used in routers (to handle packet traffic) as well as in operating systems (to share CPU time among both threads and processes), disk drives (I/O scheduling), printers (print spooler), most embedded systems, etc.
The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness amongst the parties utilizing the resources. Scheduling deals with the problem of deciding which of the outstanding requests is to be allocated resources. There are many different scheduling algorithms. In this section, we introduce several of them.
In packet-switched computer networks and other statistical multiplexing, the notion of a scheduling algorithm is used as an alternative to first-come first-served queuing of data packets.
The simplest best-effort scheduling algorithms are round-robin, fair queuing (a max-min fair scheduling algorithm), proportional-fair scheduling and maximum throughput. If differentiated or guaranteed quality of service is offered, as opposed to best-effort communication, weighted fair queuing may be utilized.
In advanced packet radio wireless networks such as HSDPA (High-Speed Downlink Packet Access) 3.5G cellular system, channel-dependent scheduling may be used to take advantage of channel state information. If the channel conditions are favourable, the throughput and system spectral efficiency may be increased. In even more advanced systems such as LTE, the scheduling is combined by channel-dependent packet-by-packet dynamic channel allocation, or by assigning OFDMA multi-carriers or other frequency-domain equalization components to the users that best can utilize them.
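The weighted fair queuing mentioned above can be sketched as follows. This is a heavily simplified model of the finish-time bookkeeping (real WFQ tracks the virtual time of an idealized fluid-fair server); all names are illustrative:

import heapq

class SimpleWFQ:
    """Each arriving packet is stamped with a virtual finish time
    proportional to size / weight; the smallest stamp is sent first."""
    def __init__(self):
        self._heap = []
        self._flow_finish = {}   # last virtual finish time per flow
        self._vtime = 0.0        # simplified virtual clock

    def enqueue(self, flow, size, weight):
        start = max(self._vtime, self._flow_finish.get(flow, 0.0))
        finish = start + size / weight        # heavier weight finishes sooner
        self._flow_finish[flow] = finish
        heapq.heappush(self._heap, (finish, flow, size))

    def dequeue(self):
        finish, flow, size = heapq.heappop(self._heap)
        self._vtime = finish                  # advance the virtual clock
        return flow, size

q = SimpleWFQ()
q.enqueue("voice", size=200, weight=4.0)      # favored flow
q.enqueue("bulk", size=200, weight=1.0)
print(q.dequeue())                            # -> ('voice', 200)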
First come, first served
First in, first out (FIFO), also known as first come, first served (FCFS), is the simplest scheduling algorithm. FIFO simply queues processes in the order that they arrive in the ready queue; this is commonly used for a task queue, for example as illustrated in this section.
Since context switches only occur upon process termination, and no reorganization of the process queue is required, scheduling overhead is minimal.
Throughput can be low, because long processes can hold the CPU, forcing short processes to wait a long time (known as the convoy effect; see the sketch after this list).
No starvation occurs, because each process gets a chance to be executed after a definite time.
Turnaround time, waiting time, and response time depend on the order of arrival and can be high, for the same reasons as above.
No prioritization occurs, thus this system has trouble meeting process deadlines.
The lack of prioritization means that as long as every process eventually completes, there is no starvation. In an environment where some processes might not complete, there can be starvation.
Scheduling is based purely on queuing.
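A minimal FCFS simulation makes the convoy effect visible; burst times are assumed known, and all names are illustrative:

from collections import deque

def fcfs(jobs):
    # jobs: list of (name, arrival, burst), pre-sorted by arrival time.
    queue, clock, waits = deque(jobs), 0, {}
    while queue:
        name, arrival, burst = queue.popleft()
        clock = max(clock, arrival)    # CPU idles until the job arrives
        waits[name] = clock - arrival  # time spent in the ready queue
        clock += burst                 # run to completion; no preemption
    return waits

# One long job ahead of two short ones inflates everyone's wait.
print(fcfs([("long", 0, 100), ("short1", 1, 2), ("short2", 2, 2)]))
# -> {'long': 0, 'short1': 99, 'short2': 100}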
Priority scheduling
Earliest deadline first (EDF) or least time to go is a dynamic scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs (a task finishes, new task is released, etc.), the queue will be searched for the process closest to its deadline, which will be the next to be scheduled for execution.
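The EDF selection step can be sketched with a deadline-ordered heap; this is an illustrative structure, not the API of any particular real-time operating system:

import heapq

class EDFQueue:
    """Ready queue keyed by absolute deadline; earliest deadline runs next."""
    def __init__(self):
        self._heap = []

    def release(self, deadline, task):
        heapq.heappush(self._heap, (deadline, task))

    def pick_next(self):
        # Called at every scheduling event (task finished, task released, ...);
        # assumes at least one task is ready.
        deadline, task = heapq.heappop(self._heap)
        return task

q = EDFQueue()
q.release(50, "logger")
q.release(10, "motor-control")
print(q.pick_next())   # -> 'motor-control', the closest deadline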
Shortest remaining time first
Shortest remaining time first is similar to shortest job first (SJF). With this strategy, the scheduler arranges processes with the least estimated processing time remaining to be next in the queue (see the sketch after this list). This requires advance knowledge of, or estimates for, the time required for a process to complete.
If a shorter process arrives during another process' execution, the currently running process is interrupted (known as preemption), dividing that process into two separate computing blocks. This creates excess overhead through additional context switching. The scheduler must also place each incoming process into a specific place in the queue, creating additional overhead.
This algorithm is designed for maximum throughput in most scenarios.
Waiting time and response time increase as the process's computational requirements increase. Since turnaround time is based on waiting time plus processing time, longer processes are significantly affected by this. Overall waiting time is smaller than under FIFO, however, since no process has to wait for the termination of the longest process.
No particular attention is given to deadlines; the programmer can only attempt to make processes with deadlines as short as possible.
Starvation is possible, especially in a busy system with many small processes being run.
To use this policy, there should be at least two processes of different priority.
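A minimal SRTF simulation, assuming burst times are known in advance (illustrative names throughout):

import heapq

def srtf(jobs):
    # jobs: list of (arrival, burst, name); returns names in completion order.
    jobs = sorted(jobs)                       # by arrival time
    ready, done, clock, i = [], [], 0, 0
    while i < len(jobs) or ready:
        if not ready:
            clock = max(clock, jobs[i][0])    # jump ahead to the next arrival
        while i < len(jobs) and jobs[i][0] <= clock:
            arrival, burst, name = jobs[i]
            heapq.heappush(ready, (burst, name))
            i += 1
        remaining, name = heapq.heappop(ready)    # least remaining time wins
        horizon = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(remaining, horizon - clock)     # run until done or next arrival
        clock += run
        if run == remaining:
            done.append(name)
        else:
            heapq.heappush(ready, (remaining - run, name))  # preempted
    return done

print(srtf([(0, 8, "A"), (1, 4, "B"), (2, 1, "C")]))   # -> ['C', 'B', 'A']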
Fixed-priority pre-emptive scheduling
The operating system assigns a fixed-priority rank to every process, and the scheduler arranges the processes in the ready queue in order of their priority. Lower-priority processes get interrupted by incoming higher-priority processes.
Overhead is not minimal, nor is it significant.
Fixed-priority pre-emptive scheduling (FPPS) has no particular advantage in terms of throughput over FIFO scheduling.
If the number of rankings is limited, it can be characterized as a collection of FIFO queues, one for each priority ranking (sketched after this list). Processes in lower-priority queues are selected only when all of the higher-priority queues are empty.
Waiting time and response time depend on the priority of the process. Higher-priority processes have smaller waiting and response times.
Deadlines can be met by giving processes with deadlines a higher priority.
Starvation of lower-priority processes is possible with large numbers of high-priority processes queuing for CPU time.
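Following the characterization above, a fixed-priority ready queue can be sketched as one FIFO per priority level; real kernels add bitmaps or similar tricks to find the highest non-empty level quickly, and the names here are illustrative:

from collections import deque

NUM_LEVELS = 4   # illustrative; Windows NT, for example, defines 32

class FixedPriorityReadyQueue:
    """One FIFO per priority; level 0 is the highest priority here."""
    def __init__(self):
        self.levels = [deque() for _ in range(NUM_LEVELS)]

    def enqueue(self, priority, task):
        self.levels[priority].append(task)

    def pick_next(self):
        # Lower levels run only when every higher-priority queue is
        # empty, which is exactly how starvation can arise.
        for level in self.levels:
            if level:
                return level.popleft()
        return None   # nothing ready: idle

rq = FixedPriorityReadyQueue()
rq.enqueue(2, "batch-job")
rq.enqueue(0, "interrupt-handler")
print(rq.pick_next())   # -> 'interrupt-handler'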
Round-robin scheduling
The scheduler assigns a fixed time unit per process and cycles through them. If a process completes within its time slice, it terminates; otherwise, it is rescheduled after every other process has been given a chance (see the sketch after this list).
RR scheduling involves extensive overhead, especially with a small time unit.
Throughput is balanced between FCFS/FIFO and SJF/SRTF: shorter jobs are completed faster than in FIFO, and longer processes are completed faster than in SJF.
Average response time is good; waiting time depends on the number of processes rather than on average process length.
Because of high waiting times, deadlines are rarely met in a pure RR system.
Starvation can never occur, since no priority is given. Order of time unit allocation is based upon process arrival time, similar to FIFO.
If the time slice is large, round-robin approaches FCFS/FIFO; if it is very short, it approaches SJF/SRTF.
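A minimal round-robin simulation over a fixed quantum (all processes assumed ready at time 0; names are illustrative):

from collections import deque

def round_robin(jobs, quantum):
    # jobs: list of (name, burst); returns (name, completion_time)
    # in completion order.
    queue, clock, finished = deque(jobs), 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # one time slice at most
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # back of the line
        else:
            finished.append((name, clock))
    return finished

print(round_robin([("A", 5), ("B", 2), ("C", 3)], quantum=2))
# -> [('B', 4), ('C', 9), ('A', 10)]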
Multilevel queue scheduling
This is used for situations in which processes are easily divided into different groups. For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs. It is very useful for shared memory problems.
Work-conserving schedulers
A work-conserving scheduler is a scheduler that always tries to keep the scheduled resources busy, if there are submitted jobs ready to be scheduled. In contrast, a non-work conserving scheduler is a scheduler that, in some cases, may leave the scheduled resources idle despite the presence of jobs ready to be scheduled.
Scheduling optimization problems
There are several scheduling problems in which the goal is to decide which job goes to which station at what time, such that the total makespan is minimized:
Job-shop scheduling: there are n jobs and m identical stations. Each job should be executed on a single machine. This is usually regarded as an online problem.
Open-shop scheduling: there are n jobs and m different stations. Each job should spend some time at each station, in a free order.
Flow-shop scheduling: there are n jobs and m different stations. Each job should spend some time at each station, in a pre-determined order (for the two-station case, see the sketch after this list).
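Most variants of these problems are hard in general, but the two-station flow shop has a classic makespan-optimal solution, Johnson's rule; a short sketch with hypothetical job data:

def johnson(jobs):
    # jobs: list of (name, time_on_station1, time_on_station2).
    # Johnson's rule: jobs faster on station 1 go first, in ascending order
    # of that time; the rest go last, in descending order of station-2 time.
    front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] > j[2]),
                  key=lambda j: j[2], reverse=True)
    return front + back

def makespan(order):
    end1 = end2 = 0
    for _, t1, t2 in order:
        end1 += t1                    # station 1 works back to back
        end2 = max(end2, end1) + t2   # station 2 waits for station 1
    return end2

order = johnson([("a", 3, 6), ("b", 5, 2), ("c", 1, 2)])
print([name for name, _, _ in order], makespan(order))   # -> ['c', 'a', 'b'] 12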
Manual scheduling
A very common method in embedded systems is to schedule jobs manually. This can, for example, be done in a time-multiplexed fashion. Sometimes the kernel is divided into three or more parts: manual scheduling, preemptive, and interrupt level. Exact methods for scheduling jobs are often proprietary.
No resource starvation problems
Very high predictability; allows implementation of hard real-time systems
Almost no overhead
May not be optimal for all applications
Effectiveness is completely dependent on the implementation
Choosing a scheduling algorithm
When designing an operating system, a programmer must consider which scheduling algorithm will perform best for the use the system is going to see. There is no universal best scheduling algorithm, and many operating systems use extended versions or combinations of the scheduling algorithms above.
For example, Windows NT/XP/Vista uses a multilevel feedback queue, a combination of fixed-priority preemptive scheduling, round-robin, and first in, first out algorithms. In this system, threads can dynamically increase or decrease in priority depending on whether they have already been serviced or have been waiting extensively. Every priority level is represented by its own queue, with round-robin scheduling among the high-priority threads and FIFO among the lower-priority ones. As a result, response time is short for most threads, and short but critical system threads get completed very quickly. Since threads can only use one time unit of the round-robin in the highest-priority queue, starvation can be a problem for longer high-priority threads.
Operating system process scheduler implementations
The algorithm used may be as simple as round-robin in which each process is given equal time (for instance 1 ms, usually between 1 ms and 100 ms) in a cycling list. So, process A executes for 1 ms, then process B, then process C, then back to process A.
More advanced algorithms take into account process priority, or the importance of the process. This allows some processes to use more time than other processes. The kernel always uses whatever resources it needs to ensure proper functioning of the system, and so can be said to have infinite priority. In SMP systems, processor affinity is considered to increase overall system performance, even if it may cause a process itself to run more slowly. This generally improves performance by reducing cache thrashing.
OS/360 and successors
IBM OS/360 was available with three different schedulers. The differences were such that the variants were often considered three different operating systems:
The Single Sequential Scheduler option, also known as the Primary Control Program (PCP) provided sequential execution of a single stream of jobs.
The Multiple Sequential Scheduler option, known as Multiprogramming with a Fixed Number of Tasks (MFT) provided execution of multiple concurrent jobs. Execution was governed by a priority which had a default for each stream or could be requested separately for each job. MFT version II added subtasks (threads), which executed at a priority based on that of the parent job. Each job stream defined the maximum amount of memory which could be used by any job in that stream.
The Multiple Priority Schedulers option, or Multiprogramming with a Variable Number of Tasks (MVT), featured subtasks from the start; each job requested the priority and memory it required before execution.
Later virtual storage versions of MVS added a Workload Manager feature to the scheduler, which schedules processor resources according to an elaborate scheme defined by the installation.
Windows
Very early MS-DOS and Microsoft Windows systems were non-multitasking, and as such did not feature a scheduler. Windows 3.1x used a non-preemptive scheduler, meaning that it did not interrupt programs. It relied on each program to end or to tell the OS that it did not need the processor, so that the OS could move on to another process. This is usually called cooperative multitasking. Windows 95 introduced a rudimentary preemptive scheduler; however, for legacy support it opted to let 16-bit applications run without preemption.
Windows NT-based operating systems use a multilevel feedback queue. 32 priority levels are defined, 0 through 31, with priorities 0 through 15 being normal priorities and priorities 16 through 31 being soft real-time priorities, requiring privileges to assign. Priority 0 is reserved for the operating system. User interfaces and APIs work with priority classes for the process and the threads in the process, which are then combined by the system into the absolute priority level.
The kernel may change the priority level of a thread depending on its I/O and CPU usage and whether it is interactive (i.e. accepts and responds to input from humans), raising the priority of interactive and I/O bounded processes and lowering that of CPU bound processes, to increase the responsiveness of interactive applications. The scheduler was modified in Windows Vista to use the cycle counter register of modern processors to keep track of exactly how many CPU cycles a thread has executed, rather than just using an interval-timer interrupt routine. Vista also uses a priority scheduler for the I/O queue so that disk defragmenters and other such programs do not interfere with foreground operations.
Classic Mac OS and macOS
Mac OS 9 uses cooperative scheduling for threads, where one process controls multiple cooperative threads, and also provides preemptive scheduling for multiprocessing tasks. The kernel schedules multiprocessing tasks using a preemptive scheduling algorithm. All Process Manager processes run within a special multiprocessing task, called the blue task. Those processes are scheduled cooperatively, using a round-robin scheduling algorithm; a process yields control of the processor to another process by explicitly calling a blocking function such as WaitNextEvent. Each process has its own copy of the Thread Manager that schedules that process's threads cooperatively; a thread yields control of the processor to another thread by calling YieldToAnyThread or YieldToThread.
macOS uses a multilevel feedback queue, with four priority bands for threads: normal, system high priority, kernel mode only, and real-time. Threads are scheduled preemptively; macOS also supports cooperatively scheduled threads in its implementation of the Thread Manager in Carbon.
AIX
In AIX Version 4 there are three possible values for thread scheduling policy:
First In, First Out: Once a thread with this policy is scheduled, it runs to completion unless it is blocked, it voluntarily yields control of the CPU, or a higher-priority thread becomes dispatchable. Only fixed-priority threads can have a FIFO scheduling policy.
Round Robin: This is similar to the AIX Version 3 scheduler round-robin scheme based on 10 ms time slices. When an RR thread has control at the end of the time slice, it moves to the tail of the queue of dispatchable threads of its priority. Only fixed-priority threads can have a Round Robin scheduling policy.
OTHER: This policy is defined by POSIX 1003.4a as implementation-defined. In AIX Version 4, this policy is defined to be equivalent to RR, except that it applies to threads with non-fixed priority. The recalculation of the running thread's priority value at each clock interrupt means that a thread may lose control because its priority value has risen above that of another dispatchable thread. This is the AIX Version 3 behavior.
Threads are primarily of interest for applications that currently consist of several asynchronous processes. These applications might impose a lighter load on the system if converted to a multithreaded structure.
AIX 5 implements the following scheduling policies: FIFO, round robin, and a fair round robin. The FIFO policy has three different implementations: FIFO, FIFO2, and FIFO3. The round robin policy is named SCHED_RR in AIX, and the fair round robin is called SCHED_OTHER.
Linux
Linux 1.2
Linux 1.2 used a round-robin scheduling policy.
Linux 2.2
Linux 2.2 added scheduling classes and support for symmetric multiprocessing (SMP).
Linux 2.4
In Linux 2.4, an O(n) scheduler with a multilevel feedback queue with priority levels ranging from 0 to 140 was used; 0–99 are reserved for real-time tasks and 100–140 are considered nice task levels. For real-time tasks, the time quantum for switching processes was approximately 200 ms, and for nice tasks approximately 10 ms. The scheduler ran through the run queue of all ready processes, letting the highest-priority processes go first and run through their time slices, after which they are placed in an expired queue. When the active queue is empty, the expired queue becomes the active queue, and vice versa.
However, some enterprise Linux distributions such as SUSE Linux Enterprise Server replaced this scheduler with a backport of the O(1) scheduler (which was maintained by Alan Cox in his Linux 2.4-ac Kernel series) to the Linux 2.4 kernel used by the distribution.
Linux 2.6.0 to Linux 2.6.22
In versions 2.6.0 to 2.6.22, the kernel used an O(1) scheduler developed by Ingo Molnár and many other kernel developers during Linux 2.5 development. For many kernels in that time frame, Con Kolivas developed patch sets which improved interactivity with this scheduler or even replaced it with his own schedulers.
Linux 2.6.23 to Linux 6.5
Con Kolivas' work, most significantly his implementation of fair scheduling named Rotating Staircase Deadline (RSDL), inspired Ingo Molnár to develop the Completely Fair Scheduler (CFS) as a replacement for the earlier O(1) scheduler, crediting Kolivas in his announcement. CFS is the first implementation of a fair queuing process scheduler widely used in a general-purpose operating system.
The CFS uses a well-studied, classic scheduling algorithm called fair queuing, originally invented for packet networks. Fair queuing had been previously applied to CPU scheduling under the name stride scheduling. The fair queuing CFS scheduler has a scheduling complexity of O(log N), where N is the number of tasks in the runqueue. Choosing a task can be done in constant time, but reinserting a task after it has run requires O(log N) operations, because the run queue is implemented as a red–black tree.
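The idea can be sketched with a ready queue ordered by virtual runtime; here a binary heap stands in for the kernel's red-black tree (both reinsert in O(log N)), and the weighting is a simplification of CFS's actual nice-level arithmetic:

import heapq

class FairRunQueue:
    """Always run the task with the least virtual runtime so far."""
    def __init__(self):
        self._heap = []   # entries: (vruntime, name)

    def add(self, name, vruntime=0.0):
        heapq.heappush(self._heap, (vruntime, name))

    def run_next(self, slice_ns, weight=1.0):
        vruntime, name = heapq.heappop(self._heap)    # least vruntime runs
        vruntime += slice_ns / weight                 # higher weight ages slower
        heapq.heappush(self._heap, (vruntime, name))  # O(log N) reinsertion
        return name

rq = FairRunQueue()
rq.add("editor")
rq.add("compiler")
print([rq.run_next(1_000_000) for _ in range(4)])
# -> ['compiler', 'editor', 'compiler', 'editor'] (CPU time alternates fairly)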
The Brain Fuck Scheduler, also created by Con Kolivas, is an alternative to the CFS.
Linux 6.6
In 2023, Peter Zijlstra proposed replacing CFS with an earliest eligible virtual deadline first scheduling (EEVDF) process scheduler. The aim was to remove the need for CFS latency nice patches.
FreeBSD
FreeBSD uses a multilevel feedback queue with priorities ranging from 0–255. 0–63 are reserved for interrupts, 64–127 for the top half of the kernel, 128–159 for real-time user threads, 160–223 for time-shared user threads, and 224–255 for idle user threads. Also, like Linux, it uses the active queue setup, but it also has an idle queue.
NetBSD
NetBSD uses a multilevel feedback queue with priorities ranging from 0–223. 0–63 are reserved for time-shared threads (default, SCHED_OTHER policy), 64–95 for user threads which entered kernel space, 96–128 for kernel threads, 128–191 for user real-time threads (SCHED_FIFO and SCHED_RR policies), and 192–223 for software interrupts.
Solaris
Solaris uses a multilevel feedback queue with priorities ranging between 0 and 169. Priorities 0–59 are reserved for time-shared threads, 60–99 for system threads, 100–159 for real-time threads, and 160–169 for low priority interrupts. Unlike Linux, when a process is done using its time quantum, it is given a new priority and put back in the queue. Solaris 9 introduced two new scheduling classes, namely fixed-priority class and fair share class. The threads with fixed priority have the same priority range as that of the time-sharing class, but their priorities are not dynamically adjusted. The fair scheduling class uses CPU shares to prioritize threads for scheduling decisions. CPU shares indicate the entitlement to CPU resources. They are allocated to a set of processes, which are collectively known as a project.
Summary
| Technology | Operating systems | null |
232234 | https://en.wikipedia.org/wiki/Gonorynchiformes | Gonorynchiformes | The Gonorynchiformes are an order of ray-finned fish that includes the important food source, the milkfish (Chanos chanos, family Chanidae), and a number of lesser-known types, both marine and freshwater.
The alternate spelling "Gonorhynchiformes", with an "h", is frequently seen but not official.
Gonorynchiformes have small mouths and no teeth. They are the sole group in the clade Anotophysi, a subgroup of the superorder Ostariophysi. They are characterized by a primitive Weberian apparatus formed by the first three vertebrae and one or more cephalic ribs within the head. This apparatus is believed to be a hearing organ, and is found in a more advanced and complex form in the related cypriniform fish, such as carp. Also like the cypriniforms, the gonorynchiforms produce a substance from their skin when injured that dissolves into the water and acts as an alarm signal to other fish.
Taxonomy
Although many of the families are rather small, there are several fossil genera. This listing of the groups of Gonorynchiformes includes fossil fish with a short description. They are listed in approximate order of how primitive their characteristics are.
The 5th edition of Fishes of the World classifies the extant taxa in this order as follows, as does Eschmeyer's Catalog of Fishes:
Order Gonorynchiformes Greenwood, Rosen, Weitzman, and Myers, 1966
Family Chanidae Günther, 1868 (milkfishes)
Family Gonorynchidae Richardson, 1848 (beaked sandfishes)
Family Kneriidae Günther, 1868 (shellears)
Family Phractolaemidae Boulenger, 1901 (snake mudheads)
| Biology and health sciences | Gonorynchiformes | Animals |
232241 | https://en.wikipedia.org/wiki/Milkfish | Milkfish | The milkfish (Chanos chanos) is a widespread species of ray-finned fish found throughout the Indo-Pacific. It is the sole living species in the family Chanidae, and the only living member of the genus Chanos.
The repeating scientific name (tautonym) is from Greek khanos (‘mouth’). They are grouped in the order Gonorhynchiformes and are most closely related to the Ostariophysi—freshwater fishes such as carps, catfish, and loaches.
The species has many common names. The Hawaiian name for the fish is awa, and in Tahitian it is ava. It is called bangús () in the Philippines, where it is popularly known as the national fish, although the National Commission for Culture and the Arts has stated that this is not the case as it has no basis in Philippine law. In the Nauruan language, it is referred to as . Milkfish is also called bandeng or bolu in Indonesia.
The following are common names for milkfish by country:
Philippines: bangus, bangrus, banglus, banglot, banglis, sabalo, awa
Indonesia: ikan bandeng, baulo, bolu, balanak sembawa
Burma: ga-tin
Malaysia: bandang, jangos, pisong-pisong
Sri Lanka: plai-meen, vaikka
Thailand: pla nua chan
S. India: pal-meen
S. Vietnam: ca mang
Iraq: binni al-bahr
Taiwan: sabahee
Hawaii: awa-awa
Japan: sabahee
Mexico: sabalo
Geographic distribution
Chanos chanos occurs in the Indian Ocean and across the Pacific Ocean, from South Africa to Hawaii and the Marquesas, from California to the Galapagos, north to Japan, south to Australia. A single specimen was reported in 2012 in the eastern Mediterranean Sea.
In 1877, the California Fish and Game Commission introduced one hundred milkfish from Hawaii to the inland waters of Solano County, California. The introduced population could not establish itself permanently, and it is currently unknown how its presence affected the native ecosystem.
Milkfishes commonly live in tropical offshore marine waters around islands and along continental shelves, at depths of 1 to 30 m. They also frequently enter estuaries and rivers.
Taxonomy
Chanos is the only surviving genus of the ancient family Chanidae, which has existed since the Early Cretaceous. The only surviving species is the widespread C. chanos. Several fossil species of Chanos are known from the Paleogene of the Tethys and North Seas, dating as far back as the earliest Eocene. The earliest fossil Chanos were found in freshwater Eocene deposits in Europe and North America—hinting that the species first appeared in freshwater environments 40–50 million years ago. It’s possible that their invasion into the ocean happened during high sea-level or flooding events after the Eocene. Global cooling during the Mid-Eocene likely wiped out the population of milkfish in the Atlantic, but the species persisted in the Indo-Pacific.
The following fossil species of Chanos are known:
†C. brevis (Heckel, 1854) - Oligocene of the Chiavon Limestone, Italy
†C. compressus Stinton, 1977 [otolith] - Late Ypresian of the Wittering Formation, England
†C. forcipatus Kner & Steindachner, 1863 - Late Ypresian of Monte Bolca, Italy
†C. torosus Daniltshenko, 1968 - Earliest Ypresian of the Danata Formation, Turkmenistan
†C. zignoi Kner & Steindachner, 1863 - Oligocene of the Chiavon Limestone, Italy
The extinct Caeus leopoldi from the Early Cretaceous (Albian) of Italy is also sometimes placed in Chanos as Chanos leopoldi, which would be the earliest record of the genus and extend its occurrence even further back. However, more recent studies have affirmed it being a distinct genus.
Anatomy
The milkfish can grow to , but are most often no more than in length. They can reach a weight of about and an age of 15 years. They have an elongated and almost compressed body, with a generally symmetrical and streamlined appearance, one dorsal fin, falcate pectoral fins and a sizable forked caudal fin. The head is small relative to the body. The mouth is small and toothless. The body is olive green, with silvery flanks and dark bordered fins. They have 13–17 dorsal soft rays, 8–10 anal soft rays and 31 caudal fin rays. There are numerous fine intramuscular bones, which may complicate human consumption of the fish (see "Consumption" below).
Their silvery complexion is similar to many other fish species of the Indo-Pacific. They are often mistaken for species of Clupeidae, Megalops cyprinoides, Gonorhynchus gonorhynchus, and Elops machnata. Chanos can be distinguished from these species by their size, absence of scutes, tubercle on the lower jaw, fin placement, no gular plate between arms of the lower jaw, and having only four branchiostegal rays.
Variant milkfish body types have occasionally been found. The ‘goldfish-type’ milkfish was discovered in the Philippines and is characterized by distinctly elongated dorsal, pelvic, and anal fins, and a caudal fin as long as the body. In Hawaii, Indonesia, and Australia, dwarf or hunchback ‘shad-type’ specimens have been recorded. They have a standard length-to-depth ratio of 2.0-2.5 instead of the usual 3.5-4.0. In Northern Australia, a milkfish with a red head, red fins, and blue dorsal coloration was reported.
Life history
Reproduction of milkfish in nature is far less understood than populations of milkfish bred and propagated in captivity (see "Aquaculture" below). In the wild, milkfish take 3–5 years to mature. Females can produce 0.5-6 million eggs and have the ability to spawn more than once a year. Spawning takes place at night, may be lunar periodic, and is strongly seasonal. Milkfish eggs are pelagic and range between 1.1–1.25 mm in diameter. Spawning sites are clear, warm, shallow waters above sand or coral reefs. It is believed milkfish prefer these locations to minimize predation from benthic planktivores.
Milkfish larvae have a pelagic planktonic stage. Younger larvae occur mostly at the surface, or sometimes deeper (20–30 m). Older larvae are only found at the surface and in near-shore environments. Larvae metamorphose into fry and become benthic-feeding juveniles that are opportunistically herbivorous, detritivorous, or omnivorous, depending on the predominant food types in the habitat.
Juvenile milkfish larger than 20 mm typically bear the characteristics of adults. They have complete fin rays, a forked caudal fin, scales, and silvery coloration. Juveniles have been found to inhabit a diverse range of shallow-water ecosystems such as coral lagoons, estuaries, marsh flats, tidal creeks, and tide pools.
Diet
Milkfish are omnivorous opportunistic feeders. Juvenile milkfish eat a variety of foods including phytoplankton, zooplankton, filamentous green algae, and small invertebrates. Similarly to juveniles, adults also eat benthic invertebrates and additionally planktonic and nektonic items such as clupeid juveniles.
Habitat
Adults tend to school around coasts and islands with coral reefs. The young fry live at sea for two to three weeks and then migrate during the juvenile stage to mangrove swamps, estuaries, and sometimes lakes, and return to sea to mature sexually and reproduce. Juveniles prefer to settle in undisturbed coastal ecosystems that are semi-enclosed, calm, shallow, free from predators, and rich in aquatic vegetation. In their natural habitats, milkfish are very adaptable to both changes in environmental conditions and diet. Milkfish are good osmoregulators and extremely euryhaline.
The wide geographic distribution of milkfish has led to genetic variation in the species across the Indo-Pacific. Milkfish populations differ between the Hawaiian islands, the central Pacific islands, Tonga, Tahiti, Philippines-Taiwan-Indonesia, Thailand-Malaysia, India, and Africa. However, all populations are thought to be inter-breeding, thus they are all considered one species, and their genetic diversity is low. Nevertheless, populations may still differ in their reproductive, migratory, and survival methods.
Conservation status
According to the International Union for Conservation of Nature, milkfish are not currently a threatened species. However, little information is available on wild stocks.
Although milkfish populations are not threatened with extinction, they are at risk of ingesting or absorbing pollutants. Milkfish frequent environments that have been affected by industrial pollutants, land runoff, and plastics. Asia is one of the largest contributors of plastic litter into both the ocean and freshwater systems. A population of milkfish from San Jose, North Samar, Philippines was found to have concentrations of lead in the meaty part of their bodies. Pollutants have also impacted the aquaculture industry. In an aquaculture system in Butuan, Philippines, 97% of the fish sampled had microplastics in their gastrointestinal tracts. A similar study in Indonesia showed similar results. The presence of pollutants in milkfish poses a threat to the species’ health, aquaculture, and humans.
Fishing
In the Philippines, it is prohibited to fish for adult milkfish, also known as sabalo, over 60 cm. The government enacted this law in 1975 in an effort to protect spawning stocks of fish. However, sabalo are incidentally caught in fish corrals and are products of bycatch from fisheries. The ban was reinforced by the Philippine Fisheries Code of 1998, and violations are punishable by imprisonment of six months to eight years and/or a fine of P80,000, forfeiture of the catch and fishing equipment used, and revocation of the fishing license.
Consumption
The milkfish is an important seafood in Southeast Asia and some Pacific Islands. Because it is notorious for being much bonier than other food fish, deboned milkfish, called "boneless bangús" in the Philippines, has become popular in stores and markets. Despite this notoriety, however, many people in the Philippines continue to enjoy the fish cooked regularly or even raw, using kalamansi juice or vinegar to make kinilaw na bangus.
Popular presentations of milkfish in Indonesia include bandeng duri lunak (soft-boned milkfish; ikan bandeng is Indonesian for milkfish) from Central and East Java, bandeng presto, milkfish pressure-cooked until its fine bones are rendered tender, and bandeng asap, or smoked milkfish. Either fresh or processed, milkfish is a popular seafood product of Indonesian fishing towns, such as Juwana near Semarang in Central Java, and Sidoarjo near Surabaya in East Java.
Milkfish is the most popular fish in Taiwanese cuisine; it is valued for its versatility as well as its tender meat and economical price. Popular presentations include as a topping for congee, pan fried, braised, and as fish balls. There is a milkfish museum in Anping District, and the city of Kaohsiung holds an annual milkfish festival.
Milkfish is an oily fish, and is rich in omega-3 fatty acids.
Aquaculture
History
Milkfish aquaculture first occurred around 1800 years ago in the Philippines and spread to Indonesia, Taiwan, and into the Pacific. Traditional milkfish aquaculture relied upon restocking ponds by collecting wild fry. This led to a wide range of variability in quality and quantity between seasons and regions.
In the late 1970s, farmers first successfully spawned breeding fish. However, broodstock were hard to obtain and egg viability was unreliable. In 1980, the first spontaneous spawning happened in sea cages. These eggs were found to be sufficient to generate a constant supply for farms.
Milkfish aquaculture accounts for 14% of all aquaculture production worldwide. Indonesia and the Philippines were the leading producers of the species in 2017. The fish is especially desirable for aquaculture because of its rapid growth rate, disease resistance, acclimation to captivity, low mortality, high market value, and high-quality flesh.
Farming methods
Fry are raised in either sea cages, large saline ponds (Philippines), or concrete tanks (Indonesia, Taiwan). Milkfish reach sexual maturity at , which takes five years in floating sea cages, but eight to 10 years in ponds and tanks. Once they reach (eight years), 3–4 million eggs are produced each breeding cycle. This is mainly done using natural environmental cues. However, attempts have been made using gonadotropin-releasing hormone analogue (GnRH-A) to induce spawning. Some still use the traditional wild stock method — capturing wild fry using nets.
Milkfish hatcheries, like most hatcheries, contain a variety of cultures, for example, rotifers, green algae, and brine shrimp, as well as the target species. They can either be intensive or semi-intensive. Semi-intensive methods are more profitable at US$6.67 per thousand fry in 1998, compared with $27.40 for intensive methods. However, the experience required of labour for semi-intensive hatcheries is higher than for intensive ones.
Milkfish nurseries in Taiwan are highly commercial and have densities of about 2000/L. Indonesia achieves similar densities, but has more backyard-type nurseries. The Philippines has integrated nurseries with grow-out facilities and densities of about 1000/L.
The three methods of outgrowing are pond culture, pen culture, and cage culture.
Shallow ponds are found mainly in Indonesia and the Philippines. These are shallow, brackish ponds with benthic algae, usually used as feed. They are usually excavated from nipa or mangrove areas and produce about 800 kg/ha/yr. Deep ponds (2–3 m) have more stable environments and their use began in 1970. They so far have shown less susceptibility to disease than shallow ponds.
In 1979, pen culture was introduced in Laguna de Bay, which had high primary production. This provided an excellent food source. Once this ran out, fertilizer was applied. Pen cultures are, however, susceptible to disease.
Cage culture occurs in coastal bays. These consist of large cages suspended in open water. They rely largely on natural sources of food.
Most food is natural (known as lab-lab) or a combination of phytoplankton and macroalgae. Traditionally, this was made on site; food is now made commercially to order.
Harvest occurs when the individuals are 20–40 cm long (250–500 g in weight). Partial harvests remove uniformly sized individuals with seine nets or gill nets. Total harvest removes all individuals and leads to a variety of sizes. Forced harvest happens when an environmental problem occurs, such as depleted oxygen due to algal blooms, and all stock is removed.
Possible parasites include nematodes, copepods, protozoa, and helminths. Many of these are treatable with chemicals and antibiotics.
Challenges
Modern milkfish aquaculture faces some challenges: acquiring viable milkfish fry, overcoming their status as a low-value species, attempting to expand outside of an ethnic market and struggling to find a sustainable cost-production balance. In 1987, Taiwan developed the outdoor hatchery technique, which resulted in lower-cost technology, and their fry production surpassed that of the wild. Since then, Taiwan has been one of the biggest hatchery fry producers in the Indo-Pacific. To stimulate market demand, sellers have been taking a fast-food approach, to make the product more accessible and desirable to common consumers.
Processing and marketing
Traditional post-harvest processing includes smoking, drying, and fermenting. Bottling, canning, and freezing are of more recent origin.
Demand has been steadily increasing since 1950. In 2005, 595,000 tonnes were harvested worth US$616 million.
A trend toward value-added products is occurring. In recent years, the possibility of using milkfish juveniles as bait for tuna long-lining has started to be investigated, opening up new markets for fry hatcheries.
Golden bangus
On April 21, 2012, a Filipino fisherman donated a milkfish with yellowish coloring to the Philippine Bureau of Fisheries and Aquatic Resources; it was later called the "golden bangus". However, the fish soon died, allegedly because of a low level of oxygen in the pond to which it was transferred.
Cultural significance
Milkfish have appeared in the traditions and mythology of the native Pohnpeians, Hawaiians, Tongans, and Nauruans in the Pacific.
Bangus Festival
In the city of Dagupan, Philippines, an annual Bangus Festival is held. The festival began as a bangus harvest, or ‘Gilon’, conceptualized in the 1990s by Mayor Al Fernandez. It has since become an extravagant event including street dance competitions; the street dance competition named Gilon-gilon ed Dalan was established to celebrate the bangus harvest. The festival also honors the city’s patron, Saint John, who was originally a fisherman and figured prominently in biblical stories of bountiful fish harvests. Dagupan is considered the country’s top producer of milkfish cultured in marine cages and pens. Two ‘species’ of milkfish are cultured in the city—the more popular of the two is the Bonuan Bangus.
| Biology and health sciences | Fishes | null |
232244 | https://en.wikipedia.org/wiki/Sacrum | Sacrum | The sacrum (plural: sacra or sacrums), in human anatomy, is a large, triangular bone at the base of the spine that forms by the fusing of the sacral vertebrae (S1–S5) between the ages of 18 and 30.
The sacrum is situated at the upper, back part of the pelvic cavity, between the two wings of the pelvis. It forms joints with four other bones. The two projections at the sides of the sacrum are called the alae (wings), and articulate with the ilium at the L-shaped sacroiliac joints. The upper part of the sacrum connects with the last lumbar vertebra (L5), and its lower part with the coccyx (tailbone) via the sacral and coccygeal cornua.
The sacrum has three different surfaces which are shaped to accommodate surrounding pelvic structures. Overall, it is concave (curved upon itself). The base of the sacrum, the broadest and uppermost part, is tilted forward as the sacral promontory internally. The central part is curved outward toward the posterior, allowing greater room for the pelvic cavity.
In all other quadrupedal vertebrates, the pelvic vertebrae undergo a similar developmental process to form a sacrum in the adult, even while the bony tail (caudal) vertebrae remain unfused. The number of sacral vertebrae varies slightly. For instance, the S1–S5 vertebrae of a horse will fuse, the S1–S3 of a dog will fuse, and four pelvic vertebrae of a rat will fuse between the lumbar and the caudal vertebrae of its tail.
The Stegosaurus dinosaur had a greatly enlarged neural canal in the sacrum, characterized as a "posterior brain case".
Structure
The sacrum is a complex structure providing support for the spine and accommodation for the spinal nerves. It also articulates with the hip bones. The sacrum has a base, an apex, and three surfaces – a pelvic, dorsal and a lateral surface. The base of the sacrum, which is broad and expanded, is directed upward and forward. On either side of the base is a large projection known as an ala of sacrum and these alae (wings) articulate with the sacroiliac joints. The alae support the psoas major muscles and the lumbosacral trunk which connects the lumbar plexus with the sacral plexus. In the articulated pelvis, the alae are continuous with the iliac fossa. Each ala is slightly concave from side to side, and convex from the back and gives attachment to a few of the fibers of the iliacus muscle. The posterior quarter of the ala represents the transverse process, and its anterior three-quarters the costal process of the first sacral segment. Each ala also serves as part of the border of the pelvic brim. The alae also form the base of the lumbosacral triangle. The iliolumbar ligament and lumbosacral ligaments are attached to the ala.
In the middle of the base is a large oval articular surface, the upper surface of the body of the first sacral vertebra, which is connected with the under surface of the body of the last lumbar vertebra by an intervertebral fibrocartilage. Behind this is the large triangular orifice of the sacral canal, which is completed by the lamina and spinous process of the first sacral vertebra. The superior articular processes project from it on either side; they are oval, concave, directed backward and medialward, like the superior articular processes of a lumbar vertebra. They are attached to the body of the first sacral vertebra and to each ala, by short thick pedicles; on the upper surface of each pedicle is a vertebral notch, which forms the lower part of the foramen between the last lumbar and first sacral vertebrae.
The apex is directed downward and presents an oval facet for articulation with the coccyx. The sacral canal as a continuation of the vertebral canal runs throughout the greater part of the sacrum. The sacral angle is the angle formed by the true conjugate with the two pieces of sacrum. Normally, it is greater than 60 degrees. A sacral angle of lesser degree suggests funneling of the pelvis.
Promontory
The sacral promontory marks part of the border of the pelvic inlet, and comprises the iliopectineal line and the linea terminalis. The sacral promontory articulates with the last lumbar vertebra to form the sacrovertebral angle, an angle of 30 degrees from the horizontal plane that provides a useful marker for a sling implant procedure.
Surfaces
The pelvic surface of the sacrum is concave from the top, and curved slightly from side to side. Its middle part is crossed by four transverse ridges, which correspond to the original planes of separation between the five sacral vertebrae. The body of the first segment is large and has the form of a lumbar vertebra; the bodies of the next bones get progressively smaller, are flattened from the back, and curved to shape themselves to the sacrum, being concave in front and convex behind. At each end of the transverse ridges, are the four anterior sacral foramina, diminishing in size in line with the smaller vertebral bodies. The foramina give exit to the anterior divisions of the sacral nerves and entrance to the lateral sacral arteries. Each part at the sides of the foramina is traversed by four broad, shallow grooves, which lodge the anterior divisions of the sacral nerves. They are separated by prominent ridges of bone which give origin to the piriformis muscle. If a sagittal section be made through the center of the sacrum, the bodies are seen to be united at their circumferences by bone, wide intervals being left centrally, which, in the fresh state, are filled by the intervertebral discs.
The dorsal surface of the sacrum is convex and narrower than the pelvic surface. In the middle line is the median sacral crest, surmounted by three or four tubercles—the rudimentary spinous processes of the upper three or four sacral vertebrae. On either side of the median sacral crest is a shallow sacral groove, which gives origin to the multifidus muscle. The floor of the groove is formed by the united laminae of the corresponding vertebrae. The laminae of the fifth sacral vertebra, and sometimes those of the fourth, do not meet at the back, resulting in a fissure known as the sacral hiatus in the posterior wall of the sacral canal. The sacral canal is a continuation of the spinal canal and runs throughout the greater part of the sacrum. Above the sacral hiatus, it is triangular in form. The canal lodges the sacral nerves, via the anterior and posterior sacral foramina.
On the lateral aspect of the sacral groove is a linear series of tubercles produced by the fusion of the articular processes which together form the indistinct medial sacral crest. The articular processes of the first sacral vertebra are large and oval-shaped. Their facets are concave from side to side, face to the back and middle, and articulate with the facets on the inferior processes of the fifth lumbar vertebra.
The tubercles of the inferior articular processes of the fifth sacral vertebra, known as the sacral cornua, are projected downward and are connected to the cornua of the coccyx. At the side of the articular processes are the four posterior sacral foramina; they are smaller in size and less regular in form than those at the front, and transmit the posterior divisions of the sacral nerves. On the side of the posterior sacral foramina is a series of tubercles, the transverse processes of the sacral vertebrae, and these form the lateral sacral crest. The transverse tubercles of the first sacral vertebra are large and very distinct; they, together with the transverse tubercles of the second vertebra, give attachment to the horizontal parts of the posterior sacroiliac ligaments; those of the third vertebra give attachment to the oblique fasciculi of the posterior sacroiliac ligaments; and those of the fourth and fifth to the sacrotuberous ligaments.
The lateral surface of the sacrum is broad above, but narrows into a thin edge below. The upper half presents in front an ear-shaped surface, the auricular surface, covered with cartilage in the immature state, for articulation with the ilium. Behind it is a rough surface, the sacral tuberosity, on which are three deep and uneven impressions, for the attachment of the posterior sacroiliac ligament. The lower half is thin, and ends in a projection called the inferior lateral angle. Medial to this angle is a notch, which is converted into a foramen by the transverse process of the first piece of the coccyx, and this transmits the anterior division of the fifth sacral nerve. The thin lower half of the lateral surface gives attachment to the sacrotuberous and sacrospinous ligaments, to some fibers of the gluteus maximus at the back and to the coccygeus in the front.
Articulations
The sacrum articulates with four bones:
the last lumbar vertebra above
the coccyx (tailbone) below
the ilium portion of the hip bone on either side
Rotation of the sacrum superiorly and anteriorly whilst the coccyx moves posteriorly relative to the ilium is sometimes called "nutation" (from the Latin term nutatio which means "nodding") and the reverse, postero-inferior motion of the sacrum relative to the ilium whilst the coccyx moves anteriorly, "counter-nutation".
In upright vertebrates, the sacrum is capable of slight independent movement along the sagittal plane. On bending backward the top (base) of the sacrum moves forward relative to the ilium; on bending forward the top moves back.
The sacrum refers to all of the parts combined. Its parts are called sacral vertebrae when referred to individually.
Variations
In some cases, the sacrum will consist of six pieces or be reduced in number to four. The bodies of the first and second vertebrae may fail to unite.
Development
The somites that give rise to the vertebral column begin to develop from head to tail along the length of the notochord. At day 20 of embryogenesis, the first four pairs of somites appear in the future occipital bone region. Developing at the rate of three or four a day, the next eight pairs form in the cervical region to develop into the cervical vertebrae; the next twelve pairs will form the thoracic vertebrae; the next five pairs the lumbar vertebrae and by about day 29, the sacral somites will appear to develop into the sacral vertebrae; finally on day 30, the last three pairs will form the coccyx.
Clinical significance
Congenital disorders
The congenital disorder spina bifida occurs as a result of a defective embryonic neural tube, and is characterised by the incomplete closure of the vertebral arch or of the surface of the vertebral canal. The most common sites for spina bifida malformations are the lumbar and sacral areas.
Another congenital disorder is that of caudal regression syndrome also known as sacral agenesis. This is characterised by an abnormal underdevelopment in the embryo (occurring by the seventh week) of the lower spine. Sometimes part of the coccyx is absent, or the lower vertebrae can be absent, or on occasion a small part of the spine is missing with no outward sign.
Fracture
Sacral fractures are relatively uncommon; however, they are often associated with neurological deficits. In the presence of neurological signs, they are mostly treated with surgical fixation.
Cancer
The sacrum is one of the main sites for the development of the sarcomas known as chordomas that are derived from the remnants of the embryonic notochord.
Other animals
In dogs, the sacrum is formed by three fused vertebrae. The sacrum in the horse is made up of five fused vertebrae. In birds, the sacral vertebrae are fused with the lumbar and some caudal and thoracic vertebrae to form a single structure called the synsacrum. In the frog, the ilium is elongated and forms a mobile joint with the sacrum that acts as an additional limb to give more power to its leaps.
History
English sacrum was introduced as a technical term in anatomy in the mid-18th century, as a shortening of the Late Latin name os sacrum "sacred bone", itself a translation of Greek ἱερόν ὀστέον, the term found in the writings of Galen. Prior to the adoption of sacrum, the bone was also called holy bone in English, paralleling German heiliges Bein or Heiligenbein (alongside Kreuzbein) and Dutch heiligbeen.
The origin of Galen's term is unclear. Supposedly the sacrum was the part of an animal offered in sacrifice (since the sacrum is the seat of the organs of procreation). Others attribute the adjective ἱερόν to the ancient belief that this specific bone would be indestructible. As the Greek adjective ἱερός may also mean "strong", it has also been suggested that os sacrum is a mistranslation of a term intended to mean "the strong bone". This is supported by the alternative Greek name μέγας σπόνδυλος ("large vertebra"), which was translated into Latin as vertebra magna.
In Classical Greek the bone was known as κλόνις (Latinized clonis); this term is cognate to Latin clunis "buttock", Sanskrit "haunch" and Lithuanian šlaunis "hip, thigh". The Latin word is found in the alternative Latin name of the sacrum, ossa clunium, as it were "bones of the buttocks". Because the os sacrum is broad and thick at its upper end, the sacrum is alternatively called os latum, "broad bone".
| Biology and health sciences | Skeletal system | Biology |
232249 | https://en.wikipedia.org/wiki/Crystal%20radio | Crystal radio | A crystal radio receiver, also called a crystal set, is a simple radio receiver, popular in the early days of radio. It uses only the power of the received radio signal to produce sound, needing no external power. It is named for its most important component, a crystal detector, originally made from a piece of crystalline mineral such as galena. This component is now called a diode.
Crystal radios are the simplest type of radio receiver and can be made with a few inexpensive parts, such as a wire for an antenna, a coil of wire, a capacitor, a crystal detector, and earphones (because a crystal set has insufficient power for a loudspeaker). However they are passive receivers, while other radios use an amplifier powered by current from a battery or wall outlet to make the radio signal louder. Thus, crystal sets produce rather weak sound and must be listened to with sensitive earphones, and can receive stations only within a limited range of the transmitter.
The rectifying property of a contact between a mineral and a metal was discovered in 1874 by Karl Ferdinand Braun. Crystals were first used as a detector of radio waves in 1894 by Jagadish Chandra Bose, in his microwave optics experiments. They were first used as a demodulator for radio communication reception in 1902 by G. W. Pickard. Crystal radios were the first widely used type of radio receiver, and the main type used during the wireless telegraphy era. Sold and homemade by the millions, the inexpensive and reliable crystal radio was a major driving force in the introduction of radio to the public, contributing to the development of radio as an entertainment medium with the beginning of radio broadcasting around 1920.
Around 1920, crystal sets were superseded by the first amplifying receivers, which used vacuum tubes. With this technological advance, crystal sets became obsolete for commercial use but continued to be built by hobbyists, youth groups, and the Boy Scouts mainly as a way of learning about the technology of radio. They are still sold as educational devices, and there are groups of enthusiasts devoted to their construction.
Crystal radios receive amplitude modulated (AM) signals, although FM designs have been built. They can be designed to receive almost any radio frequency band, but most receive the AM broadcast band. A few receive shortwave bands, but strong signals are required. The first crystal sets received wireless telegraphy signals broadcast by spark-gap transmitters at frequencies as low as 20 kHz.
History
Crystal radio was invented by a long, partly obscure chain of discoveries in the late 19th century that gradually evolved into more and more practical radio receivers in the early 20th century. The earliest practical use of crystal radio was to receive Morse code radio signals transmitted from spark-gap transmitters by early amateur radio experimenters. As electronics evolved, the ability to send voice signals by radio caused a technological explosion around 1920 that evolved into today's radio broadcasting industry.
Early years
Early radio telegraphy used spark gap and arc transmitters as well as high-frequency alternators running at radio frequencies. The coherer was the first means of detecting a radio signal; coherers, however, lacked the sensitivity to detect weak signals.
In the early 20th century, various researchers discovered that certain metallic minerals, such as galena, could be used to detect radio signals.
Indian physicist Jagadish Chandra Bose was first to use a crystal as a radio wave detector, using galena detectors to receive microwaves starting around 1894. In 1901, Bose filed for a U.S. patent for "A Device for Detecting Electrical Disturbances" that mentioned the use of a galena crystal; this was granted in 1904, #755840. On August 30, 1906, Greenleaf Whittier Pickard filed a patent for a silicon crystal detector, which was granted on November 20, 1906.
A crystal detector consists of a crystal, a thin wire or metal probe that contacts the crystal, and the stand or enclosure that holds those components in place. The most common crystal used was a small piece of galena; pyrite was also often used, as it was a more easily adjusted and stable mineral, and quite sufficient for urban signal strengths. Several other minerals also performed well as detectors. Another benefit of crystals was that they could demodulate amplitude modulated signals. This device brought radiotelephones and voice broadcasts to a public audience. Crystal sets represented an inexpensive and technologically simple method of receiving these signals at a time when the embryonic radio broadcasting industry was beginning to grow.
1920s and 1930s
In 1922 the (then named) United States Bureau of Standards released a publication entitled Construction and Operation of a Simple Homemade Radio Receiving Outfit. This article showed how almost any family having a member who was handy with simple tools could make a radio and tune into weather, crop prices, time, news and the opera. This design was significant in bringing radio to the general public. NBS followed that with a more selective two-circuit version, Construction and Operation of a Two-Circuit Radio Receiving Equipment With Crystal Detector, which was published the same year and is still frequently built by enthusiasts today.
In the beginning of the 20th century, radio had little commercial use, and radio experimentation was a hobby for many people. Some historians consider the autumn of 1920 to be the beginning of commercial radio broadcasting for entertainment purposes. Pittsburgh station KDKA, owned by Westinghouse, received its license from the United States Department of Commerce just in time to broadcast the Harding-Cox presidential election returns. In addition to reporting on special events, broadcasts to farmers of crop price reports were an important public service in the early days of radio.
In 1921, factory-made radios were very expensive. Since less-affluent families could not afford to own one, newspapers and magazines carried articles on how to build a crystal radio with common household items. To minimize the cost, many of the plans suggested winding the tuning coil on empty pasteboard containers such as oatmeal boxes, which became a common foundation for homemade radios.
Crystodyne
In early 1920s Russia, Oleg Losev was experimenting with applying voltage biases to various kinds of crystals for the manufacturing of radio detectors. The result was astonishing: with a zincite (zinc oxide) crystal he gained amplification. This was a negative resistance phenomenon, decades before the development of the tunnel diode. After the first experiments, Losev built regenerative and superheterodyne receivers, and even transmitters.
A crystodyne could be produced under primitive conditions; it could be made in a rural forge, unlike vacuum tubes and modern semiconductor devices. However, this discovery was not supported by the authorities and was soon forgotten; no device was produced in mass quantity beyond a few examples for research.
"Foxhole radios"
In addition to mineral crystals, the oxide coatings of many metal surfaces act as semiconductors (detectors) capable of rectification. Crystal radios have been improvised using detectors made from rusty nails, corroded pennies, and many other common objects.
When Allied troops were halted near Anzio, Italy during the spring of 1944, powered personal radio receivers were strictly prohibited as the Germans had equipment that could detect the local oscillator signal of superheterodyne receivers. Crystal sets lack power driven local oscillators, hence they could not be detected. Some resourceful soldiers constructed "crystal" sets from discarded materials to listen to news and music. One type used a blue steel razor blade and a pencil lead for a detector. The lead point touching the semiconducting oxide coating (magnetite) on the blade formed a crude point-contact diode. By carefully adjusting the pencil lead on the surface of the blade, they could find spots capable of rectification. The sets were dubbed "foxhole radios" by the popular press, and they became part of the folklore of World War II.
In some German-occupied countries during WW2 there were widespread confiscations of radio sets from the civilian population. This led determined listeners to build their own clandestine receivers which often amounted to little more than a basic crystal set. Anyone doing so risked imprisonment or even death if caught, and in most of Europe the signals from the BBC (or other allied stations) were not strong enough to be received on such a set.
"Rocket Radio"
In the late 1950s, the compact "rocket radio", shaped like a rocket and typically imported from Japan, was introduced, and gained moderate popularity. It used a piezoelectric crystal earpiece (described later in this article), a ferrite core to reduce the size of the tuning coil (also described later), and a small fixed germanium diode, which did not require adjustment. To tune in stations, the user moved the rocket nosepiece, which, in turn, moved a ferrite core inside a coil, changing the inductance in a tuned circuit. Earlier crystal radios suffered severely reduced Q, and hence selectivity, because of the electrical load of the earphone or earpiece. Furthermore, with its efficient earpiece, the "rocket radio" did not require a large antenna to gather enough signal. With much higher Q, it could typically tune in several strong local stations, while an earlier radio might only receive one station, possibly with other stations heard in the background.
For listening in areas where an electric outlet was not available, the "rocket radio" served as an alternative to the vacuum tube portable radios of the day, which required expensive and heavy batteries. Children could hide "rocket radios" under the covers, to listen to radio when their parents thought they were sleeping. Children could take the radios to public swimming pools and listen to radio when they got out of the water, clipping the ground wire to a chain link fence surrounding the pool. The rocket radio was also used as an emergency radio, because it did not require batteries or an AC outlet.
The rocket radio was available in several rocket styles, as well as other styles that featured the same basic circuit.
Transistor radios had become available at the time, but were expensive. Once those radios dropped in price, the rocket radio declined in popularity.
Later years
While it never regained the popularity and general use that it enjoyed at its beginnings, the crystal radio circuit is still used. The Boy Scouts have kept the construction of a radio set in their program since the 1920s. A large number of prefabricated novelty items and simple kits could be found through the 1950s and 1960s, and many children with an interest in electronics built one.
Building crystal radios was a craze in the 1920s, and again in the 1950s. Recently, hobbyists have started designing and building examples of the early instruments. Much effort goes into the visual appearance of these sets as well as their performance. Annual crystal radio 'DX' contests (long distance reception) and building contests allow these set owners to compete with each other and form a community of interest in the subject.
Basic principles
A crystal radio can be thought of as a radio receiver reduced to its essentials. It consists of at least these components:
An antenna in which electric currents are induced by electromagnetic radiation.
A resonant circuit (tuned circuit) which selects the frequency of the desired radio station from all the radio signals received by the antenna. The tuned circuit consists of a coil of wire (called an inductor) and a capacitor connected together. The circuit has a resonant frequency, and allows radio waves at that frequency to pass through to the detector while largely blocking waves at other frequencies. One or both of the coil or capacitor is adjustable, allowing the circuit to be tuned to different frequencies. In some circuits a capacitor is not used and the antenna serves this function, as an antenna that is shorter than a quarter-wavelength of the radio waves it is meant to receive is capacitive.
A semiconductor crystal detector that demodulates the radio signal to extract the audio signal (modulation). The crystal detector functions as a square law detector, demodulating the radio frequency alternating current to its audio frequency modulation. The detector's audio frequency output is converted to sound by the earphone. Early sets used a "cat whisker detector" consisting of a small piece of crystalline mineral such as galena with a fine wire touching its surface. The crystal detector was the component that gave crystal radios their name. Modern sets use modern semiconductor diodes, although some hobbyists still experiment with crystal or other detectors.
An earphone to convert the audio signal to sound waves so they can be heard. The low power produced by a crystal receiver is insufficient to power a loudspeaker, hence earphones are used.
As a crystal radio has no power supply, the sound power produced by the earphone comes solely from the transmitter of the radio station being received, via the radio waves captured by the antenna. The power available to a receiving antenna decreases with the square of its distance from the radio transmitter. Even for a powerful commercial broadcasting station, if it is more than a few miles from the receiver the power received by the antenna is very small, typically measured in microwatts or nanowatts. In modern crystal sets, signals as weak as 50 picowatts at the antenna can be heard. Crystal radios can receive such weak signals without using amplification only due to the great sensitivity of human hearing, which can detect sounds with an intensity of only 10⁻¹⁶ W/cm². Therefore, crystal receivers have to be designed to convert the energy from the radio waves into sound waves as efficiently as possible. Even so, they are usually only able to receive stations within distances of about 25 miles for AM broadcast stations, although the radiotelegraphy signals used during the wireless telegraphy era could be received at hundreds of miles, and crystal receivers were even used for transoceanic communication during that period.
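To make the inverse-square relationship concrete, the following Python sketch scales an assumed reference power with distance; the 1 microwatt at 1 km reference is hypothetical, chosen only for illustration, not a figure from any measurement.

def received_power(p_ref_watts, d_ref_km, d_km):
    # Scale a reference received power by the inverse square of distance.
    return p_ref_watts * (d_ref_km / d_km) ** 2

p_ref = 1e-6  # assume 1 microwatt collected at 1 km (hypothetical)
for d in (1, 5, 25, 100):
    print(f"{d:>4} km: {received_power(p_ref, 1, d):.2e} W")

At tens of kilometres the available power has already fallen by orders of magnitude, which is why conversion efficiency dominates crystal set design.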
Design
Commercial passive receiver development was abandoned with the advent of reliable vacuum tubes around 1920, and subsequent crystal radio research was primarily done by radio amateurs and hobbyists. Many different circuits have been used. The following sections discuss the parts of a crystal radio in greater detail.
Antenna
The antenna converts the energy in the electromagnetic radio waves to an alternating electric current in the antenna, which is connected to the tuning coil. Since, in a crystal radio, all the power comes from the antenna, it is important that the antenna collect as much power from the radio wave as possible. The larger an antenna, the more power it can intercept. Antennas of the type commonly used with crystal sets are most effective when their length is close to a multiple of a quarter-wavelength of the radio waves they are receiving. Since the length of the waves used with crystal radios is very long (AM broadcast band waves are long) the antenna is made as long as possible, from a long wire, in contrast to the whip antennas or ferrite loopstick antennas used in modern radios.
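As a rough illustration of the sizes involved, wavelength can be computed as the speed of light divided by frequency; the Python sketch below (frequencies chosen to span the AM broadcast band) shows why a quarter-wave antenna at these frequencies is a very long wire.

C_LIGHT = 299_792_458.0  # speed of light in m/s

for f_khz in (540, 1000, 1600):
    wavelength_m = C_LIGHT / (f_khz * 1e3)
    print(f"{f_khz} kHz: wavelength = {wavelength_m:.0f} m, quarter wave = {wavelength_m / 4:.0f} m")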
Serious crystal radio hobbyists use "inverted L" and "T" type antennas, consisting of hundreds of feet of wire suspended as high as possible between buildings or trees, with a feed wire attached in the center or at one end leading down to the receiver. However, more often, random lengths of wire dangling out windows are used. A popular practice in early days (particularly among apartment dwellers) was to use existing large metal objects, such as bedsprings, fire escapes, and barbed wire fences as antennas.
Ground
The wire antennas used with crystal receivers are monopole antennas which develop their output voltage with respect to ground. The receiver thus requires a connection to ground (the earth) as a return circuit for the current. The ground wire was attached to a radiator, water pipe, or a metal stake driven into the ground. In early days if an adequate ground connection could not be made a counterpoise was sometimes used. A good ground is more important for crystal sets than it is for powered receivers, as crystal sets are designed to have a low input impedance needed to transfer power efficiently from the antenna. A low resistance ground connection (preferably below 25 Ω) is necessary because any resistance in the ground reduces available power from the antenna. In contrast, modern receivers are voltage-driven devices, with high input impedance, hence little current flows in the antenna/ground circuit. Also, mains powered receivers are grounded adequately through their power cords, which are in turn attached to the earth through the building wiring.
Tuned circuit
The tuned circuit, consisting of a coil and a capacitor connected together, acts as a resonator, similar to a tuning fork. Electric charge, induced in the antenna by the radio waves, flows rapidly back and forth between the plates of the capacitor through the coil. The circuit has a high impedance at the desired radio signal's frequency, but a low impedance at all other frequencies. Hence, signals at undesired frequencies pass through the tuned circuit to ground, while the desired frequency is instead passed on to the detector (diode) and stimulates the earpiece and is heard. The frequency of the station received is the resonant frequency f of the tuned circuit, determined by the capacitance C of the capacitor and the inductance L of the coil: f = 1 / (2π√(LC)).
The circuit can be adjusted to different frequencies by varying the inductance (L), the capacitance (C), or both, "tuning" the circuit to the frequencies of different radio stations. In the lowest-cost sets, the inductor was made variable via a spring contact pressing against the windings that could slide along the coil, thereby introducing a larger or smaller number of turns of the coil into the circuit, varying the inductance. Alternatively, a variable capacitor is used to tune the circuit. Some modern crystal sets use a ferrite core tuning coil, in which a ferrite magnetic core is moved into and out of the coil, thereby varying the inductance by changing the magnetic permeability (this eliminated the less reliable mechanical contact).
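A minimal Python sketch of this relation, using an assumed 240 µH coil and a few illustrative settings of a variable capacitor (values typical of hobbyist sets, not taken from the text), shows how varying C sweeps the resonant frequency across the AM broadcast band.

import math

def resonant_frequency(l_henries, c_farads):
    # f = 1 / (2*pi*sqrt(L*C))
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

L = 240e-6  # 240 microhenry coil (assumed)
for C in (50e-12, 150e-12, 365e-12):  # illustrative variable-capacitor settings
    print(f"C = {C * 1e12:.0f} pF -> f = {resonant_frequency(L, C) / 1e3:.0f} kHz")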
The antenna is an integral part of the tuned circuit and its reactance contributes to determining the circuit's resonant frequency. Antennas usually act as a capacitance, as antennas shorter than a quarter-wavelength have capacitive reactance. Many early crystal sets did not have a tuning capacitor, and relied instead on the capacitance inherent in the wire antenna (in addition to significant parasitic capacitance in the coil) to form the tuned circuit with the coil.
The earliest crystal receivers did not have a tuned circuit at all, and just consisted of a crystal detector connected between the antenna and ground, with an earphone across it. Since this circuit lacked any frequency-selective elements besides the broad resonance of the antenna, it had little ability to reject unwanted stations, so all stations within a wide band of frequencies were heard in the earphone (in practice the most powerful usually drowns out the others). It was used in the earliest days of radio, when only one or two stations were within a crystal set's limited range.
Impedance matching
An important principle used in crystal radio design to transfer maximum power to the earphone is impedance matching. The maximum power is transferred from one part of a circuit to another when the impedance of one circuit is the complex conjugate of that of the other; this implies that the two circuits should have equal resistance. However, in crystal sets, the impedance of the antenna-ground system (around 10–200 ohms) is usually lower than the impedance of the receiver's tuned circuit (thousands of ohms at resonance), and also varies depending on the quality of the ground attachment, length of the antenna, and the frequency to which the receiver is tuned.
Therefore, in improved receiver circuits, in order to match the antenna impedance to the receiver's impedance, the antenna was connected across only a portion of the tuning coil's turns. This made the tuning coil act as an impedance matching transformer (in an autotransformer connection) in addition to providing the tuning function. The antenna's low resistance was increased (transformed) by a factor equal to the square of the turns ratio (the ratio of the number of turns the antenna was connected to, to the total number of turns of the coil), to match the resistance across the tuned circuit. In the "two-slider" circuit, popular during the wireless era, both the antenna and the detector circuit were attached to the coil with sliding contacts, allowing (interactive) adjustment of both the resonant frequency and the turns ratio. Alternatively a multiposition switch was used to select taps on the coil. These controls were adjusted until the station sounded loudest in the earphone.
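Since the reflected impedance scales with the square of the turns ratio, the required tap position follows from a square root. The Python sketch below is illustrative only; the 50 Ω antenna impedance, 50 kΩ resonant impedance, and 90-turn coil are assumed values.

import math

def tap_fraction(z_antenna, z_tuned):
    # Fraction of the coil's turns the antenna tap should span so that
    # z_antenna, reflected by the square of the turns ratio, equals z_tuned.
    return math.sqrt(z_antenna / z_tuned)

z_ant, z_res, total_turns = 50.0, 50_000.0, 90  # assumed values
frac = tap_fraction(z_ant, z_res)
print(f"tap at {frac:.1%} of the winding, about turn {round(frac * total_turns)}")
# check: the reflected impedance z_ant / frac**2 equals z_res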
Problem of selectivity
One of the drawbacks of crystal sets is that they are vulnerable to interference from stations near in frequency to the desired station. Often two or more stations are heard simultaneously. This is because the simple tuned circuit does not reject nearby signals well; it allows a wide band of frequencies to pass through, that is, it has a large bandwidth (low Q factor) compared to modern receivers, giving the receiver low selectivity.
The crystal detector worsened the problem, because it has relatively low resistance, thus it "loaded" the tuned circuit, drawing significant current and thus damping the oscillations, reducing its Q factor so it allowed through a broader band of frequencies. In many circuits, the selectivity was improved by connecting the detector and earphone circuit to a tap across only a fraction of the coil's turns. This reduced the impedance loading of the tuned circuit, as well as improving the impedance match with the detector.
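The loading effect can be illustrated numerically. For a parallel resonant circuit, Q = R / (2πf₀L) and the bandwidth is B = f₀ / Q; in the Python sketch below, the station frequency, coil inductance, and load resistances are assumed values chosen only to show the trend.

import math

f0 = 1e6    # 1 MHz station (assumed)
L = 240e-6  # coil inductance (assumed)
x_coil = 2 * math.pi * f0 * L  # coil reactance at resonance

# compare a lightly loaded circuit with one loaded by a low-resistance detector
for r_parallel in (200_000.0, 10_000.0):
    q = r_parallel / x_coil  # Q of a parallel resonant circuit
    print(f"R = {r_parallel:>9,.0f} ohm: Q = {q:5.1f}, bandwidth = {f0 / q / 1e3:.0f} kHz")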
Inductive coupling
In more sophisticated crystal receivers, the tuning coil is replaced with an adjustable air core antenna coupling transformer, which improves the selectivity by a technique called loose coupling. This consists of two magnetically coupled coils of wire, one (the primary) attached to the antenna and ground and the other (the secondary) attached to the rest of the circuit. The current from the antenna creates an alternating magnetic field in the primary coil, which induces a current in the secondary coil that is then rectified to power the earphone. Each of the coils functions as a tuned circuit; the primary coil resonates with the capacitance of the antenna (or sometimes another capacitor), and the secondary coil resonates with the tuning capacitor. Both the primary and secondary are tuned to the frequency of the station. The two circuits interact to form a resonant transformer.
Reducing the coupling between the coils, by physically separating them so that less of the magnetic field of one intersects the other, reduces the mutual inductance, narrows the bandwidth, and results in much sharper, more selective tuning than that produced by a single tuned circuit. However, the looser coupling also reduced the power of the signal passed to the second circuit. The transformer was made with adjustable coupling, to allow the listener to experiment with various settings to gain the best reception.
One design common in early days, called a "loose coupler", consisted of a smaller secondary coil inside a larger primary coil. The smaller coil was mounted on a rack so it could be slid linearly in or out of the larger coil. If radio interference was encountered, the smaller coil would be slid further out of the larger, loosening the coupling, narrowing the bandwidth, and thereby rejecting the interfering signal.
The antenna coupling transformer also functioned as an impedance matching transformer, that allowed a better match of the antenna impedance to the rest of the circuit. One or both of the coils usually had several taps which could be selected with a switch, allowing adjustment of the number of turns of that transformer and hence the "turns ratio".
Coupling transformers were difficult to adjust, because the three adjustments, the tuning of the primary circuit, the tuning of the secondary circuit, and the coupling of the coils, were all interactive, and changing one affected the others.
Crystal detector
The crystal detector demodulates the radio frequency signal, extracting the modulation (the audio signal which represents the sound waves) from the radio frequency carrier wave. In early receivers, a type of crystal detector often used was a "cat whisker detector". The point of contact between the wire and the crystal acted as a semiconductor diode. The cat whisker detector constituted a crude Schottky diode that allowed current to flow better in one direction than in the opposite direction. Modern crystal sets use modern semiconductor diodes. The crystal functions as an envelope detector, rectifying the alternating current radio signal to a pulsing direct current, the peaks of which trace out the audio signal, so it can be converted to sound by the earphone, which is connected to the detector.
The rectified current from the detector contains radio frequency pulses at the carrier frequency, which are blocked by the high inductive reactance of the coils of early earphones and do not pass well through them. Hence, a small capacitor called a bypass capacitor is often placed across the earphone terminals; its low reactance at radio frequency bypasses these pulses around the earphone to ground. In some sets the earphone cord had enough capacitance that this component could be omitted.
Only certain sites on the crystal surface functioned as rectifying junctions, and the device was very sensitive to the pressure of the crystal-wire contact, which could be disrupted by the slightest vibration. Therefore, a usable contact point had to be found by trial and error before each use. The operator dragged the wire across the crystal surface until a radio station or "static" sounds were heard in the earphones. Alternatively, some radios used a battery-powered buzzer attached to the input circuit to adjust the detector. The spark at the buzzer's electrical contacts served as a weak source of static, so when the detector began working, the buzzing could be heard in the earphones. The buzzer was then turned off, and the radio tuned to the desired station.
Galena (lead sulfide) was the most common crystal used, but various other types of crystals were also used, the most common being iron pyrite (fool's gold, FeS2), silicon, molybdenite (MoS2), silicon carbide (carborundum, SiC), and a zincite-bornite (ZnO-Cu5FeS4) crystal-to-crystal junction trade-named Perikon. Crystal radios have also been improvised from a variety of common objects, such as blue steel razor blades and lead pencils, rusty needles, and pennies. In these, a semiconducting layer of oxide or sulfide on the metal surface is usually responsible for the rectifying action.
In modern sets, a semiconductor diode is used for the detector, which is much more reliable than a crystal detector and requires no adjustments. Germanium diodes (or sometimes Schottky diodes) are used instead of silicon diodes, because their lower forward voltage drop (roughly 0.3 V compared to 0.6 V) makes them more sensitive.
All semiconductor detectors function rather inefficiently in crystal receivers, because the voltage input to the detector is too low to produce much difference between the stronger conduction in the forward direction and the weaker conduction in the reverse direction. To improve the sensitivity of some of the early crystal detectors, such as silicon carbide, a small forward bias voltage was applied across the detector by a battery and potentiometer. The bias moves the diode's operating point to a more favorable voltage-current point (impedance) on its detection curve, producing more signal voltage at the expense of less signal current (higher impedance). There is a limit to the benefit this produces, depending on the other impedances of the radio. The battery did not power the radio, but only provided the biasing voltage, which required little power.
Earphones
The requirements for earphones used in crystal sets are different from earphones used with modern audio equipment. They have to be efficient at converting the electrical signal energy to sound waves, while most modern earphones sacrifice efficiency in order to gain high fidelity reproduction of the sound. In early homebuilt sets, the earphones were the most costly component.
The early earphones used with wireless-era crystal sets had moving iron drivers that worked in a way similar to the horn loudspeakers of the period. Each earpiece contained a permanent magnet about which was a coil of wire forming an electromagnet. Both magnetic poles were close to a steel diaphragm. When the audio signal from the radio passed through the electromagnet's windings, it created a varying magnetic field that augmented or diminished that of the permanent magnet. This varied the force of attraction on the diaphragm, causing it to vibrate. The vibrations of the diaphragm push and pull on the air in front of it, creating sound waves. Standard headphones used in telephone work had a low impedance, often 75 Ω, and required more current than a crystal radio could supply. Therefore, the type used with crystal set radios (and other sensitive equipment) was wound with more turns of finer wire giving it a high impedance of 2000–8000 Ω.
Modern crystal sets use piezoelectric crystal earpieces, which are much more sensitive and also smaller. They consist of a piezoelectric crystal with electrodes attached to each side, glued to a light diaphragm. When the audio signal from the radio set is applied to the electrodes, it causes the crystal to vibrate, vibrating the diaphragm. Crystal earphones are designed as ear buds that plug directly into the ear canal of the wearer, coupling the sound more efficiently to the eardrum. Their resistance is much higher (typically megohms) so they do not greatly "load" the tuned circuit, allowing increased selectivity of the receiver. The piezoelectric earphone's higher resistance, in parallel with its capacitance of around 9 pF, creates a filter that allows the passage of low frequencies, but blocks the higher frequencies. In that case a bypass capacitor is not needed (although in practice a small one of around 0.68 to 1 nF is often used to help improve quality), but instead a 10–100 kΩ resistor must be added in parallel with the earphone's input.
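A small Python sketch of this filtering behavior models the earpiece as a resistance in parallel with the ~9 pF capacitance mentioned above; the 1 MΩ resistance is an assumed representative value. The impedance is high at audio frequencies and low at radio frequencies, so audio voltage develops across the earpiece while radio-frequency components are shunted.

import math

def parallel_rc_impedance(r_ohms, c_farads, f_hz):
    # Magnitude of R in parallel with C: R*Xc / sqrt(R^2 + Xc^2)
    xc = 1.0 / (2 * math.pi * f_hz * c_farads)
    return (r_ohms * xc) / math.hypot(r_ohms, xc)

R, C = 1e6, 9e-12  # resistance assumed; ~9 pF from the text
for f in (1e3, 1e6):  # 1 kHz audio tone vs 1 MHz carrier
    print(f"{f / 1e3:>6.0f} kHz: |Z| = {parallel_rc_impedance(R, C, f):,.0f} ohm")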
Although the low power produced by crystal radios is typically insufficient to drive a loudspeaker, some homemade 1960s sets used one, with an audio transformer to match the low impedance of the speaker to the circuit. Similarly, modern low-impedance (8 Ω) earphones cannot be used unmodified in crystal sets because the receiver does not produce enough current to drive them; they are sometimes used by adding an audio transformer to match their low impedance to the higher impedance of the rest of the circuit.
Use as a power source
A crystal radio tuned to a strong local transmitter can be used as a power source for a second amplified receiver of a distant station that cannot be heard without amplification.
There is a long history of unsuccessful attempts and unverified claims to recover the power in the carrier of the received signal itself. Conventional crystal sets use half-wave rectifiers. As AM signals have a modulation factor of only 30% by voltage at peaks, no more than 9% (0.3² = 0.09) of the received signal power is actual audio information, and 91% is just rectified DC voltage. The 30% figure is the standard used for radio testing and is based on the average modulation factor for speech; properly designed and managed AM transmitters can be run to 100% modulation on peaks without causing distortion or "splatter" (excess sideband energy that radiates outside the intended signal bandwidth). Given that the audio signal is unlikely to be at peak all the time, the ratio of energy is, in practice, even greater. Considerable effort was made to convert this DC voltage into sound energy. Some earlier attempts include a one-transistor amplifier in 1966. Sometimes efforts to recover this power are confused with other efforts to produce a more efficient detection. This history continues now with designs as elaborate as the "inverted two-wave switching power unit".
Gallery
During the wireless telegraphy era before 1920, crystal receivers were "state of the art", and sophisticated models were produced. After 1920 crystal sets became the cheap alternative to vacuum tube radios, used in emergencies and by youth and the poor.
| Technology | Broadcasting | null |
232411 | https://en.wikipedia.org/wiki/Sneeze | Sneeze | A sneeze (also known as sternutation) is a semi-autonomous, convulsive expulsion of air from the lungs through the nose and mouth, usually caused by foreign particles irritating the nasal mucosa. A sneeze expels air forcibly from the mouth and nose in an explosive, spasmodic involuntary action. This action allows for mucus to escape through the nasal cavity and saliva to escape from the oral cavity. Sneezing is possibly linked to sudden exposure to bright light (known as photic sneeze reflex), sudden change (drop) in temperature, breeze of cold air, a particularly full stomach, exposure to allergens, or viral infection. Because sneezes can spread disease through infectious aerosol droplets, it is recommended to cover one's mouth and nose with the forearm, the inside of the elbow, a tissue or a handkerchief while sneezing. In addition to covering the mouth, looking down is also recommended, to change the direction of the droplet spread and avoid high concentrations at human breathing height.
The function of sneezing is to expel mucus containing foreign particles or irritants and cleanse the nasal cavity. During a sneeze, the soft palate and palatine uvula depress while the back of the tongue elevates to partially close the passage to the mouth, creating a venturi (similar to a carburetor) due to Bernoulli's principle so that air ejected from the lungs is accelerated through the mouth and thus creating a low pressure point at the back of the nose. This way air is forced in through the front of the nose and the expelled mucus and contaminants are launched out the mouth. Sneezing with the mouth closed does expel mucus through the nose but is not recommended because it creates a very high pressure in the head and is potentially harmful.
Sneezing cannot occur during sleep due to REM atonia – a bodily state where motor neurons are not stimulated and reflex signals are not relayed to the brain. Sufficient external stimulants, however, may cause a person to wake from sleep to sneeze, but any sneezing occurring afterwards would take place with a partially awake status at minimum.
When sneezing, the eyes close automatically as part of the involuntary sneeze reflex.
Description
Sneezing typically occurs when foreign particles or sufficient external stimulants pass through the nasal hairs to reach the nasal mucosa. This triggers the release of histamines, which irritate the nerve cells in the nose, resulting in signals being sent to the brain to initiate the sneeze through the trigeminal nerve network. The brain then relays this initial signal, activates the pharyngeal and tracheal muscles and creates a large opening of the nasal and oral cavities, resulting in a powerful release of air and bioparticles. The powerful nature of a sneeze is attributed to its involvement of numerous organs of the upper body – it is a reflexive response involving the face, throat, and chest muscles. Sneezing is also triggered by sinus nerve stimulation caused by nasal congestion and allergies.
The neural regions involved in the sneeze reflex are located in the brainstem along the ventromedial part of the spinal trigeminal nucleus and the adjacent pontine-medullary lateral reticular formation. This region appears to control the epipharyngeal, intrinsic laryngeal and respiratory muscles, and the combined activity of these muscles serve as the basis for the generation of a sneeze.
The sneeze reflex involves contraction of a number of different muscles and muscle groups throughout the body, typically including the eyelids. The common suggestion that it is impossible to sneeze with one's eyes open is, however, inaccurate. Other than irritating foreign particles, allergies or possible illness, another stimulus is sudden exposure to bright light – a condition known as photic sneeze reflex (PSR). Walking out of a dark building into sunshine may trigger PSR, or the ACHOO (autosomal dominant compulsive helio-ophthalmic outbursts of sneezing) syndrome as it is also called. The tendency to sneeze upon exposure to bright light is an autosomal dominant trait and affects 18–35% of the human population. A rarer trigger, observed in some individuals, is the fullness of the stomach immediately after a large meal. This is known as snatiation and is regarded as a medical disorder passed along genetically as an autosomal dominant trait.
Epidemiology
While generally harmless in healthy individuals, sneezes spread disease through the infectious aerosol droplets, commonly ranging from 0.5 to 5 μm. A sneeze can produce 40,000 droplets. To reduce the possibility of thus spreading disease (such as the flu), one holds the forearm, the inside of the elbow, a tissue or a handkerchief in front of one's mouth and nose when sneezing. Using one's hand for that purpose has recently fallen into disuse as it is considered inappropriate, since it promotes spreading germs through human contact (such as handshaking) or by commonly touched objects (most notably doorknobs).
Until recently, the maximum visible distance over which the sneeze plumes (or puffs) travel was observed at , and the maximum sneeze velocity derived was 4.5 m/s (about 10 mph). In 2020, sneezes were recorded generating plumes of up to .
Prevention
Proven methods to reduce sneezing generally advocate reducing interaction with irritants, such as keeping pets out of the house to avoid animal dander; ensuring the timely and continuous removal of dirt and dust particles through proper housekeeping; replacing filters for furnaces and air-handling units; air filtration devices and humidifiers; and staying away from industrial and agricultural zones. Tickling the roof of the mouth with the tongue can stop a sneeze. Some people, however, find sneezes to be pleasurable and would not want to prevent them.
Holding in sneezes, such as by pinching the nose or holding one's breath, is not recommended as the air pressure places undue stress on the lungs and airways. One computer simulation suggests holding in a sneeze results in a burst of air pressure of 39 kPa, approximately 24 times that of a normal sneeze.
In 1884, biologist Henry Walter Bates elucidated the impact of light on the sneezing reflex (Bates H.W. 1881–84. Biologia Centrali-Americana Insecta. Coleoptera. Volume I, Part 1.). He observed that individuals were only capable of sneezing when they felt in control of their entire environment. Consequently, he inferred that people were unable to sneeze in the dark. However, this hypothesis was later debunked.
History
In ancient Greece, sneezes were believed to be prophetic signs from the gods. In 401 BC, for instance, the Athenian general Xenophon gave a speech exhorting his fellow soldiers to fight against the Persians. A soldier underscored his conclusion with a sneeze. Thinking that this sneeze was a favorable sign from the gods, the soldiers were impressed. Another divine moment of sneezing for the Greeks occurs in the story of Odysseus. His waiting wife Penelope, hearing Odysseus may be alive, says that he and his son would take revenge on the suitors if he were to return. At that moment, their son sneezes loudly and Penelope laughs with joy, reassured that it is a sign from the gods (Odyssey 17: 541–550). It may be because this belief survived through the centuries that in certain parts of Greece today, when someone is asserting something and the listener sneezes promptly at the end of the assertion, the former responds "bless you and I am speaking the truth" or "bless you and here is the truth" (ya sou ki alithia leo, or ya sou ke na ki i alithia). A similar practice is also followed in India. In Flemish, when someone makes a statement that is not self-evident and the speaker or a listener sneezes, one of the listeners will often say "It is beniesd", literally "It's sneezed upon", as if proof of its truth – usually self-ironically recalling this old superstitious habit, without suggesting doubt or intending actual confirmation, but making any apology by the sneezer for the interruption superfluous, as the remark is received with smiles.
In Europe, principally around the early Middle Ages, it was believed that one's life was in fact tied to one's breath – a belief reflected in the word "expire" (originally meaning "to exhale") gaining the additional meaning of "to come to an end" or "to die". This connection, coupled with the significant amount of breath expelled from the body during a sneeze, had likely led people to believe that sneezing could easily be fatal. Such a theory could explain the reasoning behind the traditional English phrase, "God bless you", in response to a sneeze, the origins of which are not entirely clear. Sir Raymond Henry Payne Crawfurd, for instance, the registrar of the Royal College of Physicians, in his 1909 book, "The Last Days of Charles II", states that, when the controversial monarch was on his deathbed, his medical attendants administered a concoction of cowslips and extract of ammonia to promote sneezing. However, it is not known if this promotion of sneezing was done to hasten his death (as coup de grâce) or as an ultimate attempt at treatment.
In certain parts of East Asia, particularly in Chinese culture, Korean culture, Japanese culture and Vietnamese culture, a sneeze without an obvious cause was generally perceived as a sign that someone was talking about the sneezer at that very moment. This can be seen in the Book of Songs (a collection of Chinese poems) in ancient China as early as 1000 BC, and in Japan this belief is still depicted in present-day manga and anime. In China, Vietnam, South Korea, and Japan, for instance, there is a superstition that talking behind someone's back causes the person being talked about to sneeze; as such, the sneezer can tell if something good is being said (one sneeze), if someone is thinking about them (two sneezes in a row), or even if someone is in love with them (three sneezes in a row), or whether they are about to catch a cold (multiple sneezes).
Parallel beliefs are known to exist around the world, particularly in contemporary Greek, Slavic, Celtic, English, French, and Indian cultures. Similarly, in Nepal, sneezers are believed to be remembered by someone at that particular moment.
In English, the onomatopoeia for a sneeze is usually spelled 'achoo', and similar renderings occur in many other languages.
Culture
In Indian culture, especially in northern parts of India, Bengali (Bangladesh and Bengal of India) culture and also in Iran, it has been a common superstition that a sneeze taking place before the start of any work was a sign of impending bad interruption. It was thus customary to pause in order to drink water or break any work rhythm before resuming the job at hand in order to prevent any misfortune from occurring.
In Polish culture, especially in the Kresy Wschodnie borderlands, a popular belief persists that sneezes may be an inauspicious sign that, depending on the local version, either someone unspecified or one's mother-in-law speaks ill of the person sneezing at that moment. In other regions, however, this superstition concerns hiccups rather than sneezing. As with other Catholic countries, such as Mexico, Italy, or Ireland, the remnants of pagan culture are fostered in Polish peasant idiosyncratic superstitions.
The practice among Islamic culture, in turn, has largely been based on various prophetic traditions and the teachings of Muhammad. An example of this is Al-Bukhaari's narrations from Abu Hurayrah that Muhammad once said: When one of you sneezes, let him say, "Al-hamdu-Lillah" (Praise be to God), and let his brother or companion say to him, "Yarhamuk Allah" (May God have mercy on you). If he says, "Yarhamuk-Allah", then let [the sneezer] say, "Yahdeekum Allah wa yuslihu baalakum" (May God guide you and rectify your condition).
Verbal responses
In English-speaking countries, one common verbal response to another person's sneeze is "[May God] bless you". Even with "God", the declaration may be said by a person without religious intent. Another, less common, verbal response in the United States and Canada to another's sneeze is "Gesundheit", which is a German word that means, appropriately, 'health'.
Several hypotheses exist for why the custom arose of saying "bless you" or "God bless you" in the context of sneezing:
Some say it came into use during the plague pandemics of the 14th century. Blessing the individual after showing such a symptom was thought to prevent possible impending death due to the lethal disease.
In Renaissance times, a superstition was formed claiming one's heart stopped for a very brief moment during the sneeze; saying bless you was a sign of prayer that the heart would not fail.
Sexuality
Some people may sneeze during the initial phases of sexual arousal. Doctors suspect that the phenomenon might arise from a case of crossed wires in the autonomic nervous system, which regulates a number of functions in the body, including, but not limited to, rousing the genitals during sexual arousal. The nose, like the genitals, contains erectile tissue. This phenomenon may prepare the vomeronasal organ for increased detection of pheromones.
A sneeze has been compared to an orgasm, since both orgasms and sneeze reflexes involve tingling, bodily stretching, tension and release. On this subject, sexologist Vanessa Thompson from the University of Sydney states, "Sneezing and orgasms both produce feel-good chemicals called endorphins but the amount produced by a sneeze is far less than an orgasm."
According to Dr. Holly Boyer from the University of Minnesota, there is a pleasurable effect during a sneeze, where she states, "the muscle tension that builds up in your chest causes pressure, and when you sneeze and the muscles relax, it releases pressure. Anytime you release pressure, it feels good...There's also some evidence that endorphins are released, which causes your body to feel good". Endorphins induce the brain's reward system, and because sneezes occur in a quick burst, so does the pleasure.
In non-humans
Sneezing is not confined to humans or even mammals. Many animals including cats, dogs, chickens and iguanas sneeze. African wild dogs use sneezing as a form of communication, especially when considering a consensus in a pack on whether or not to hunt. Some breeds of dog are predisposed to reverse sneezing.
| Biology and health sciences | Symptoms and signs | Health |
232426 | https://en.wikipedia.org/wiki/Classification | Classification | Classification is the activity of assigning objects to some pre-existing classes or categories. This is distinct from the task of establishing the classes themselves (for example through cluster analysis). Examples include diagnostic tests, identifying spam emails and deciding whether to give someone a driving license.
As well as 'category', synonyms or near-synonyms for 'class' include 'type', 'species', 'order', 'concept', 'taxon', 'group', 'identification' and 'division'.
The meaning of the word 'classification' (and its synonyms) may take on one of several related meanings. It may encompass both classification and the creation of classes, as for example in 'the task of categorizing pages in Wikipedia'; this overall activity is listed under taxonomy. It may refer exclusively to the underlying scheme of classes (which otherwise may be called a taxonomy). Or it may refer to the label given to an object by the classifier.
Classification is a part of many different kinds of activities and is studied from many different points of view including medicine, philosophy, law, anthropology, biology, taxonomy, cognition, communications, knowledge organization, psychology, statistics, machine learning, economics and mathematics.
Binary vs multi-class classification
Methodological work aimed at improving the accuracy of a classifier is commonly divided between cases where there are exactly two classes (binary classification) and cases where there are three or more classes (multiclass classification).
Evaluation of accuracy
Unlike in decision theory, it is assumed that a classifier repeats the classification task over and over. And unlike a lottery, it is assumed that each classification can be either right or wrong; in the theory of measurement, classification is understood as measurement against a nominal scale. Thus it is possible to try to measure the accuracy of a classifier.
Measuring the accuracy of a classifier allows a choice to be made between two alternative classifiers. This is important both when developing a classifier and in choosing which classifier to deploy. There are however many different methods for evaluating the accuracy of a classifier and no general method for determining which method should be used in which circumstances. Different fields have taken different approaches, even in binary classification. In pattern recognition, error rate is popular. The Gini coefficient and KS statistic are widely used in the credit scoring industry. Sensitivity and specificity are widely used in epidemiology and medicine. Precision and recall are widely used in information retrieval.
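As an illustration of how several of these measures relate, the following Python sketch computes them from the four cells of a binary confusion matrix; the counts are made-up toy data.

tp, fp, fn, tn = 90, 10, 5, 895  # made-up counts: true/false positives/negatives

sensitivity = tp / (tp + fn)  # also called recall or true positive rate
specificity = tn / (tn + fp)  # true negative rate
precision = tp / (tp + fp)
error_rate = (fp + fn) / (tp + fp + fn + tn)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"precision={precision:.3f} error_rate={error_rate:.3f}")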
Classifier accuracy depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems (a phenomenon that may be explained by the no-free-lunch theorem).
| Physical sciences | Science basics | Basics and measurement |
232535 | https://en.wikipedia.org/wiki/Hypergraph | Hypergraph | In mathematics, a hypergraph is a generalization of a graph in which an edge can join any number of vertices. In contrast, in an ordinary graph, an edge connects exactly two vertices.
Formally, a directed hypergraph is a pair (X, E), where X is a set of elements called nodes, vertices, points, or elements and E is a set of pairs of subsets of X. Each of these pairs e = (D, C) is called an edge or hyperedge; the vertex subset D is known as its tail or domain, and C as its head or codomain.
The order of a hypergraph (X, E) is the number of vertices in X. The size of the hypergraph is the number of edges in E. The order of an edge e = (D, C) in a directed hypergraph is (|D|, |C|): that is, the number of vertices in its tail followed by the number of vertices in its head.
The definition above generalizes from a directed graph to a directed hypergraph by defining the head or tail of each edge as a set of vertices (C ⊆ X or D ⊆ X) rather than as a single vertex. A graph is then the special case where each of these sets contains only one element. Hence any standard graph theoretic concept that is independent of the edge orders will generalize to hypergraph theory.
An undirected hypergraph is an undirected graph whose edges connect not just two vertices, but an arbitrary number. An undirected hypergraph is also called a set system or a family of sets drawn from the universal set.
Hypergraphs can be viewed as incidence structures. In particular, there is a bipartite "incidence graph" or "Levi graph" corresponding to every hypergraph, and conversely, every bipartite graph can be regarded as the incidence graph of a hypergraph when it is 2-colored and it is indicated which color class corresponds to hypergraph vertices and which to hypergraph edges.
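A minimal Python sketch of this correspondence, using a made-up three-edge hypergraph, builds the Levi graph's edge list by pairing each vertex with every hyperedge that contains it.

# hypergraph as a mapping from hyperedge names to vertex sets (made-up example)
hypergraph = {"e1": {"a", "b", "c"}, "e2": {"b", "d"}, "e3": {"d"}}

# the bipartite incidence (Levi) graph joins each vertex to the hyperedges containing it
levi_edges = [(v, e) for e, members in hypergraph.items() for v in sorted(members)]
print(levi_edges)
# [('a', 'e1'), ('b', 'e1'), ('c', 'e1'), ('b', 'e2'), ('d', 'e2'), ('d', 'e3')]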
Hypergraphs have many other names. In computational geometry, an undirected hypergraph may sometimes be called a range space and then the hyperedges are called ranges.
In cooperative game theory, hypergraphs are called simple games (voting games); this notion is applied to solve problems in social choice theory. In some literature edges are referred to as hyperlinks or connectors.
The collection of hypergraphs is a category with hypergraph homomorphisms as morphisms.
Applications
Undirected hypergraphs are useful in modelling such things as satisfiability problems, databases, machine learning, and Steiner tree problems. They have been extensively used in machine learning tasks as the data model and for classifier regularization. The applications include recommender systems (communities as hyperedges), image retrieval (correlations as hyperedges), and bioinformatics (biochemical interactions as hyperedges). Representative hypergraph learning techniques include hypergraph spectral clustering, which extends spectral graph theory with a hypergraph Laplacian, and hypergraph semi-supervised learning, which introduces an extra hypergraph structural cost to constrain the learning results. For large scale hypergraphs, a distributed framework built using Apache Spark is also available. It can be desirable to study hypergraphs where all hyperedges have the same cardinality; a k-uniform hypergraph is a hypergraph such that all its hyperedges have size k. (In other words, one such hypergraph is a collection of sets, each such set a hyperedge connecting k nodes.) So a 2-uniform hypergraph is a graph, a 3-uniform hypergraph is a collection of unordered triples, and so on.
Directed hypergraphs can be used to model things including telephony applications, detecting money laundering, operations research, and transportation planning. They can also be used to model Horn-satisfiability.
Generalizations of concepts from graphs
Many theorems and concepts involving graphs also hold for hypergraphs, in particular:
Matching in hypergraphs;
Vertex cover in hypergraphs (also known as: transversal);
Line graph of a hypergraph;
Hypergraph grammar - created by augmenting a class of hypergraphs with a set of replacement rules;
Ramsey's theorem;
Erdős–Ko–Rado theorem;
Kruskal–Katona theorem on uniform hypergraphs;
Hall-type theorems for hypergraphs.
In directed hypergraphs: transitive closure, and shortest path problems.
Hypergraph drawing
Although hypergraphs are more difficult to draw on paper than graphs, several researchers have studied methods for the visualization of hypergraphs.
In one possible visual representation for hypergraphs, similar to the standard graph drawing style in which curves in the plane are used to depict graph edges, a hypergraph's vertices are depicted as points, disks, or boxes, and its hyperedges are depicted as trees that have the vertices as their leaves. If the vertices are represented as points, the hyperedges may also be shown as smooth curves that connect sets of points, or as simple closed curves that enclose sets of points.
In another style of hypergraph visualization, the subdivision model of hypergraph drawing, the plane is subdivided into regions, each of which represents a single vertex of the hypergraph. The hyperedges of the hypergraph are represented by contiguous subsets of these regions, which may be indicated by coloring, by drawing outlines around them, or both. An order-n Venn diagram, for instance, may be viewed as a subdivision drawing of a hypergraph with n hyperedges (the curves defining the diagram) and 2^n − 1 vertices (represented by the regions into which these curves subdivide the plane). In contrast with the polynomial-time recognition of planar graphs, it is NP-complete to determine whether a hypergraph has a planar subdivision drawing, but the existence of a drawing of this type may be tested efficiently when the adjacency pattern of the regions is constrained to be a path, cycle, or tree.
An alternative representation of the hypergraph called PAOH is shown in the figure on top of this article. Edges are vertical lines connecting vertices. Vertices are aligned on the left. The legend on the right shows the names of the edges. It has been designed for dynamic hypergraphs but can be used for simple hypergraphs as well.
Hypergraph coloring
Classic hypergraph coloring is assigning one of a set of colors to every vertex of a hypergraph in such a way that each hyperedge contains at least two vertices of distinct colors. In other words, there must be no monochromatic hyperedge with cardinality at least 2. In this sense it is a direct generalization of graph coloring. The minimum number of distinct colors used over all such colorings is called the chromatic number of the hypergraph.
Hypergraphs for which there exists a coloring using up to k colors are referred to as k-colorable. The 2-colorable hypergraphs are exactly the bipartite ones.
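A validity check for such a coloring follows directly from the definition; this is a minimal sketch with illustrative names.

```python
# A coloring is proper when no hyperedge of size >= 2 is monochromatic.

def is_proper_coloring(hyperedges, coloring):
    """coloring is a dict mapping each vertex to a color."""
    for e in hyperedges:
        if len(e) >= 2 and len({coloring[v] for v in e}) == 1:
            return False  # found a monochromatic hyperedge
    return True

edges = [{1, 2, 3}, {3, 4}, {1, 4}]
# two colors suffice here, so this hypergraph is 2-colorable
print(is_proper_coloring(edges, {1: "red", 2: "blue", 3: "red", 4: "blue"}))  # True
print(is_proper_coloring(edges, {1: "red", 2: "red", 3: "red", 4: "blue"}))   # False
```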
There are many generalizations of classic hypergraph coloring. One of them is the so-called mixed hypergraph coloring, when monochromatic edges are allowed. Some mixed hypergraphs are uncolorable for any number of colors. A general criterion for uncolorability is unknown. When a mixed hypergraph is colorable, then the minimum and maximum number of used colors are called the lower and upper chromatic numbers respectively.
Properties of hypergraphs
A hypergraph can have various properties, such as:
Empty - has no edges.
Non-simple (or multiple) - has loops (hyperedges with a single vertex) or repeated edges, which means there can be two or more edges containing the same set of vertices.
Simple - has no loops and no repeated edges.
k-regular - every vertex has degree k, i.e., it is contained in exactly k hyperedges.
2-colorable - its vertices can be partitioned into two classes U and V in such a way that each hyperedge with cardinality at least 2 contains at least one vertex from both classes. An alternative term is Property B.
Two stronger properties are bipartite and balanced.
k-uniform - each hyperedge contains precisely k vertices.
k-partite - the vertices are partitioned into k parts, and each hyperedge contains precisely one vertex of each type.
Every k-partite hypergraph (for k ≥ 2) is both k-uniform and bipartite (and 2-colorable).
Reduced: no hyperedge is a strict subset of another hyperedge; equivalently, every hyperedge is maximal for inclusion. The reduction of a hypergraph is the reduced hypergraph obtained by removing every hyperedge which is included in another hyperedge.
Downward-closed - every subset of an undirected hypergraph's edges is a hyperedge too. A downward-closed hypergraph is usually called an abstract simplicial complex. It is generally not reduced, unless all hyperedges have cardinality 1.
An abstract simplicial complex with the augmentation property is called a matroid.
Laminar: for any two hyperedges, either they are disjoint, or one is included in the other. In other words, the set of hyperedges forms a laminar set family.
Related hypergraphs
Because hypergraph links can have any cardinality, there are several notions of the concept of a subgraph, called subhypergraphs, partial hypergraphs and section hypergraphs.
Let H = (X, E) be the hypergraph consisting of the vertex set
X = {xi | i ∈ Iv}
and having the edge set
E = {ei | i ∈ Ie, ei ⊆ X},
where Iv and Ie are the index sets of the vertices and edges respectively.
A subhypergraph is a hypergraph with some vertices removed. Formally, the subhypergraph HA induced by A ⊆ X is defined as
HA = (A, {e ∩ A | e ∈ E, e ∩ A ≠ ∅}).
An alternative term is the restriction of H to A.
An extension of a subhypergraph is a hypergraph where each hyperedge of H which is partially contained in the subhypergraph HA is fully contained in the extension Ex(HA). Formally
Ex(HA) = (A ∪ A′, E′) with E′ = {e ∈ E | e ∩ A ≠ ∅} and A′ = the union over e ∈ E′ of e \ A.
The partial hypergraph is a hypergraph with some edges removed. Given a subset J ⊆ Ie of the edge index set, the partial hypergraph generated by J is the hypergraph
(X, {ei | i ∈ J}).
Given a subset A ⊆ X, the section hypergraph is the partial hypergraph
H × A = (A, {e ∈ E | e ⊆ A}).
The dual H* of H is a hypergraph whose vertices and edges are interchanged, so that the vertices are given by {ei} and whose edges are given by {Xm}, where
Xm = {ei | xm ∈ ei}.
When a notion of equality is properly defined, as done below, the operation of taking the dual of a hypergraph is an involution, i.e., (H*)* = H.
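A minimal sketch of these derived hypergraphs on the set-of-sets representation; the function names are illustrative.

```python
# Derived hypergraphs: restriction, partial, section, and dual.

def subhypergraph(edges, A):
    """Restriction to A: intersect each edge with A, keep non-empty traces."""
    return {e & frozenset(A) for e in edges if e & frozenset(A)}

def partial(edge_list, J):
    """Partial hypergraph: keep only the edges whose index lies in J."""
    return [e for i, e in enumerate(edge_list) if i in J]

def section(edges, A):
    """Section hypergraph: keep only edges lying entirely inside A."""
    return {e for e in edges if e <= frozenset(A)}

def dual(vertices, edges):
    """One dual vertex per edge; one dual edge per old vertex x,
    collecting the (indices of the) edges that contain x."""
    edges = list(edges)
    return {x: frozenset(i for i, e in enumerate(edges) if x in e)
            for x in vertices}

E = {frozenset({1, 2}), frozenset({2, 3, 4})}
print(subhypergraph(E, {1, 2, 3}))  # {frozenset({1, 2}), frozenset({2, 3})}
print(section(E, {1, 2, 3}))        # {frozenset({1, 2})}
print(dual({1, 2, 3, 4}, E))        # vertex -> indices of incident edges
```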
A connected graph G with the same vertex set as a connected hypergraph H is a host graph for H if every hyperedge of H induces a connected subgraph in G. For a disconnected hypergraph H, G is a host graph if there is a bijection between the connected components of G and of H, such that each connected component G' of G is a host of the corresponding H'.
The 2-section (or clique graph, representing graph, primal graph, Gaifman graph) of a hypergraph is the graph with the same vertices of the hypergraph, and edges between all pairs of vertices contained in the same hyperedge.
Incidence matrix
Let V = {v1, v2, ..., vn} be the vertex set and E = {e1, e2, ..., em} the edge set. Every hypergraph has an n × m incidence matrix I = (bij).
For an undirected hypergraph, bij = 1 if vi ∈ ej, and bij = 0 otherwise.
The transpose of the incidence matrix defines a hypergraph H* = (V*, E*) called the dual of H, where V* is an m-element set and E* is an n-element set of subsets of V*. For vj* ∈ V* and ei* ∈ E*, vj* ∈ ei* if and only if bij = 1.
For a directed hypergraph, the heads and tails of each hyperedge ej are denoted by H(ej) and T(ej) respectively, and bij = −1 if vi ∈ T(ej), bij = 1 if vi ∈ H(ej), and bij = 0 otherwise.
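A sketch of both matrices under the conventions just stated; the names are illustrative, and directed hyperedges are given as (tails, heads) pairs.

```python
# Incidence matrix: rows indexed by vertices, columns by hyperedges.

def incidence_matrix(vertices, edges):
    return [[1 if v in e else 0 for e in edges] for v in vertices]

def directed_incidence_matrix(vertices, edges):
    # each directed hyperedge is a (tails, heads) pair of vertex sets
    return [[-1 if v in t else (1 if v in h else 0) for (t, h) in edges]
            for v in vertices]

V = ["a", "b", "c"]
E = [{"a", "b"}, {"b", "c"}, {"a", "b", "c"}]
M = incidence_matrix(V, E)
# The transpose is the incidence matrix of the dual hypergraph.
M_dual = [list(col) for col in zip(*M)]
print(M)       # [[1, 0, 1], [1, 1, 1], [0, 1, 1]]
print(M_dual)  # [[1, 1, 0], [0, 1, 1], [1, 1, 1]]
```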
Incidence graph
A hypergraph H may be represented by a bipartite graph BG as follows: the sets X and E are the parts of BG, and (x1, e1) are connected with an edge if and only if vertex x1 is contained in edge e1 in H.
Conversely, any bipartite graph with fixed parts and no unconnected nodes in the second part represents some hypergraph in the manner described above. This bipartite graph is also called incidence graph.
Adjacency matrix
A parallel for the adjacency matrix of a hypergraph can be drawn from the adjacency matrix of a graph. In the case of a graph, the adjacency matrix is a square matrix which indicates whether pairs of vertices are adjacent. Likewise, we can define an adjacency matrix A = (aij) for a hypergraph in general, where the hyperedges ek have real weights wek, with aij given by the weight of a hyperedge containing both vi and vj, and aij = 0 if no hyperedge contains the pair.
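One way to realize this is to sum the weights of all shared hyperedges; the aggregation rule is a choice made for this sketch, not something fixed by the text.

```python
# Weighted hypergraph adjacency matrix: entry (i, j) accumulates the
# weights of the hyperedges containing both v_i and v_j.

def adjacency_matrix(vertices, weighted_edges):
    n = len(vertices)
    A = [[0.0] * n for _ in range(n)]
    for e, w in weighted_edges:
        members = [i for i, v in enumerate(vertices) if v in e]
        for i in members:
            for j in members:
                if i != j:
                    A[i][j] += w
    return A

print(adjacency_matrix(["a", "b", "c"],
                       [({"a", "b", "c"}, 0.5), ({"a", "b"}, 1.0)]))
# [[0.0, 1.5, 0.5], [1.5, 0.0, 0.5], [0.5, 0.5, 0.0]]
```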
Cycles
In contrast with ordinary undirected graphs for which there is a single natural notion of cycles and acyclic graphs, there are multiple natural non-equivalent definitions of acyclicity for hypergraphs which collapse to ordinary graph acyclicity for the special case of ordinary graphs.
A first definition of acyclicity for hypergraphs was given by Claude Berge: a hypergraph is Berge-acyclic if its incidence graph (the bipartite graph defined above) is acyclic. This definition is very restrictive: for instance, if a hypergraph has some pair of distinct vertices v, v′ and some pair of distinct hyperedges f, f′ such that v, v′ ∈ f and v, v′ ∈ f′, then it is Berge-cyclic. Berge-cyclicity can obviously be tested in linear time by an exploration of the incidence graph.
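A sketch of such a test: build the incidence graph and look for any cycle with a depth-first search, which is linear in the total size of the hypergraph (names illustrative).

```python
# Berge-acyclicity: the bipartite incidence graph must be a forest.

def is_berge_acyclic(edges):
    edges = [frozenset(e) for e in edges]
    adj = {}  # vertices on one side, edge indices on the other
    for i, e in enumerate(edges):
        for v in e:
            adj.setdefault(("v", v), []).append(("e", i))
            adj.setdefault(("e", i), []).append(("v", v))
    seen = set()
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, None)]
        while stack:
            node, parent = stack.pop()
            for nxt in adj[node]:
                if nxt == parent:
                    continue
                if nxt in seen:
                    return False  # a cycle in the incidence graph
                seen.add(nxt)
                stack.append((nxt, node))
    return True

print(is_berge_acyclic([{1, 2}, {2, 3}]))        # True: a path
print(is_berge_acyclic([{1, 2, 3}, {1, 2, 4}]))  # False: two shared vertices
```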
We can define a weaker notion of hypergraph acyclicity, later termed α-acyclicity. This notion of acyclicity is equivalent to the hypergraph being conformal (every clique of the primal graph is covered by some hyperedge) and its primal graph being chordal; it is also equivalent to reducibility to the empty graph through the GYO algorithm (also known as Graham's algorithm), a confluent iterative process which removes hyperedges using a generalized definition of ears. In the domain of database theory, it is known that a database schema enjoys certain desirable properties if its underlying hypergraph is α-acyclic. Besides, α-acyclicity is also related to the expressiveness of the guarded fragment of first-order logic.
We can test in linear time if a hypergraph is α-acyclic.
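A sketch of the GYO reduction described above, in a straightforward quadratic form; the linear-time variants mentioned in the text need more careful bookkeeping.

```python
# GYO reduction: repeatedly delete vertices occurring in exactly one
# hyperedge, and delete hyperedges that are empty or contained in
# another hyperedge. The hypergraph is alpha-acyclic exactly when the
# process empties it.

def is_alpha_acyclic(edges):
    edges = [set(e) for e in edges]
    changed = True
    while changed:
        changed = False
        for e in edges:  # ear vertices: occur in exactly one edge
            for v in list(e):
                if sum(v in f for f in edges) == 1:
                    e.discard(v)
                    changed = True
        for i in range(len(edges) - 1, -1, -1):  # dominated or empty edges
            if not edges[i] or any(i != j and edges[i] <= edges[j]
                                   for j in range(len(edges))):
                del edges[i]
                changed = True
    return not edges

print(is_alpha_acyclic([{1, 2, 3}, {2, 3, 4}]))    # True
print(is_alpha_acyclic([{1, 2}, {2, 3}, {1, 3}]))  # False: a triangle
```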
Note that α-acyclicity has the counter-intuitive property that adding hyperedges to an α-cyclic hypergraph may make it α-acyclic (for instance, adding a hyperedge containing all vertices of the hypergraph will always make it α-acyclic). Motivated in part by this perceived shortcoming, Ronald Fagin defined the stronger notions of β-acyclicity and γ-acyclicity. We can state β-acyclicity as the requirement that all subhypergraphs of the hypergraph are α-acyclic, which is equivalent to an earlier definition by Graham. The notion of γ-acyclicity is a more restrictive condition which is equivalent to several desirable properties of database schemas and is related to Bachman diagrams. Both β-acyclicity and γ-acyclicity can be tested in polynomial time.
Those four notions of acyclicity are comparable: Berge-acyclicity implies γ-acyclicity which implies β-acyclicity which implies α-acyclicity. However, none of the reverse implications hold, so those four notions are different.
Isomorphism, symmetry, and equality
A hypergraph homomorphism is a map from the vertex set of one hypergraph to another such that each edge maps to one other edge.
A hypergraph H = (X, E) is isomorphic to a hypergraph G = (Y, F), written as H ≃ G, if there exists a bijection
φ : X → Y
and a permutation π of the edge index set such that φ(ei) = fπ(i).
The bijection φ is then called the isomorphism of the graphs. Note that
H ≃ G if and only if H* ≃ G*.
When the edges of a hypergraph are explicitly labeled, one has the additional notion of strong isomorphism. One says that H is strongly isomorphic to G if the permutation π is the identity. One then writes H ≅ G. Note that all strongly isomorphic graphs are isomorphic, but not vice versa.
When the vertices of a hypergraph are explicitly labeled, one has the notions of equivalence, and also of equality. One says that H is equivalent to G, and writes H ≡ G, if the isomorphism φ maps each labeled vertex xi to the correspondingly labeled vertex yi. Note that
H ≡ G if and only if H* ≅ G*.
If, in addition, the permutation π is the identity, one says that H equals G, and writes H = G. Note that, with this definition of equality, graphs are self-dual: (H*)* = H.
A hypergraph automorphism is an isomorphism from a vertex set into itself, that is a relabeling of vertices. The set of automorphisms of a hypergraph H (= (X, E)) is a group under composition, called the automorphism group of the hypergraph and written Aut(H).
Examples
Consider the hypergraph H with vertices {a, b, c, d} and edges
e1 = {a, b}, e2 = {b, c}, e3 = {c, d}, e4 = {d, a}, e5 = {b, d}, e6 = {a, c}
and the hypergraph G with vertices {α, β, γ, δ} and edges
f1 = {α, β}, f2 = {β, γ}, f3 = {γ, δ}, f4 = {δ, α}, f5 = {α, γ}, f6 = {β, δ}.
Then clearly H and G are isomorphic (with φ(a) = α, etc.), but they are not strongly isomorphic. So, for example, in H, vertex a meets edges 1, 4 and 6, so that e1 ∩ e4 ∩ e6 = {a}.
In G, there does not exist any vertex that meets edges 1, 4 and 6: f1 ∩ f4 ∩ f6 = ∅.
In this example, H and G are equivalent, H ≡ G, and the duals are strongly isomorphic: H* ≅ G*.
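Under the edge sets as reconstructed above, a brute-force check over all 24 vertex bijections can confirm both claims; this is an illustrative sketch, not an efficient isomorphism test.

```python
# Verify: H ≃ G holds for some edge permutation, but no bijection
# gives a strong isomorphism (identity edge permutation).
from itertools import permutations

H = [{"a", "b"}, {"b", "c"}, {"c", "d"}, {"d", "a"}, {"b", "d"}, {"a", "c"}]
G = [{"α", "β"}, {"β", "γ"}, {"γ", "δ"}, {"δ", "α"}, {"α", "γ"}, {"β", "δ"}]
X = sorted({v for e in H for v in e})
Y = sorted({v for e in G for v in e})

isomorphic = strongly = False
for perm in permutations(Y):
    phi = dict(zip(X, perm))
    image = [{phi[v] for v in e} for e in H]
    if sorted(map(sorted, image)) == sorted(map(sorted, G)):
        isomorphic = True  # edges match up to some permutation
    if image == G:
        strongly = True    # edges match in order (identity permutation)

print(isomorphic, strongly)  # True False
```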
Symmetry
The rank of a hypergraph is the maximum cardinality of any of the edges in the hypergraph. If all edges have the same cardinality k, the hypergraph is said to be uniform or k-uniform, or is called a k-hypergraph. A graph is just a 2-uniform hypergraph.
The degree d(v) of a vertex v is the number of edges that contain it. H is k-regular if every vertex has degree k.
The dual of a uniform hypergraph is regular and vice versa.
Two vertices x and y of H are called symmetric if there exists an automorphism φ such that φ(x) = y. Two edges ei and ej are said to be symmetric if there exists an automorphism φ such that φ(ei) = ej.
A hypergraph is said to be vertex-transitive (or vertex-symmetric) if all of its vertices are symmetric. Similarly, a hypergraph is edge-transitive if all edges are symmetric. If a hypergraph is both edge- and vertex-symmetric, then the hypergraph is simply transitive.
Because of hypergraph duality, the study of edge-transitivity is identical to the study of vertex-transitivity.
Partitions
A partition theorem due to E. Dauber states that, for an edge-transitive hypergraph H = (X, E), there exists a partition
(X1, X2, ..., XK)
of the vertex set X such that the subhypergraph generated by Xk is transitive for each 1 ≤ k ≤ K, and such that the ranks of these subhypergraphs sum to r(H),
where r(H) is the rank of H.
As a corollary, an edge-transitive hypergraph that is not vertex-transitive is bicolorable.
Graph partitioning (and in particular, hypergraph partitioning) has many applications to IC design and parallel computing. Efficient and scalable hypergraph partitioning algorithms are also important for processing large-scale hypergraphs in machine learning tasks.
Further generalizations
One possible generalization of a hypergraph is to allow edges to point at other edges. There are two variations of this generalization. In one, the edges consist not only of a set of vertices, but may also contain subsets of vertices, subsets of subsets of vertices and so on ad infinitum. In essence, every edge is just an internal node of a tree or directed acyclic graph, and vertices are the leaf nodes. A hypergraph is then just a collection of trees with common, shared nodes (that is, a given internal node or leaf may occur in several different trees). Conversely, every collection of trees can be understood as this generalized hypergraph. Since trees are widely used throughout computer science and many other branches of mathematics, one could say that hypergraphs appear naturally as well. So, for example, this generalization arises naturally as a model of term algebra; edges correspond to terms and vertices correspond to constants or variables.
For such a hypergraph, set membership then provides an ordering, but the ordering is neither a partial order nor a preorder, since it is not transitive. The graph corresponding to the Levi graph of this generalization is a directed acyclic graph. Consider, for example, the generalized hypergraph whose vertex set is V = {a, b} and whose edges are e1 = {a, b} and e2 = {a, e1}. Then, although b ∈ e1 and e1 ∈ e2, it is not true that b ∈ e2. However, the transitive closure of set membership for such hypergraphs does induce a partial order, and "flattens" the hypergraph into a partially ordered set.
Alternately, edges can be allowed to point at other edges, irrespective of the requirement that the edges be ordered as directed, acyclic graphs. This allows graphs with edge-loops, which need not contain vertices at all. For example, consider the generalized hypergraph consisting of two edges e1 and e2, and zero vertices, so that e1 = {e2} and e2 = {e1}. As this loop is infinitely recursive, sets that are the edges violate the axiom of foundation. In particular, there is no transitive closure of set membership for such hypergraphs. Although such structures may seem strange at first, they can be readily understood by noting that the equivalent generalization of their Levi graph is no longer bipartite, but is rather just some general directed graph.
The generalized incidence matrix for such hypergraphs is, by definition, a square matrix, of a rank equal to the total number of vertices plus edges. Thus, for the above example, the incidence matrix is simply
0 1
1 0
Hamstring
https://en.wikipedia.org/wiki/Hamstring
A hamstring is any one of the three posterior thigh muscles in human anatomy between the hip and the knee: from medial to lateral, the semimembranosus, semitendinosus and biceps femoris.
Etymology
The word "ham" is derived from the Old English ham or hom meaning the hollow or bend of the knee, from a Germanic base where it meant "crooked". It gained the meaning of the leg of an animal around the 15th century. String refers to tendons, and thus the hamstrings' string-like tendons felt on either side of the back of the knee.
Criteria
The common criteria of any hamstring muscles are:
Muscles should originate from the ischial tuberosity.
Muscles should insert over the knee joint, on the tibia or on the fibula.
Muscles should be innervated by the tibial branch of the sciatic nerve.
Muscles should participate in flexion of the knee joint and extension of the hip joint.
Those muscles which fulfill all of the four criteria are called true hamstrings.
The adductor magnus reaches only up to the adductor tubercle of the femur, but it is included amongst the hamstrings because the tibial collateral ligament of the knee joint morphologically is the degenerated tendon of this muscle. The ligament is attached to the medial epicondyle, two millimeters from the adductor tubercle.
Structure
The three muscles of the posterior thigh (semitendinosus, semimembranosus, biceps femoris) flex (bend) the knee, while all but the biceps femoris extend (straighten) the hip. The three 'true' hamstrings cross both the hip and the knee joint and are therefore involved in knee flexion and hip extension. The short head of the biceps femoris crosses only one joint (knee) and is therefore not involved in hip extension. With its divergent origin and innervation, it is sometimes excluded from the 'hamstring' characterization.
A portion of the adductor magnus is sometimes considered a part of the hamstrings.
Function
The hamstrings cross and act upon two joints – the hip and the knee – and as such they are termed biarticular muscles.
Semitendinosus and semimembranosus extend the hip when the trunk is fixed; they also flex the knee and medially (inwardly) rotate the lower leg when the knee is bent.
The long head of the biceps femoris extends the hip, as when beginning to walk; both short and long heads flex the knee and laterally (outwardly) rotate the lower leg when the knee is bent.
The hamstrings play a crucial role in many daily activities such as walking, running, jumping, and controlling some movement in the gluteus. In walking, they are most important as an antagonist to the quadriceps in the deceleration of knee extension.
Clinical significance
Sports running injuries
A common running injury in several sports, excessive stretch of a hamstring results from extensive hip flexion while the knee is extended. During sprinting, a hamstring injury may occur from excessive muscle strain during eccentric contraction late in the leg swing phase. The overall incidence of a hamstring injury in sports and professional dancers is about two per 1000 hours of performance. In some sports, a hamstring injury occurs at the incidence of 19% of all sports injuries, and results in an average time loss from competition of 24 days.
Imaging
Imaging the hamstring muscles is usually performed with an ultrasound and/or MRI. The biceps femoris is most commonly injured, followed by semitendinosus. Semimembranosus injury is rare. Imaging is useful in differentiating the grade of strain, especially if the muscle is completely torn. In this setting, the level and degree of retraction can be determined, serving as a useful roadmap prior to any surgery. Those with a hamstring strain of greater than in length have a greater risk of recurrence.
Use in surgery
The distal semitendinosus tendon is one of the tendons that can be used in the surgical procedure ACL reconstruction. In this procedure, a piece of it is used to replace the anterior cruciate ligament (ACL). The ACL is one of the four major ligaments in the knee, which also include the posterior cruciate ligament (PCL), medial collateral ligament (MCL), and lateral collateral ligament (LCL).
Cubic foot
https://en.wikipedia.org/wiki/Cubic%20foot
The cubic foot (symbol ft3 or cu ft) is an imperial and US customary (non-metric) unit of volume, used in the United States and the United Kingdom. It is defined as the volume of a cube with sides of one foot (0.3048 m) in length. Its volume is exactly 28.316846592 litres (about 1/35 of a cubic metre).
Conversions
Symbols and abbreviations
The IEEE symbol for the cubic foot is ft3. The following abbreviations are also used: cubic feet, cubic foot, cubic ft, cu feet, cu foot, cu ft, cu.ft, cuft, cb ft, cb.ft, cbft, cbf, feet3, foot3, ft3.
Larger multiples are in common usage in commerce and industry in the United States:
CCF or HCF: Centum (Latin hundred) cubic feet; i.e., 100 cubic feet.
Used in the billing of natural gas and water delivered to households.
MCF: Mille (Latin thousand) cubic feet; i.e., 1,000 cubic feet.
MMCF: Mille mille (= million) cubic feet; i.e., 1,000,000 cubic feet.
MMCFD: MMCF per day; i.e., 1,000,000 cubic feet per day.
Used in the oil and gas industry.
BCF or TMC: Billion or thousand million cubic feet; i.e., 1,000,000,000 cubic feet.
TMC is usually used for referring to storage capacity and actual storage volume of storage dams.
TCF: Trillion cubic feet; i.e., 1,000,000,000,000 cubic feet.
Used in the oil and gas industry.
Cubic foot per second and related flow rates
The IEEE symbol for the cubic foot per second is ft3/s. The following other abbreviations are also sometimes used:
ft3/sec
cu ft/s
cfs or CFS
cusec
second-feet
The flow or discharge of rivers, i.e., the volume of water passing a location per unit of time, is commonly expressed in units of cubic feet per second or cubic metres per second.
Cusec is a unit of flow rate, used mostly in the United States in the context of water flow, particularly of rivers and canals.
Conversions: 1 ft3/s = 0.028316846592 m3/s = 28.316846592 L/s ≈ 1.699 m3/min ≈ 448.83 US gallons per minute.
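These conversions all follow from the exact definition of the foot (0.3048 m); a small sketch with illustrative names:

```python
CUBIC_FOOT_M3 = 0.3048 ** 3  # exactly 0.028316846592 m³

def cfs_conversions(value_cfs):
    """Convert a flow in cubic feet per second to common units."""
    m3_per_s = value_cfs * CUBIC_FOOT_M3
    return {
        "m3/s": m3_per_s,
        "L/s": m3_per_s * 1000,
        "m3/min": m3_per_s * 60,
        # 1 ft³ = 1728 in³ and 1 US gal = 231 in³, so 1728/231 gal per ft³
        "US gal/min": value_cfs * (1728 / 231) * 60,
    }

print(cfs_conversions(1))  # ~0.0283 m3/s, 28.32 L/s, 1.699 m3/min, 448.8 gal/min
```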
Cubic foot per minute
The IEEE symbol for the cubic foot per minute is ft3/min. The following abbreviations are used:
cu ft/min
cfm or CFM
cfpm or CFPM
Cubic feet per minute is used to measure the amount of air that is being delivered, and is a common metric used for carburetors, pneumatic tools, and air-compressor systems.
Standard cubic foot
A standard cubic foot (abbreviated scf) is a measure of quantity of gas, sometimes defined in terms of standard temperature and pressure as a cubic foot of volume at and of pressure.
Reconnaissance aircraft
https://en.wikipedia.org/wiki/Reconnaissance%20aircraft
A reconnaissance aircraft (colloquially, a spy plane) is a military aircraft designed or adapted to perform aerial reconnaissance with roles including collection of imagery intelligence (including using photography), signals intelligence, as well as measurement and signature intelligence. Modern technology has also enabled some aircraft and UAVs to carry out real-time surveillance in addition to general intelligence gathering.
Before the development of devices such as radar, military forces relied on reconnaissance aircraft for visual observation and scouting of enemy movement. An example is the PBY Catalina maritime patrol flying boat used by the Allies in World War II: a flight of U.S. Navy Catalinas spotted part of the Japanese fleet approaching Midway Island, beginning the Battle of Midway.
History
Prior to the 20th century, machines for powered and controllable flight were not available to military forces, but some attempts were made to use lighter than air craft. During the Napoleonic Wars and Franco-Prussian War, balloons were used for aerial reconnaissance by the French.
In World War I, aircraft were deployed during early phases of battle in reconnaissance roles as 'eyes of the army' to aid ground forces. Aerial reconnaissance from this time through 1945 was mostly carried out by adapted versions of standard fighters and bombers equipped with film cameras. Photography became the primary and best-known method of intelligence collection for reconnaissance aircraft by the end of World War II.
World War I also saw use of floatplanes to locate enemy warships. After the battle of Jutland demonstrated the limitations of seaplane tenders, provisions were made for capital ships to carry, launch, and recover observation seaplanes. These seaplanes could scout for enemy warships beyond the visual range of the ship's lookouts, and could spot the fall of shot during long range artillery engagements. Observation seaplanes were replaced by helicopters after World War II.
After World War II and during the Cold War the United States developed several dedicated reconnaissance aircraft designs, including the U-2 and SR-71, to monitor the nuclear arsenal of the Soviet Union. Other types of reconnaissance aircraft were built for specialized roles in signals intelligence and electronic monitoring, such as the RB-47, RB-57, Boeing RC-135 and the Ryan Model 147 drones.
Since the Cold War, much of the strategic reconnaissance aircraft role has passed over to satellites, and the tactical role to unmanned aerial vehicles (UAVs), as demonstrated by their successful use by the United States during Operation Desert Storm.
Burn
https://en.wikipedia.org/wiki/Burn
A burn is an injury to skin, or other tissues, caused by heat, cold, electricity, chemicals, friction, or ionizing radiation (such as sunburn, caused by ultraviolet radiation). Most burns are due to heat from hot liquids (called scalding), solids, or fire. Burns occur mainly in the home or the workplace. In the home, risks are associated with domestic kitchens, including stoves, flames, and hot liquids. In the workplace, risks are associated with fire and chemical and electric burns. Alcoholism and smoking are other risk factors. Burns can also occur as a result of self-harm or violence between people (assault).
Burns that affect only the superficial skin layers are known as superficial or first-degree burns. They appear red without blisters, and pain typically lasts around three days. When the injury extends into some of the underlying skin layer, it is a partial-thickness or second-degree burn. Blisters are frequently present and they are often very painful. Healing can require up to eight weeks and scarring may occur. In a full-thickness or third-degree burn, the injury extends to all layers of the skin. Often there is no pain and the burnt area is stiff. Healing typically does not occur on its own. A fourth-degree burn additionally involves injury to deeper tissues, such as muscle, tendons, or bone. The burn is often black and frequently leads to loss of the burned part.
Burns are generally preventable. Treatment depends on the severity of the burn. Superficial burns may be managed with little more than simple pain medication, while major burns may require prolonged treatment in specialized burn centers. Cooling with tap water may help pain and decrease damage; however, prolonged cooling may result in low body temperature. Partial-thickness burns may require cleaning with soap and water, followed by dressings. It is not clear how to manage blisters, but it is probably reasonable to leave them intact if small and drain them if large. Full-thickness burns usually require surgical treatments, such as skin grafting. Extensive burns often require large amounts of intravenous fluid, due to capillary fluid leakage and tissue swelling. The most common complications of burns involve infection. Tetanus toxoid should be given if not up to date.
In 2015, fire and heat resulted in 67 million injuries. This resulted in about 2.9 million hospitalizations and 176,000 deaths. Among women in much of the world, burns are most commonly related to the use of open cooking fires or unsafe cook stoves. Among men, they are more likely a result of unsafe workplace conditions. Most deaths due to burns occur in the developing world, particularly in Southeast Asia. While large burns can be fatal, treatments developed since 1960 have improved outcomes, especially in children and young adults. In the United States, approximately 96% of those admitted to a burn center survive their injuries. The long-term outcome is related to the size of burn and the age of the person affected.
History
Cave paintings from more than 3,500 years ago document burns and their management. The earliest Egyptian records on treating burns describe dressings prepared with milk from mothers of baby boys, and the 1500 BCE Edwin Smith Papyrus describes treatments using honey and the salve of resin. Many other treatments have been used over the ages, including the use of tea leaves by the Chinese documented to 600 BCE, pig fat and vinegar by Hippocrates documented to 400 BCE, and wine and myrrh by Celsus documented to the 1st century CE. French barber-surgeon Ambroise Paré was the first to describe different degrees of burns in the 1500s. Guillaume Dupuytren expanded these degrees into six different severities in 1832.
The first hospital to treat burns opened in 1843 in London, England, and the development of modern burn care began in the late 1800s and early 1900s. During World War I, Henry D. Dakin and Alexis Carrel developed standards for the cleaning and disinfecting of burns and wounds using sodium hypochlorite solutions, which significantly reduced mortality. In the 1940s, the importance of early excision and skin grafting was acknowledged, and around the same time, fluid resuscitation and formulas to guide it were developed. In the 1970s, researchers demonstrated the significance of the hypermetabolic state that follows large burns.
The "Evans formula", described in 1952, was the first burn resuscitation formula based on body weight and surface area (BSA) damaged. The first 24 hours of treatment entails 1ml/kg/% BSA of crystalloids plus 1 ml/kg/% BSA colloids plus 2000ml glucose in water, and in the next 24 hours, crystalloids at 0.5 ml/kg/% BSA, colloids at 0.5 ml/kg/% BSA, and the same amount of glucose in water.
Signs and symptoms
The characteristics of a burn depend upon its depth. Superficial burns cause pain lasting two or three days, followed by peeling of the skin over the next few days. Individuals with more severe burns may indicate discomfort or complain of feeling pressure rather than pain. Full-thickness burns may be entirely insensitive to light touch or puncture. While superficial burns are typically red in color, severe burns may be pink, white or black. Burns around the mouth or singed hair inside the nose may indicate that burns to the airways have occurred, but these findings are not definitive. More worrisome signs include: shortness of breath, hoarseness, and stridor or wheezing. Itchiness is common during the healing process, occurring in up to 90% of adults and nearly all children. Numbness or tingling may persist for a prolonged period of time after an electrical injury. Burns may also produce emotional and psychological distress.
Cause
Burns are caused by a variety of external sources classified as thermal (heat-related), chemical, electrical, and radiation. In the United States, the most common causes of burns are: fire or flame (44%), scalds (33%), hot objects (9%), electricity (4%), and chemicals (3%). Most (69%) burn injuries occur at home or at work (9%), and most are accidental, with 2% due to assault by another, and 1–2% resulting from a suicide attempt. These sources can cause inhalation injury to the airway and/or lungs, occurring in about 6%.
Burn injuries occur more commonly among the poor. Smoking and alcoholism are other risk factors. Fire-related burns are generally more common in colder climates. Specific risk factors in the developing world include cooking with open fires or on the floor as well as developmental disabilities in children and chronic diseases in adults.
Thermal
In the United States, fire and hot liquids are the most common causes of burns. Of house fires that result in death, smoking causes 25% and heating devices cause 22%. Almost half of injuries are due to efforts to fight a fire. Scalding is caused by hot liquids or gases and most commonly occurs from exposure to hot drinks, high temperature tap water in baths or showers, hot cooking oil, or steam. Scald injuries are most common in children under the age of five and, in the United States and Australia, this population makes up about two-thirds of all burns. Contact with hot objects is the cause of about 20–30% of burns in children. Generally, scalds are first- or second-degree burns, but third-degree burns may also result, especially with prolonged contact. Fireworks are a common cause of burns during holiday seasons in many countries. This is a particular risk for adolescent males. In the United States, for non-fatal burn injuries to children, white males under the age of 6 comprise most cases. Thermal burns from grabbing/touching and spilling/splashing were the most common type of burn and mechanism, while the bodily areas most impacted were hands and fingers followed by head/neck.
Chemical
Chemical burns can be caused by over 25,000 substances, most of which are either a strong base (55%) or a strong acid (26%). Most chemical burn deaths are secondary to ingestion. Common agents include: sulfuric acid as found in toilet cleaners, sodium hypochlorite as found in bleach, and halogenated hydrocarbons as found in paint remover, among others. Hydrofluoric acid can cause particularly deep burns that may not become symptomatic until some time after exposure. Formic acid may cause the breakdown of significant numbers of red blood cells.
Electrical
Electrical burns or injuries are classified as high voltage (greater than or equal to 1000 volts), low voltage (less than 1000 volts), or as flash burns secondary to an electric arc. The most common causes of electrical burns in children are electrical cords (60%) followed by electrical outlets (14%). Lightning may also result in electrical burns. Risk factors for being struck include involvement in outdoor activities such as mountain climbing, golf and field sports, and working outside. Mortality from a lightning strike is about 10%.
While electrical injuries primarily result in burns, they may also cause fractures or dislocations secondary to blunt force trauma or muscle contractions. In high voltage injuries, most damage may occur internally and thus the extent of the injury cannot be judged by examination of the skin alone. Contact with either low voltage or high voltage may produce cardiac arrhythmias or cardiac arrest.
Radiation
Radiation burns may be caused by protracted exposure to ultraviolet light (such as from the sun, tanning booths or arc welding) or from ionizing radiation (such as from radiation therapy, X-rays or radioactive fallout). Sun exposure is the most common cause of radiation burns and the most common cause of superficial burns overall. There is significant variation in how easily people sunburn based on their skin type. Skin effects from ionizing radiation depend on the amount of exposure to the area, with hair loss seen after 3 Gy, redness seen after 10 Gy, wet skin peeling after 20 Gy, and necrosis after 30 Gy. Redness, if it occurs, may not appear until some time after exposure. Radiation burns are treated the same as other burns. Microwave burns occur via thermal heating caused by the microwaves. While exposures as short as two seconds may cause injury, overall this is an uncommon occurrence.
Non-accidental
In those hospitalized from scalds or fire burns, 3–10% are from assault. Reasons include: child abuse, personal disputes, spousal abuse, elder abuse, and business disputes. An immersion injury or immersion scald may indicate child abuse. It is created when an extremity, or sometimes the buttocks, are held under the surface of hot water. It typically produces a sharp upper border and is often symmetrical; such burns are known as "sock burns", "glove burns", or "zebra stripes" - where folds have prevented certain areas from burning. Deliberate cigarette burns are most often found on the face, or the back of the hands and feet. Other high-risk signs of potential abuse include: circumferential burns, the absence of splash marks, a burn of uniform depth, and association with other signs of neglect or abuse.
Bride burning, a form of domestic violence, occurs in some cultures, such as India where women have been burned in revenge for what the husband or his family consider an inadequate dowry. In Pakistan, acid burns represent 13% of intentional burns, and are frequently related to domestic violence. Self-immolation (setting oneself on fire) is also used as a form of protest in various parts of the world.
Pathophysiology
At temperatures greater than , proteins begin losing their three-dimensional shape and start breaking down. This results in cell and tissue damage. Many of the direct health effects of a burn are caused by failure of the skin to perform its normal functions, which include: protection from bacteria, skin sensation, body temperature regulation, and prevention of evaporation of the body's water. Disruption of these functions can lead to infection, loss of skin sensation, hypothermia, and hypovolemic shock via dehydration (i.e. water in the body evaporated away). Disruption of cell membranes causes cells to lose potassium to the spaces outside the cell and to take up water and sodium.
In large burns (over 30% of the total body surface area), there is a significant inflammatory response. This results in increased leakage of fluid from the capillaries, and subsequent tissue edema. This causes overall blood volume loss, with the remaining blood suffering significant plasma loss, making the blood more concentrated. Poor blood flow to organs like the kidneys and gastrointestinal tract may result in kidney failure and stomach ulcers.
Increased levels of catecholamines and cortisol can cause a hypermetabolic state that can last for years. This is associated with increased cardiac output, metabolism, a fast heart rate, and poor immune function.
Diagnosis
Burns can be classified by depth, mechanism of injury, extent, and associated injuries. The most commonly used classification is based on the depth of injury. The depth of a burn is usually determined via examination, although a biopsy may also be used. It may be difficult to accurately determine the depth of a burn on a single examination and repeated examinations over a few days may be necessary. In those who have a headache or are dizzy and have a fire-related burn, carbon monoxide poisoning should be considered. Cyanide poisoning should also be considered.
Size
The size of a burn is measured as a percentage of total body surface area (TBSA) affected by partial thickness or full thickness burns. First-degree burns that are only red in color and are not blistering are not included in this estimation. Most burns (70%) involve less than 10% of the TBSA.
There are a number of methods to determine the TBSA, including the Wallace rule of nines, Lund and Browder chart, and estimations based on a person's palm size. The rule of nines is easy to remember but only accurate in people over 16 years of age. More accurate estimates can be made using Lund and Browder charts, which take into account the different proportions of body parts in adults and children. The size of a person's handprint (including the palm and fingers) is approximately 1% of their TBSA.
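As a sketch of the rule of nines in adults, using the commonly quoted region percentages (head 9%, each arm 9%, each leg 18%, anterior and posterior trunk 18% each, perineum 1%); as noted above, the rule is only accurate in people over 16, and the names below are illustrative.

```python
ADULT_RULE_OF_NINES = {
    "head": 9, "anterior trunk": 18, "posterior trunk": 18,
    "left arm": 9, "right arm": 9,
    "left leg": 18, "right leg": 18, "perineum": 1,
}  # sums to 100

def estimate_tbsa(burned_regions):
    """Rough %TBSA for a list of fully burned regions."""
    return sum(ADULT_RULE_OF_NINES[region] for region in burned_regions)

print(estimate_tbsa(["right arm", "anterior trunk"]))  # 27 (% TBSA)
```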
Severity
To determine the need for referral to a specialized burn unit, the American Burn Association devised a classification system. Under this system, burns can be classified as major, moderate, and minor. This is assessed based on a number of factors, including total body surface area affected, the involvement of specific anatomical zones, the age of the person, and associated injuries. Minor burns can typically be managed at home, moderate burns are often managed in a hospital, and major burns are managed by a burn center. Severe burn injury represents one of the most devastating forms of trauma. Despite improvements in burn care, patients can be left to suffer for as many as three years post-injury.
Prevention
Historically, about half of all burns were deemed preventable. Burn prevention programs have significantly decreased rates of serious burns. Preventive measures include: limiting hot water temperatures, smoke alarms, sprinkler systems, proper construction of buildings, and fire-resistant clothing. Experts recommend setting water heaters below . Other measures to prevent scalds include using a thermometer to measure bath water temperatures, and splash guards on stoves. While the effect of the regulation of fireworks is unclear, there is tentative evidence of benefit with recommendations including the limitation of the sale of fireworks to children.
Management
Resuscitation begins with the assessment and stabilization of the person's airway, breathing and circulation. If inhalation injury is suspected, early intubation may be required. This is followed by care of the burn wound itself. People with extensive burns may be wrapped in clean sheets until they arrive at a hospital. As burn wounds are prone to infection, a tetanus booster shot should be given if an individual has not been immunized within the last five years. In the United States, 95% of burns that present to the emergency department are treated and discharged; 5% require hospital admission. With major burns, early feeding is important. Protein intake should also be increased, and trace elements and vitamins are often required. Hyperbaric oxygenation may be useful in addition to traditional treatments.
Intravenous fluids
In those with poor tissue perfusion, boluses of isotonic crystalloid solution should be given. In children with more than 10–20% TBSA (Total Body Surface Area) burns, and adults with more than 15% TBSA burns, formal fluid resuscitation and monitoring should follow. This should be begun pre-hospital if possible in those with burns greater than 25% TBSA. The Parkland formula can help determine the volume of intravenous fluids required over the first 24 hours. The formula is based on the affected individual's TBSA and weight. Half of the fluid is administered over the first 8 hours, and the remainder over the following 16 hours. The time is calculated from when the burn occurred, and not from the time that fluid resuscitation began. Children require additional maintenance fluid that includes glucose. Additionally, those with inhalation injuries require more fluid. While inadequate fluid resuscitation may cause problems, over-resuscitation can also be detrimental. The formulas are only a guide, with infusions ideally tailored to a urinary output of >30 mL/h in adults or >1 mL/kg/h in children and a mean arterial pressure greater than 60 mmHg.
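The text names the Parkland formula without stating its coefficient; the widely quoted form is 4 mL of crystalloid per kg of body weight per %TBSA over the first 24 hours, half in the first 8 hours. Treating that constant as an assumption carried over from common usage, a sketch (not clinical guidance):

```python
def parkland(weight_kg, tbsa_percent, ml_per_kg_per_pct=4.0):
    """Total first-24-hour crystalloid volume, split 8 h / 16 h.
    The 4 mL coefficient is the commonly quoted value, not taken
    from this article."""
    total_ml = ml_per_kg_per_pct * weight_kg * tbsa_percent
    return {"first 8 h (mL)": total_ml / 2, "next 16 h (mL)": total_ml / 2}

print(parkland(70, 20))  # 5600 mL total: 2800 + 2800
```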
While lactated Ringer's solution is often used, there is no evidence that it is superior to normal saline. Crystalloid fluids appear just as good as colloid fluids, and as colloids are more expensive they are not recommended. Blood transfusions are rarely required. They are typically only recommended when the hemoglobin level falls below 60-80 g/L (6-8 g/dL) due to the associated risk of complications. Intravenous catheters may be placed through burned skin if needed or intraosseous infusions may be used.
Wound care
Early cooling (within 30 minutes of the burn) reduces burn depth and pain, but care must be taken as over-cooling can result in hypothermia. It should be performed with cool water and not ice water as the latter can cause further injury. Chemical burns may require extensive irrigation. Cleaning with soap and water, removal of dead tissue, and application of dressings are important aspects of wound care. If intact blisters are present, it is not clear what should be done with them. Some tentative evidence supports leaving them intact. Second-degree burns should be re-evaluated after two days.
In the management of first and second-degree burns, little quality evidence exists to determine which dressing type to use. It is reasonable to manage first-degree burns without dressings. While topical antibiotics are often recommended, there is little evidence to support their use. Silver sulfadiazine (a type of antibiotic) is not recommended as it potentially prolongs healing time. There is insufficient evidence to support the use of dressings containing silver or negative-pressure wound therapy. Silver sulfadiazine does not appear to differ from silver containing foam dressings with respect to healing.
Medications
Burns can be very painful and a number of different options may be used for pain management. These include simple analgesics (such as ibuprofen and acetaminophen) and opioids such as morphine. Benzodiazepines may be used in addition to analgesics to help with anxiety. During the healing process, antihistamines, massage, or transcutaneous nerve stimulation may be used to aid with itching. Antihistamines, however, are only effective for this purpose in 20% of people. There is tentative evidence supporting the use of gabapentin and its use may be reasonable in those who do not improve with antihistamines. Intravenous lidocaine requires more study before it can be recommended for pain.
Intravenous antibiotics are recommended before surgery for those with extensive burns (>60% TBSA). Current guidelines do not recommend their general use, due to concerns regarding antibiotic resistance and the increased risk of fungal infections. Tentative evidence, however, shows that they may improve survival rates in those with large and severe burns. Erythropoietin has not been found effective to prevent or treat anemia in burn cases. In burns caused by hydrofluoric acid, calcium gluconate is a specific antidote and may be used intravenously and/or topically. Recombinant human growth hormone (rhGH) in those with burns that involve more than 40% of their body appears to speed healing without affecting the risk of death. The evidence on the use of steroids is unclear.
Allogeneic cultured keratinocytes and dermal fibroblasts in murine collagen (Stratagraft) was approved for medical use in the United States in June 2021.
Surgery
Wounds requiring surgical closure with skin grafts or flaps (typically anything more than a small full thickness burn) should be dealt with as early as possible. Circumferential burns of the limbs or chest may need urgent surgical release of the skin, known as an escharotomy. This is done to treat or prevent problems with distal circulation, or ventilation. It is uncertain if it is useful for neck or digit burns. Fasciotomies may be required for electrical burns.
Skin grafts can involve temporary skin substitutes, derived from animal (human donor or pig) skin or synthesized. They are used to cover the wound as a dressing, preventing infection and fluid loss, but will eventually need to be removed. Alternatively, human skin can be treated to be left on permanently without rejection.
There is no evidence that the use of copper sulphate to visualise phosphorus particles for removal can help with wound healing due to phosphorus burns. Meanwhile, absorption of copper sulphate into the blood circulation can be harmful.
Alternative medicine
Honey has been used since ancient times to aid wound healing and may be beneficial in first- and second-degree burns. There is moderate evidence that honey helps heal partial thickness burns. The evidence for aloe vera is of poor quality. While it might be beneficial in reducing pain, and a review from 2007 found tentative evidence of improved healing times, a subsequent review from 2012 did not find improved healing over silver sulfadiazine. There were only three randomized controlled trials for the use of plants for burns, two for aloe vera and one for oatmeal.
There is little evidence that vitamin E helps with keloids or scarring. Butter is not recommended. In low income countries, burns are treated up to one-third of the time with traditional medicine, which may include applications of eggs, mud, leaves or cow dung. Surgical management is limited in some cases due to insufficient financial resources and availability. There are a number of other methods that may be used in addition to medications to reduce procedural pain and anxiety including: virtual reality therapy, hypnosis, and behavioral approaches such as distraction techniques.
Patient support
Burn patients require support and care – both physiological and psychological. Respiratory failure, sepsis, and multi-organ system failure are common in hospitalized burn patients. To prevent hypothermia and maintain normal body temperature, burn patients with over 20% of burn injuries should be kept in an environment with the temperature at or above 30 degrees Celsius.
Metabolism in burn patients proceeds at a higher than normal speed due to the whole-body process and rapid fatty acid substrate cycles, which can be countered with an adequate supply of energy, nutrients, and antioxidants. Enteral feeding a day after resuscitation is required to reduce risk of infection, recovery time, non-infectious complications, hospital stay, long-term damage, and mortality. Controlling blood glucose levels can have an impact on liver function and survival.
Risk of thromboembolism is high and acute respiratory distress syndrome (ARDS) that does not resolve with maximal ventilator use is also a common complication. Scars are long-term after-effects of a burn injury. Psychological support is required to cope with the aftermath of a fire accident, while to prevent scars and long-term damage to the skin and other body structures consulting with burn specialists, preventing infections, consuming nutritious foods, early and aggressive rehabilitation, and using compressive clothing are recommended.
Prognosis
The prognosis is worse in those with larger burns, those who are older, and females. The presence of a smoke inhalation injury, other significant injuries such as long bone fractures, and serious co-morbidities (e.g. heart disease, diabetes, psychiatric illness, and suicidal intent) also influence prognosis. On average, of those admitted to burn centers in the United States, 4% die, with the outcome for individuals dependent on the extent of the burn injury. For example, admittees with burn areas less than 10% TBSA had a mortality rate of less than 1%, while admittees with over 90% TBSA had a mortality rate of 85%. In Afghanistan, people with more than 60% TBSA burns rarely survive. The Baux score has historically been used to determine prognosis of major burns. However, with improved care, it is no longer very accurate. The score is determined by adding the size of the burn (% TBSA) to the age of the person and taking that to be more or less equal to the risk of death. Burns in 2013 resulted in 1.2 million years lived with disability and 12.3 million disability adjusted life years.
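Since the Baux score reduces to a single addition, a one-line sketch suffices (illustrative only; as the text notes, the score is no longer very accurate under modern care):

```python
def baux_score(age_years, tbsa_percent):
    """Original Baux score: age plus burn size, read as a rough % risk of death."""
    return age_years + tbsa_percent

print(baux_score(60, 30))  # 90
```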
Complications
A number of complications may occur, with infections being the most common. In order of frequency, potential complications include: pneumonia, cellulitis, urinary tract infections and respiratory failure. Risk factors for infection include: burns of more than 30% TBSA, full-thickness burns, extremes of age (young or old), or burns involving the legs or perineum. Pneumonia occurs particularly commonly in those with inhalation injuries.
Anemia secondary to full thickness burns of greater than 10% TBSA is common. Electrical burns may lead to compartment syndrome or rhabdomyolysis due to muscle breakdown. Blood clotting in the veins of the legs is estimated to occur in 6 to 25% of people. The hypermetabolic state that may persist for years after a major burn can result in a decrease in bone density and a loss of muscle mass. Keloids may form subsequent to a burn, particularly in those who are young and dark skinned. Following a burn, children may have significant psychological trauma and experience post-traumatic stress disorder. Scarring may also result in a disturbance in body image. To treat hypertrophic scars (raised, tense, stiff and itchy scars) and limit their effect on physical function and everyday activities, silicone sheeting and compression garments are recommended. In the developing world, significant burns may result in social isolation, extreme poverty and child abandonment.
Epidemiology
In 2015 fire and heat resulted in 67 million injuries. This resulted in about 2.9 million hospitalizations and 238,000 dying. This is down from 300,000 deaths in 1990. This makes it the fourth leading cause of injuries after motor vehicle collisions, falls, and violence. About 90% of burns occur in the developing world. This has been attributed partly to overcrowding and an unsafe cooking situation. Overall, nearly 60% of fatal burns occur in Southeast Asia with a rate of 11.6 per 100,000. The number of fatal burns has changed from 280,000 in 1990 to 176,000 in 2015.
In the developed world, adult males have twice the mortality as females from burns. This is most probably due to their higher risk occupations and greater risk-taking activities. In many countries in the developing world, however, females have twice the risk of males. This is often related to accidents in the kitchen or domestic violence. In children, deaths from burns occur at more than ten times the rate in the developing than the developed world. Overall, in children it is one of the top fifteen leading causes of death. From the 1980s to 2004, many countries have seen both a decrease in the rates of fatal burns and in burns generally.
Developed countries
An estimated 500,000 burn injuries receive medical treatment yearly in the United States. They resulted in about 3,300 deaths in 2008. Most burns (70%) and deaths from burns occur in males. The highest incidence of fire burns occurs in those 18–35 years old, while the highest incidence of scalds occurs in children less than five years old and adults over 65. Electrical burns result in about 1,000 deaths per year. Lightning results in the death of about 60 people a year. In Europe, intentional burns occur most commonly in middle-aged men.
Developing countries
In India, about 700,000 to 800,000 people per year sustain significant burns, though very few are looked after in specialist burn units. The highest rates occur in women 16–35 years of age. Part of this high rate is related to unsafe kitchens and loose-fitting clothing typical to India. It is estimated that one-third of all burns in India are due to clothing catching fire from open flames. Intentional burns are also a common cause and occur at high rates in young women, secondary to domestic violence and self-harm.
Polyvinyl acetate
https://en.wikipedia.org/wiki/Polyvinyl%20acetate
Polyvinyl acetate (PVA, PVAc, poly(ethenyl ethanoate)), commonly known as wood glue (a term that may also refer to other types of glues), PVA glue, white glue, carpenter's glue, school glue, or Elmer's Glue in the US, is a widely available adhesive used for porous materials like wood, paper, and cloth. An aliphatic rubbery synthetic polymer with the formula (C4H6O2)n, it belongs to the polyvinyl ester family, with the general formula −[RCOOCHCH2]−. It is a type of thermoplastic.
Properties
The degree of polymerization of polyvinyl acetate is typically 100 to 5000, while its ester groups are sensitive to base hydrolysis and slowly convert PVAc into polyvinyl alcohol and acetic acid.
The glass transition temperature of polyvinyl acetate is between 30 and 45 °C depending on the molecular weight.
PVAc dispersions such as Elmer's Glue-All contain polyvinyl alcohol as a protective colloid. In alkaline conditions, boron compounds such as boric acid or borax cause the polyvinyl alcohol to cross-link, forming tacky precipitates used in toys such as Slime and Flubber.
A number of microorganisms can degrade polyvinyl acetate. Most commonly, damage is caused by filamentous fungi; however, algae, yeasts, lichens, and bacteria can also degrade polyvinyl acetate.
Discovery
Polyvinyl acetate was discovered in Germany in 1912 by Fritz Klatte.
The monomer, vinyl acetate, was first produced on an industrial scale by the addition of acetic acid to acetylene with a mercury(I) salt, but it is now primarily made by palladium-catalyzed oxidative addition of acetic acid to ethylene.
Preparation
PVA is a vinyl polymer. Polyvinyl acetate is prepared by the polymerization of vinyl acetate monomer (free-radical vinyl polymerization of the monomer vinyl acetate).
Applications
As a dispersion in water (usually an emulsion), PVAc preparations are used as adhesives for porous materials, particularly for wood, paper, and cloth, and as a consolidant for porous building stone, in particular sandstone. PVAc is considered a food-safe material, and is thus used often in such applications (e.g., in food packaging material).
Uses:
As wood glue, PVAc is known as "white glue" and the yellow as "carpenter's glue".
As paper adhesive during paper packaging conversion.
In bookbinding and book arts, due to its flexible strong bond and non-acidic nature (unlike many other polymers). The use of PVAc on the Archimedes Palimpsest during the 20th century greatly hindered the task of disbinding the book and preserving and imaging the pages in the early 21st century, in part because the glue was stronger than the parchment it held together.
In handicrafts.
As envelope adhesive.
As wallpaper adhesive.
As a primer for drywall and other substrates.
As a gum base in chewing gum.
As a water-soluble support material for 3D printing, usually for the fused filament fabrication method.
As an adhesive for cigarette paper.
As the coating layer on Gouda cheese.
The stiff homopolymer PVAc, but mostly the softer copolymer, a combination of vinyl acetate and ethylene, vinyl acetate ethylene (VAE), is also used in paper coatings, paint and other industrial coatings, as a binder in nonwovens in glass fibers, sanitary napkins, filter paper and in textile finishing.
Polyvinyl acetate is also the raw material to make other polymers like:
Polyvinyl alcohol −[HOCHCH2]−: Polyvinyl acetate is partially or completely hydrolysed to give polyvinyl alcohol. This reversible saponification and esterification reaction was a strong hint for Hermann Staudinger in the formulation of his theory of macromolecules.
Polyvinyl acetate phthalate (PVAP): Polyvinyl acetate is partially hydrolyzed and then esterified with phthalic acid.
Tiger shark
https://en.wikipedia.org/wiki/Tiger%20shark
The tiger shark (Galeocerdo cuvier) is a species of ground shark, and the only extant member of the genus Galeocerdo and family Galeocerdonidae. It is a large macropredator, with females capable of attaining a length of over . Populations are found in many tropical and temperate waters, especially around central Pacific islands. Its name derives from the dark stripes down its body, which resemble a tiger's pattern, but fade as the shark matures.
The tiger shark is a solitary, mostly nocturnal hunter. It is notable for having the widest food spectrum of all sharks, with a range of prey that includes crustaceans, fish, seals, birds, squid, turtles, sea snakes, dolphins, and others, even smaller sharks. It also has a reputation as a "garbage eater", consuming a variety of inedible, man-made objects that linger in its stomach. Tiger sharks have only one recorded natural predator, the orca. It is considered a near threatened species because of finning and fishing by humans.
The tiger shark is second only to the great white in recorded fatal attacks on humans, but these events are still exceedingly rare.
Taxonomy
The shark was first described by Peron and Lesueur in 1822, and was given the name Squalus cuvier. Müller and Henle in 1837 renamed it Galeocerdo tigrinus. The genus, Galeocerdo, is derived from the Greek galeos, which means shark, and kerdo, the word for fox. It is often colloquially called the man-eater shark.
The tiger shark is a member of the order Carcharhiniformes, the most species-rich order of sharks, with more than 270 species also including the small catsharks and hammerhead sharks. Members of this order are characterized by the presence of a nictitating membrane over the eyes, two dorsal fins, an anal fin, and five gill slits. It is the largest member of the order, commonly referred to as ground sharks. It is the only extant member of Galeocerdo, the only member of the family Galeocerdonidae. The oldest remains of Galeocerdo extend back to the Eocene epoch, while the oldest fossils of the modern tiger shark Galeocerdo cuvier date to the Middle Miocene, around 13.8 million years ago.
Description
The tiger shark commonly attains adult length of and weighs between . The International Game Fish Association's all-tackle record is . It is sexually dimorphic, with females being the larger sex. Mature females are often over while mature males rarely get that large. Exceptionally large females reportedly can measure over , and the largest males . Weights of particularly large female tiger sharks can exceed . One pregnant female caught off Australia reportedly measured long and weighed . Even larger unconfirmed catches have been claimed. Some papers have accepted a record of an exceptional , tiger shark, but since this is far larger than any scientifically observed specimen, verification would be needed. A 2019 study suggested that Pliocene tiger sharks could have reached in maximum length. Growth rates of juvenile tiger sharks vary by region, with some growing close to twice as fast as others.
Among the largest extant sharks, the tiger shark ranks in average size only behind the whale shark (Rhincodon typus), the basking shark (Cetorhinus maximus), and the great white shark (Carcharodon carcharias). This makes it the second-largest predatory shark, after the great white. Some other species such as megamouth sharks (Megachasma pelagios), Pacific sleeper sharks (Somniosus pacificus), Greenland sharks (Somniosus microcephalus), and bluntnose sixgill sharks (Hexanchus griseus) broadly overlap in size with the tiger shark, but as these species are comparatively poorly studied, whether their typical mature size matches that of the tiger shark is unclear. The great hammerhead (Sphyrna mokarran), a member of the same taxonomic order as the tiger shark, has a similar or even greater average body length, but is lighter and less bulky, with a maximum known weight coming from a heavily pregnant long individual at .
Tiger shark teeth are unique, with very sharp, pronounced serrations and an unmistakable sideways-pointing tip. Such dentition has developed to slice through flesh, bone, and other tough substances such as turtle shells. Like most sharks, its teeth are continually replaced by rows of new teeth throughout the shark's life. Relative to the shark's size, tiger shark teeth are considerably shorter than those of a great white shark, but they are nearly as broad at the root as the great white's teeth and are arguably better suited to slicing through hard-surfaced prey.
A tiger shark generally has long fins to provide lift as the shark maneuvers through water, while the long upper tail provides bursts of speed. The tiger shark normally swims using small body movements.
Skin
The skin of a tiger shark can typically range from blue to light green with a white or light-yellow underbelly. This countershading camouflages the hunting shark: seen from above, it blends into the darker water below; seen from below, its light underbelly blends into the sunlit surface. Dark spots and stripes are most visible in young sharks and fade as the shark matures. Its head is somewhat wedge-shaped, which makes it easy to turn quickly to one side. Tiger sharks have small pits on the snout which hold electroreceptors called the ampullae of Lorenzini, enabling them to detect electric fields, including the weak electrical impulses generated by prey, which helps them hunt. They also have a sensory organ called the lateral line, which extends along most of the length of each flank; its primary role is to detect minute vibrations in the water. These adaptations allow the tiger shark to hunt in darkness and detect hidden prey.
Vision
Sharks do not have moveable upper or lower eyelids, but the tiger shark—among other sharks—has a nictitating membrane, a clear eyelid that can cover the eye. A reflective layer behind the tiger shark's retina, called the tapetum lucidum, allows light-sensing cells a second chance to capture photons of visible light. This enhances vision in low-light conditions.
Distribution and habitat
The tiger shark is often found close to the coast, mainly in tropical and subtropical waters throughout the world. Its behavior is primarily nomadic, but is guided by warmer currents, and it stays closer to the equator throughout the colder months. It tends to stay in deep waters that line reefs, but it does move into channels to pursue prey in shallower waters. In the western Pacific Ocean, the shark has been found as far north as Japan and as far south as New Zealand. It has also been recorded in the Mediterranean Sea, but rarely, off Malaga (Spain), Sicily (Italy) and Libya.
Tiger sharks can be seen in the Gulf of Mexico, North American beaches, and parts of South America. It is also commonly observed in the Caribbean Sea. Other locations where tiger sharks are seen include off Africa, China, India, Australia, and Indonesia. Certain tiger sharks have been recorded at depths just shy of .
Feeding
The tiger shark is an apex predator and has a reputation for eating almost anything. These predators swim close inland to eat at night, and during the day swim out into deeper waters. Young tiger sharks are found to feed largely on small fish, as well as various small jellyfish, and mollusks including cephalopods. Around the time they attain , or near sexual maturity, their selection expands considerably, and much larger animals become regular prey. Numerous fish, mollusks (including gastropods and cephalopods), crustaceans, sea birds, sea snakes, marine mammals (e.g. bottlenose dolphins (Tursiops), common dolphins (Delphinus), spotted dolphins (Stenella), dugongs (Dugong dugon), seals and sea lions), and sea turtles (including the three largest species: the leatherback (Dermochelys coriacea), the loggerhead (Caretta caretta) and the green sea turtles (Chelonia mydas)), are regularly eaten by adult tiger sharks. In fact, adult sea turtles have been found in up to 20.8% of studied tiger shark stomachs, indicating somewhat of a dietary preference for sea turtles where they are commonly encountered. They also eat other sharks (including adult sandbar sharks (Carcharhinus plumbeus)), as well as rays, and sometimes even other tiger sharks.
Due to high risk of predation, dolphins often avoid regions inhabited by tiger sharks. Injured or ailing whales may also be attacked and eaten. A group was documented killing an ailing humpback whale (Megaptera novaeangliae) in 2006 near Hawaii. A scavenger, the tiger shark will feed on dead whales, and has been documented doing so alongside great white sharks. Tiger sharks have also been observed to feed on dead manta rays in the German Channel of Palau.
Evidence of dugong predation was identified in one study that found dugong tissue in 15 of 85 tiger sharks caught off the Australian coast. Additionally, examination of adult dugongs has shown scars from failed shark attacks. To minimize attacks, dugong microhabitats shift similarly to those of known tiger shark prey when the sharks are abundant.
The broad, heavily calcified jaws and nearly terminal mouth, combined with robust, serrated teeth, enable the tiger shark to take on these large prey. In addition, excellent eyesight and acute sense of smell enable it to react to faint traces of blood and follow them to the source. The ability to pick up low-frequency pressure waves enables the shark to advance towards an animal with confidence, even in murky water. The shark circles its prey and studies it by prodding it with its snout. When attacking, the shark often eats its prey whole, although larger prey are often eaten in gradual large bites and finished over time.
Notably, terrestrial mammals, including horses (Equus ferus caballus), goats (Capra aegagrus hircus), sheep (Ovis aries), dogs (Canis lupus familiaris), cats (Felis catus), and brown rats (Rattus norvegicus), are fairly common in the stomach contents of tiger sharks around the coasts of Hawaii. In one case, remains of two flying foxes were found in the stomach of this shark, and in another, an echidna (Tachyglossus aculeatus) was regurgitated by a tiger shark being tagged off Orpheus Island, Queensland. Because of its aggressive and indiscriminate feeding style, it often mistakenly eats inedible objects, such as automobile license plates, oil cans, tires, and baseballs. Due to its habit of eating essentially anything, the tiger shark is often referred to as the "garbage can of the sea".
Predation by orcas
Tiger sharks are preyed on by orcas. Orcas have been recorded hunting and killing tiger sharks by holding them upside down to induce tonic immobility in order to drown the shark. The orcas bite off the shark's fins before disemboweling and devouring it.
Swimming efficiency and stealth
All tiger sharks generally swim slowly, which, combined with cryptic coloration, may make them difficult for prey to detect in some habitats. They are especially well camouflaged against dark backgrounds. Despite their sluggish appearance, tiger sharks are one of the strongest swimmers of the carcharhinid sharks. Once the shark has come close, a speed burst allows it to reach the intended prey before it can escape.
Reproduction
Males reach sexual maturity at and females at . Typical weight of relatively young sexually mature specimens, which often locally comprise the majority of tiger sharks encountered per game-fishing and scientific studies, is around . Females mate once every three years. They breed by internal fertilization. The male inserts one of his claspers into the female's genital opening (cloaca), acting as a guide for the sperm. The male uses his teeth to hold the female still during the procedure, often causing the female considerable discomfort. Mating in the Northern Hemisphere generally takes place between March and May, with birth between April and June the following year. In the Southern Hemisphere, mating takes place in November, December, or early January. The tiger shark is the only species in its family that is ovoviviparous; its eggs hatch internally and the young are born live when fully developed.
Tiger sharks are unique among sharks in that they employ embryotrophy to nourish their young inside the womb. The young gestate in sacs filled with a fluid that nourishes them, which allows them to increase dramatically in size even though they have no placental connection to the mother.
The young develop inside the mother's body up to 16 months. Litters range from 10 to 80 pups. A newborn is generally long. How long tiger sharks live is unknown, but they can live longer than 12 years.
Ontogeny
Tiger shark ontogeny was little studied until recently, but studies by Hammerschlag et al. indicate that their tails become more symmetrical as they grow. Additionally, while the heads of juvenile tiger sharks are conical and similar to those of other requiem sharks, adult tiger sharks have relatively broader heads. The larger caudal fin of juvenile tiger sharks is theorized to be an adaptation for escaping predation by larger predators and for catching quicker-moving prey. As tiger sharks mature, the head becomes much wider, and the tail no longer remains as large in proportion to the body as in juveniles, because mature sharks do not face elevated predation risk. The results of this study were interpreted as reflecting two ecological transitions: tiger sharks become more migratory as they mature, and a symmetrical tail is more advantageous for long-distance travel; and they consume more diverse prey with age, which requires a greater bite force and a broader head.
Conservation
The tiger shark is captured and killed for its fins, flesh, and liver. It is caught regularly in target and nontarget fisheries. Several populations have declined where they have been heavily fished. Continued demand for fins may result in further declines. According to the International Union for Conservation of Nature (IUCN), the tiger shark is considered a near threatened species due to excessive finning and fishing by humans. In June 2018, the New Zealand Department of Conservation classified the tiger shark as "Migrant" with the qualifier "Secure Overseas" under the New Zealand Threat Classification System.
While shark fin has very few nutrients, shark liver has a high concentration of vitamin A, which is used in the production of vitamin oils. In addition, the tiger shark is captured and killed for its distinct skin, as well as by big-game fishers.
In 2010, Greenpeace International added the tiger shark to its seafood red list, which is a list of commonly sold fish likely to come from unsustainable fisheries.
Relationship with humans
Although sharks rarely bite humans, the tiger shark is reported to be responsible for a large share of fatal shark-bite incidents, and is regarded as one of the most dangerous shark species. They often visit shallow reefs, harbors, and canals, creating the potential for encounter with humans. The tiger shark also dwells in river mouths and other runoff-rich water. While it ranks second on the list of number of recorded shark attacks on humans, behind only the great white shark, such attacks are few and very seldom fatal. Typically, three to four shark bites occur per year in Hawaii; one notable survivor of such an attack is surfing champion Bethany Hamilton, who lost her left arm at age 13 to a tiger shark in 2003. This bite rate is very low, considering that thousands of people swim, surf, and dive in Hawaiian waters every day. Human interactions with tiger sharks in Hawaiian waters have been shown to increase between September and November, when tiger shark females are believed to migrate to the islands to give birth.
On 8 June 2023, a tiger shark attacked and killed a 23-year-old Russian man in the Red Sea off the coast of the Egyptian city of Hurghada. The attack was filmed by onlookers and the recording went viral. The shark was later captured by fishermen and killed. This was the third fatal tiger shark attack in the area since 2022.
Between 1959 and 1976, 4,668 tiger sharks were culled in the state of Hawaii in an effort to protect the tourism industry. Despite damaging the shark population, these efforts were shown to be ineffective in decreasing the number of interactions between humans and tiger sharks. Feeding sharks in Hawaii (except for traditional Hawaiian cultural or religious practices) is illegal, and interaction with them, such as cage diving, is discouraged. South African shark behaviorist and shark diver Mark Addison demonstrated divers could interact and dive with them outside of a shark cage in a 2007 Discovery Channel special, and underwater photographer Fiona Ayerst swam with them in the Bahamas. At "Tiger Beach" off Grand Bahama, uncaged diving with – and even the handling of – female tiger sharks has become a routine occurrence.
Warming Atlantic Ocean currents have caused tiger shark migration paths to move further north, according to a University of Miami study.
Mythology
Tiger sharks are considered to be sacred aumākua (ancestor spirits) by some native Hawaiians. Tiger sharks possess a unique significance as aumakua, revered as family guardians in Hawaiian culture. The tiger shark, regarded as an intelligent and highly perceptive spiritual entity, assumes the role of a messenger bridging the gap between humans and the divine. In the Hawaiian belief system, aumakua take on various forms, either animals or objects, representing ancestral connections and manifestations of departed family members. This perspective reflects the intricate web of interdependence among plants, animals, elements, and humans, underscoring the imperative to honor and coexist harmoniously with nature.
| Biology and health sciences | Sharks | null |
233253 | https://en.wikipedia.org/wiki/Umbilical%20cord | Umbilical cord | In placental mammals, the umbilical cord (also called the navel string, birth cord or funiculus umbilicalis) is a conduit between the developing embryo or fetus and the placenta. During prenatal development, the umbilical cord is physiologically and genetically part of the fetus and (in humans) normally contains two arteries (the umbilical arteries) and one vein (the umbilical vein), buried within Wharton's jelly. The umbilical vein supplies the fetus with oxygenated, nutrient-rich blood from the placenta. Conversely, the fetal heart pumps low-oxygen, nutrient-depleted blood through the umbilical arteries back to the placenta.
Structure and development
The umbilical cord develops from and contains remnants of the yolk sac and allantois. It forms by the fifth week of development, replacing the yolk sac as the source of nutrients for the embryo. The cord is not directly connected to the mother's circulatory system, but instead joins the placenta, which transfers materials to and from the maternal blood without allowing direct mixing. The length of the umbilical cord is approximately equal to the crown-rump length of the fetus throughout pregnancy. The umbilical cord in a full-term neonate is usually about 50 centimeters (20 in) long and about 2 centimeters (0.75 in) in diameter. This diameter decreases rapidly within the placenta. The fully patent umbilical artery has two main layers: an outer layer of circularly arranged smooth muscle cells, and an inner layer of rather irregularly and loosely arranged cells embedded in abundant ground substance that stains metachromatically. The smooth muscle cells of the outer layer are rather poorly differentiated, contain only a few tiny myofilaments, and are therefore unlikely to contribute actively to the process of post-natal closure.
The umbilical cord can be detected on ultrasound by six weeks of gestation and is well visualised by eight to nine weeks of gestation.
The umbilical cord lining is a good source of mesenchymal and epithelial stem cells. Umbilical cord mesenchymal stem cells (UC-MSC) have been used clinically to treat osteoarthritis, autoimmune diseases, and multiple other conditions. Their advantages include easier harvesting and multiplication, as well as immunosuppressive properties that define their potential for use in transplantation. Their use would also overcome the ethical objections raised by the use of embryonic stem cells.
The umbilical cord contains Wharton's jelly, a gelatinous substance made largely from mucopolysaccharides that protects the blood vessels inside. It contains one vein, which carries oxygenated, nutrient-rich blood to the fetus, and two arteries that carry deoxygenated, nutrient-depleted blood away. Occasionally, only two vessels (one vein and one artery) are present in the umbilical cord. This is sometimes related to fetal abnormalities, but it may also occur without accompanying problems.
It is unusual for a vein to carry oxygenated blood and for arteries to carry deoxygenated blood (the only other examples being the pulmonary veins and arteries, connecting the lungs to the heart). However, this naming convention reflects the fact that the umbilical vein carries blood towards the fetus' heart, while the umbilical arteries carry blood away.
The blood flow through the umbilical cord is approximately 35 ml / min at 20 weeks, and 240 ml / min at 40 weeks of gestation. Adapted to the weight of the fetus, this corresponds to 115 ml / min / kg at 20 weeks and 64 ml / min / kg at 40 weeks.
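As a rough consistency check of these figures (a minimal sketch in Python; the fetal weights are back-calculated from the quoted flows and are not independently measured values):

# Back-calculate the fetal weight implied by the quoted umbilical blood flows.
flows = {
    20: {"total_ml_min": 35, "per_kg_ml_min": 115},   # at 20 weeks of gestation
    40: {"total_ml_min": 240, "per_kg_ml_min": 64},   # at 40 weeks of gestation
}
for week, f in flows.items():
    weight_kg = f["total_ml_min"] / f["per_kg_ml_min"]
    print(f"{week} weeks: implied fetal weight of about {weight_kg:.2f} kg")
# Prints roughly 0.30 kg at 20 weeks and 3.75 kg at 40 weeks, consistent with
# typical fetal weights at those gestational ages.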
For terms of location, the proximal part of an umbilical cord is the segment closest to the embryo or fetus in embryology and fetal medicine, but closest to the placenta in placental pathology; the distal part is the opposite in each context.
Function
Connection to fetal circulatory system
The umbilical cord enters the fetus via the abdomen, at the point which (after separation) will become the umbilicus (belly button or navel). Within the fetus, the umbilical vein continues towards the transverse fissure of the liver, where it splits into two. One of these branches joins with the hepatic portal vein (connecting to its left branch), which carries blood into the liver. The second branch (known as the ductus venosus) bypasses the liver and flows into the inferior vena cava, which carries blood towards the heart. The two umbilical arteries branch from the internal iliac arteries and pass on either side of the urinary bladder into the umbilical cord, completing the circuit back to the placenta.
Changes after birth
After birth, the umbilical cord stump will dry up and drop away by the time the baby is three weeks old. If the stump still has not separated after three weeks, it might be a sign of an underlying problem, such as an infection or immune system disorder.
In absence of external interventions, the umbilical cord occludes physiologically shortly after birth, explained both by a swelling and collapse of Wharton's jelly in response to a reduction in temperature and by vasoconstriction of the blood vessels through smooth muscle contraction. In effect, a natural clamp is created, halting the flow of blood. In air at 18 °C, this physiological clamping takes three minutes or less. In a water birth, where the water temperature is close to body temperature, normal pulsation can continue for five minutes or longer.
Closure of the umbilical artery by vasoconstriction consists of multiple constrictions which increase in number and degree with time. There are segments of dilations with trapped uncoagulated blood between the constrictions before complete occlusion. Both the partial constrictions and the ultimate closure are mainly produced by muscle cells of the outer circular layer. In contrast, the inner layer seems to serve mainly as a plastic tissue which can easily be shifted in an axial direction and then folded into the narrowing lumen to complete the closure. The vasoconstrictive occlusion appears to be mainly mediated by serotonin and thromboxane A2. The artery in cords of preterm infants contracts more to angiotensin II and arachidonic acid and is more sensitive to oxytocin than in term ones. In contrast to the contribution of Wharton's jelly, cooling causes only temporary vasoconstriction.
Within the child, the umbilical vein and ductus venosus close up, and degenerate into fibrous remnants known as the round ligament of the liver and the ligamentum venosum respectively. Part of each umbilical artery closes up (degenerating into what are known as the medial umbilical ligaments), while the remaining sections are retained as part of the circulatory system.
Clinical significance
Problems and abnormalities
A number of abnormalities can affect the umbilical cord, which can cause problems that affect both mother and child:
Umbilical cord compression can result from, for example, entanglement of the cord, a knot in the cord, or a nuchal cord (the wrapping of the umbilical cord around the fetal neck), but these conditions do not always cause obstruction of fetal circulation.
Velamentous cord insertion
Single umbilical artery
Umbilical cord prolapse
Vasa praevia
Clamping and cutting
The cord can be clamped at different times; however, delaying the clamping of the umbilical cord until at least one minute after birth improves outcomes, as long as there is the ability to treat the small risk of jaundice if it occurs. Clamping is followed by cutting of the cord, which is painless due to the absence of nerves. The cord is extremely tough, like thick sinew, and so cutting it requires a suitably sharp instrument. While umbilical severance may be delayed until after the cord has stopped pulsing (one to three minutes after birth), there is ordinarily no significant loss of either venous or arterial blood while cutting the cord. Current evidence neither supports nor refutes delayed cutting of the cord, according to the American Congress of Obstetricians and Gynecologists (ACOG) guidelines.
There are umbilical cord clamps which incorporate a knife. These clamps are safer and faster, allowing one to first apply the cord clamp and then cut the umbilical cord. After the cord is clamped and cut, the newborn wears a plastic clip on the navel area until the compressed region of the cord has dried and sealed sufficiently.
The length of umbilical cord left attached to the newborn varies by practice; in most hospital settings the length left after clamping and cutting is minimal. In the United States, however, when the birth occurs outside of the hospital and an emergency medical technician (EMT) clamps and cuts the cord, a longer segment up to in length is left attached to the newborn.
The remaining umbilical stub remains for up to ten days as it dries and then falls off.
Early versus delayed clamping
A Cochrane review in 2013 came to the conclusion that delayed cord clamping (between one and three minutes after birth) is "likely to be beneficial as long as access to treatment for jaundice requiring phototherapy is available". In this review, delayed clamping, as contrasted with early clamping, resulted in no difference in the risk of severe maternal postpartum hemorrhage, neonatal mortality, or a low Apgar score. On the other hand, delayed clamping resulted in an increased birth weight of on average about 100 g, and an increased hemoglobin concentration of on average 1.5 g/dL, with half the risk of being iron deficient at three and six months, but an increased risk of jaundice requiring phototherapy.
In 2012, the American College of Obstetricians and Gynecologists officially endorsed delaying clamping of the umbilical cord for 30–60 seconds with the newborn held below the level of the placenta in all cases of preterm delivery based largely on evidence that it reduces the risk of intraventricular hemorrhage in these children by 50%. In the same committee statement, ACOG also recognize several other likely benefits for preterm infants, including "improved transitional circulation, better establishment of red blood cell volume, and decreased need for blood transfusion". In January 2017, a revised Committee Opinion extended the recommendation to term infants, citing data that term infants benefit from increased hemoglobin levels in the newborn period and improved iron stores in the first months of life, which may result in improved developmental outcomes. ACOG recognized a small increase in the incidence of jaundice in term infants with delayed cord clamping, and recommended policies be in place to monitor for and treat neonatal jaundice. ACOG also noted that delayed cord clamping is not associated with increased risk of postpartum hemorrhage.
Several studies have shown benefits of delayed cord clamping: A meta-analysis showed that delaying clamping of the umbilical cord in full-term neonates for a minimum of two minutes following birth is beneficial to the newborn, giving improved hematocrit, improved iron status as measured by ferritin concentration and stored iron, and a reduction in the risk of anemia (relative risk, 0.53; 95% CI, 0.40–0.70). A similar decrease was also found in a study from 2008. Although there is a higher hemoglobin level at 2 months, this effect did not persist beyond 6 months of age. Not clamping the cord for three minutes following the birth of a baby improved outcomes at four years of age. A delay of three minutes or more in umbilical cord clamping after birth reduces the prevalence of anemia in infants.
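For readers unfamiliar with the statistic, the quoted relative risk can be unpacked as follows (a minimal illustrative sketch; only the 0.53 point estimate comes from the meta-analysis cited above):

# Relative risk (RR) compares the probability of an outcome between two groups:
#   RR = P(anemia | delayed clamping) / P(anemia | early clamping)
rr = 0.53               # point estimate from the meta-analysis quoted above
reduction = 1.0 - rr    # relative reduction in risk with delayed clamping
print(f"about {reduction:.0%} lower relative risk of anemia")  # about 47%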
Negative effects of delayed cord clamping include an increased risk of polycythemia. Still, this condition appeared to be benign in studies. Infants whose cord clamping occurred later than 60 seconds after birth had a higher rate of neonatal jaundice requiring phototherapy.
Delayed clamping is not recommended when the newborn is not breathing well and needs resuscitation. Rather, the recommendation is to immediately clamp and cut the cord and perform cardiopulmonary resuscitation. A pulsating umbilical cord is not a guarantee that the baby is receiving enough oxygen.
Umbilical nonseverance
Some parents choose to omit cord severance entirely, a practice called "lotus birth" or umbilical nonseverance. The entire intact umbilical cord is allowed to dry and separate on its own (typically on the 3rd day after birth), falling off and leaving a healed umbilicus. The Royal College of Obstetricians and Gynaecologists has warned about the risks of infection, as the decomposing placental tissue becomes a nest for infectious bacteria such as Staphylococcus. In one such case, a 20-hour-old baby whose parents had chosen umbilical nonseverance was brought to the hospital in an agonal state, was diagnosed with sepsis, and required antibiotic treatment for six weeks.
Umbilical cord catheterization
As the umbilical vein is directly connected to the central circulation, it can be used as a route for placement of a venous catheter for infusion and medication. The umbilical vein catheter is a reliable alternative to percutaneous peripheral or central venous catheters or intraosseous cannulas, and may be employed in resuscitation or intensive care of the newborn.
Blood sampling
From 24 to 34 weeks of gestation, when the fetus is typically viable, blood can be taken from the cord in order to test for abnormalities (particularly for hereditary conditions). This diagnostic genetic test procedure is known as percutaneous umbilical cord blood sampling.
Storage of cord blood
The blood within the umbilical cord, known as cord blood, is a rich and readily available source of primitive, undifferentiated stem cells (of type CD34-positive and CD38-negative). These cord blood cells can be used for bone marrow transplant.
Some parents choose to have this blood diverted from the baby's umbilical blood transfer through early cord clamping and cutting, to freeze for long-term storage at a cord blood bank should the child ever require the cord blood stem cells (for example, to replace bone marrow destroyed when treating leukemia). This practice is controversial, with critics asserting that early cord blood withdrawal at the time of birth actually increases the likelihood of childhood disease, due to the high volume of blood taken (an average of 108 ml, roughly a third of the baby's typical total supply of 300 ml). The Royal College of Obstetricians and Gynaecologists stated in 2006 that "there is still insufficient evidence to recommend directed commercial cord blood collection and stem-cell storage in low-risk families".
The American Academy of Pediatrics has stated that cord blood banking for self-use should be discouraged (as most conditions requiring the use of stem cells will already exist in the cord blood), while banking for general use should be encouraged. In the future, cord blood-derived embryonic-like stem cells (CBEs) may be banked and matched with other patients, much like blood and transplanted tissues. The use of CBEs could potentially eliminate the ethical difficulties associated with embryonic stem cells (ESCs).
While the American Academy of Pediatrics discourages private banking except in the case of existing medical need, it also says that information about the potential benefits and limitations of cord blood banking and transplantation should be provided so that parents can make an informed decision.
In the United States, cord blood education has been supported by legislators at the federal and state levels. In 2005, the National Academy of Sciences published an Institute of Medicine (IoM) report which recommended that expectant parents be given a balanced perspective on their options for cord blood banking. In response to their constituents, state legislators across the country are introducing legislation intended to help inform physicians and expectant parents on the options for donating, discarding or banking lifesaving newborn stem cells. Currently 17 states, representing two-thirds of U.S. births, have enacted legislation recommended by the IoM guidelines.
The use of cord blood stem cells in treating conditions such as brain injury and type 1 diabetes is already being studied in humans, and earlier-stage research is being conducted for treatments of stroke and hearing loss.
Cord blood stored with private banks is typically reserved for use of the donor child only. In contrast, cord blood stored in public banks is accessible to anyone with a closely matching tissue type and demonstrated need. The use of cord blood from public banks is increasing. Currently it is used in place of a bone marrow transplant in the treatment of blood disorders such as leukemia, with donations released for transplant through one registry, Netcord.org, passing 1,000,000 as of January 2013. Cord blood is used when the patient cannot find a matching bone marrow donor; this "extension" of the donor pool has driven the expansion of public banks.
The umbilical cord in other animals
The umbilical cord in some mammals, including cattle and sheep, contains two distinct umbilical veins. There is only one umbilical vein in the human umbilical cord.
In some animals, the mother will gnaw through the cord, thus separating the placenta from the offspring. The cord along with the placenta is often eaten by the mother, to provide nourishment and to dispose of tissues that would otherwise attract scavengers or predators. In chimpanzees, the mother leaves the cord in place and nurses her young with the cord and placenta attached until the cord dries out and separates naturally, within a day of birth, at which time the cord is discarded. (This was first documented by zoologists in the wild in 1974.)
Some species of shark—hammerheads, requiems and smooth-hounds—are viviparous and have an umbilical cord attached to their placenta.
Other uses for the term "umbilical cord"
The term "umbilical cord" or just "umbilical" has also come to be used for other cords with similar functions, such as the hose connecting surface-supplied divers to their surface supply of air and/or heating, or space-suited astronauts to their spacecraft. Engineers sometimes use the term to describe a complex or critical cable connecting a component, especially when composed of bundles of conductors of different colors, thickness and types, terminating in a single multi-contact disconnect.
Cancer-causing toxicants in human umbilical cords
In multiple American and international studies, cancer-causing chemicals have been found in the blood of umbilical cords. These originate from certain plastics, computer circuit boards, fumes, and synthetic fragrances, among others. Over 300 chemical toxicants have been found, including bisphenol A (BPA), tetrabromobisphenol A (TBBPA), Teflon-related perfluorooctanoic acid, and galaxolide and other synthetic musks. The studies in America showed higher levels in African Americans, Hispanic Americans and Asian Americans due, it is thought, to living in areas of higher pollution.
Additional images
| Biology and health sciences | Animal ontogeny | Biology |
233271 | https://en.wikipedia.org/wiki/Brown%20algae | Brown algae | Brown algae (singular: alga) are a large group of multicellular algae comprising the class Phaeophyceae. They include many seaweeds located in colder waters of the Northern Hemisphere. Brown algae are the major seaweeds of the temperate and polar regions. Many brown algae, such as members of the order Fucales, commonly grow along rocky seashores. Most brown algae live in marine environments, where they play an important role both as food and as a potential habitat. For instance, Macrocystis, a kelp of the order Laminariales, may reach in length and forms prominent underwater kelp forests that contain a high level of biodiversity. Another example is Sargassum, which creates unique floating mats of seaweed in the tropical waters of the Sargasso Sea that serve as the habitats for many species. Some members of the class, such as kelps, are used by humans as food.
Between 1,500 and 2,000 species of brown algae are known worldwide. Some species, such as Ascophyllum nodosum, have become subjects of extensive research in their own right due to their commercial importance. They also have environmental significance through carbon fixation.
Brown algae belong to the Stramenopiles, a clade of eukaryotic organisms that are distinguished from green plants by having chloroplasts surrounded by four membranes, suggesting that they were acquired secondarily from a symbiotic relationship between a basal eukaryote and a red or green alga. Most brown algae contain the pigment fucoxanthin, which is responsible for the distinctive greenish-brown color that gives them their name. Brown algae are unique among Stramenopiles in developing into multicellular forms with differentiated tissues, but they reproduce by means of flagellated spores and gametes that closely resemble cells of single-celled Stramenopiles. Genetic studies show their closest relatives to be the yellow-green algae.
Morphology
Brown algae exist in a wide range of sizes and forms. The smallest members of the group grow as tiny, feathery tufts of threadlike cells no more than a few centimeters long. Some species have a stage in their life cycle that consists of only a few cells, making the entire alga microscopic. Other groups of brown algae grow to much larger sizes. The rockweeds and leathery kelps are often the most conspicuous algae in their habitats. Kelps can range in size from the sea palm Postelsia to the giant kelp Macrocystis pyrifera, which grows to over long and is the largest of all the algae. In form, the brown algae range from small crusts or cushions to leafy free-floating mats formed by species of Sargassum. They may consist of delicate felt-like strands of cells, as in Ectocarpus, or of flattened branches resembling a fan, as in Padina.
Regardless of size or form, two visible features set the Phaeophyceae apart from all other algae. First, members of the group possess a characteristic color that ranges from an olive green to various shades of brown. The particular shade depends upon the amount of fucoxanthin present in the alga. Second, all brown algae are multicellular. There are no known species that exist as single cells or as colonies of cells, and the brown algae are the only major group of seaweeds that does not include such forms. However, this may be the result of classification rather than a consequence of evolution, as all the groups hypothesized to be the closest relatives of the browns include single-celled or colonial forms. They can change color depending on salinity, ranging from reddish to brown.
Visible structures
Whatever their form, the body of all brown algae is termed a thallus, indicating that it lacks the complex xylem and phloem of vascular plants. This does not mean that brown algae completely lack specialized structures. But, because some botanists define "true" stems, leaves, and roots by the presence of these tissues, their absence in the brown algae means that the stem-like and leaf-like structures found in some groups of brown algae must be described using different terminology. Although not all brown algae are structurally complex, those that are typically possess one or more characteristic parts.
A holdfast is a rootlike structure present at the base of the alga. Like a root system in plants, a holdfast serves to anchor the alga in place on the substrate where it grows, and thus prevents the alga from being carried away by the current. Unlike a root system, the holdfast generally does not serve as the primary organ for water uptake, nor does it take in nutrients from the substrate. The overall physical appearance of the holdfast differs among various brown algae and among various substrates. It may be heavily branched, or it may be cup-like in appearance. A single alga typically has just one holdfast, although some species have more than one stipe growing from their holdfast.
A stipe is a stalk or stemlike structure present in an alga. It may grow as a short structure near the base of the alga (as in Laminaria), or it may develop into a large, complex structure running throughout the algal body (as in Sargassum or Macrocystis). In the most structurally differentiated brown algae (such as Fucus), the tissues within the stipe are divided into three distinct layers or regions. These regions include a central pith, a surrounding cortex, and an outer epidermis, each of which has an analog in the stem of a vascular plant. In some brown algae, the pith region includes a core of elongated cells that resemble the phloem of vascular plants both in structure and function. In others (such as Nereocystis), the center of the stipe is hollow and filled with gas that serves to keep that part of the alga buoyant. The stipe may be relatively flexible and elastic in species like Macrocystis pyrifera that grow in strong currents, or may be more rigid in species like Postelsia palmaeformis that are exposed to the atmosphere at low tide.
Many algae have a flattened portion that may resemble a leaf, and this is termed a blade, lamina, or frond. The name blade is most often applied to a single undivided structure, while frond may be applied to all or most of an algal body that is flattened, but this distinction is not universally applied. The name lamina refers to that portion of a structurally differentiated alga that is flattened. It may be a single or a divided structure, and may be spread over a substantial portion of the alga. In rockweeds, for example, the lamina is a broad wing of tissue that runs continuously along both sides of a branched midrib. The midrib and lamina together constitute almost all of a rockweed, so that the lamina is spread throughout the alga rather than existing as a localized portion of it.
In some brown algae, there is a single lamina or blade, while in others there may be many separate blades. Even in those species that initially produce a single blade, the structure may tear with rough currents or as part of maturation to form additional blades. These blades may be attached directly to the stipe, to a holdfast with no stipe present, or there may be an air bladder between the stipe and blade. The surface of the lamina or blade may be smooth or wrinkled; its tissues may be thin and flexible or thick and leathery. In species like Egregia menziesii, this characteristic may change depending upon the turbulence of the waters in which it grows. In other species, the surface of the blade is coated with slime to discourage the attachment of epiphytes or to deter herbivores. Blades are also often the parts of the alga that bear the reproductive structures.
Gas-filled floats called pneumatocysts provide buoyancy in many kelps and members of the Fucales. These bladder-like structures occur in or near the lamina, so that it is held nearer the water surface and thus receives more light for photosynthesis. Pneumatocysts are most often spherical or ellipsoidal, but can vary in shape among different species. Species such as Nereocystis luetkeana and Pelagophycus porra bear a single large pneumatocyst between the top of the stipe and the base of the blades. In contrast, the giant kelp Macrocystis pyrifera bears many blades along its stipe, with a pneumatocyst at the base of each blade where it attaches to the main stipe. Species of Sargassum also bear many blades and pneumatocysts, but both kinds of structures are attached separately to the stipe by short stalks. In species of Fucus, the pneumatocysts develop within the lamina itself, either as discrete spherical bladders or as elongated gas-filled regions that take the outline of the lamina in which they develop.
Growth
The brown algae include the largest and fastest growing of seaweeds. Fronds of Macrocystis may grow as much as per day, and the stipes can grow in a single day.
Growth in most brown algae occurs at the tips of structures as a result of divisions in a single apical cell or in a row of such cells. As this apical cell divides, the new cells that it produces develop into all the tissues of the alga. Branchings and other lateral structures appear when the apical cell divides to produce two new apical cells. However, a few groups (such as Ectocarpus) grow by a diffuse, unlocalized production of new cells that can occur anywhere on the thallus.
Tissue organization
The simplest brown algae are filamentous; that is, their cells are elongate and have septa cutting across their width. They branch when the tip grows wider and then divides.
These filaments may be haplostichous or polystichous, multiaxial or monoaxial, and may or may not form a pseudoparenchyma. Besides such fronds, there are the large parenchymatous kelps, with three-dimensional development and growth and distinct tissues (meristoderm, cortex and medulla), which could be considered the trees of the sea. The Fucales and Dictyotales are smaller than kelps but still parenchymatous, with the same kinds of distinct tissues.
The cell wall consists of two layers; the inner layer bears the strength and consists of cellulose, while the outer layer is mainly algin and is gummy when wet but becomes hard and brittle when it dries out. Specifically, the brown algal cell wall consists of several components, with alginates and sulphated fucans being its main ingredients, each making up as much as 40%. Cellulose, a major component of most plant cell walls, is present in a very small percentage, up to 8%. The cellulose and alginate biosynthesis pathways seem to have been acquired from other organisms through endosymbiotic and horizontal gene transfer, respectively, while the sulphated polysaccharides are of ancestral origin. Specifically, the cellulose synthases seem to come from the red alga endosymbiont of the ancestor of the photosynthetic stramenopiles, and the ancestor of brown algae acquired the key enzymes for alginate biosynthesis from an actinobacterium. The presence and fine control of alginate structure, in combination with the pre-existing cellulose, potentially gave the brown algae the ability to develop structurally complex multicellular organisms like the kelps.
Evolutionary history
Genetic and ultrastructural evidence place the Phaeophyceae among the heterokonts (Stramenopiles), a large assemblage of organisms that includes both photosynthetic members with plastids (such as the diatoms) as well as non-photosynthetic groups (such as the slime nets and water molds). Although some heterokont relatives of the brown algae lack plastids in their cells, scientists believe this is a result of evolutionary loss of that organelle in those groups rather than independent acquisition by the several photosynthetic members. Thus, all heterokonts are believed to descend from a single heterotrophic ancestor that became photosynthetic when it acquired plastids through endosymbiosis of another unicellular eukaryote.
The closest relatives of the brown algae include unicellular and filamentous species, but no unicellular species of brown algae are known. However, most scientists assume that the Phaeophyceae evolved from unicellular ancestors. DNA sequence comparison also suggests that the brown algae evolved from the filamentous Phaeothamniophyceae, Xanthophyceae, or the Chrysophyceae between 150 and 200 million years ago. In many ways, the evolution of the brown algae parallels that of the green algae and red algae, as all three groups possess complex multicellular species with an alternation of generations. Analysis of 5S rRNA sequences reveals much smaller evolutionary distances among genera of the brown algae than among genera of red or green algae, which suggests that the brown algae have diversified much more recently than the other two groups.
Fossils
The occurrence of Phaeophyceae as fossils is rare due to their generally soft-bodied nature, and scientists continue to debate the identification of some finds. Part of the problem with identification lies in the convergent evolution of morphologies between many brown and red algae. Most fossils of soft-tissue algae preserve only a flattened outline, without the microscopic features that permit the major groups of multicellular algae to be reliably distinguished. Among the brown algae, only species of the genus Padina deposit significant quantities of minerals in or around their cell walls. Other algal groups, such as the red algae and green algae, have a number of calcareous members. Because of this, they are more likely to leave evidence in the fossil record than the soft bodies of most brown algae and more often can be precisely classified.
Fossils comparable in morphology to brown algae are known from strata as old as the Upper Ordovician, but the taxonomic affinity of these impression fossils is far from certain. Claims that earlier Ediacaran fossils are brown algae have since been dismissed. While many carbonaceous fossils have been described from the Precambrian, they are typically preserved as flattened outlines or fragments measuring only millimeters long. Because these fossils lack features diagnostic for identification at even the highest level, they are assigned to fossil form taxa according to their shape and other gross morphological features. A number of Devonian fossils termed fucoids, from their resemblance in outline to species in the genus Fucus, have proven to be inorganic rather than true fossils. The Devonian megafossil Prototaxites, which consists of masses of filaments grouped into trunk-like axes, has been considered a possible brown alga. However, modern research favors reinterpretation of this fossil as a terrestrial fungus or fungal-like organism. Likewise, the fossil Protosalvinia was once considered a possible brown alga, but is now thought to be an early land plant.
A number of Paleozoic fossils have been tentatively classified with the brown algae, although most have also been compared to known red algae species. Phascolophyllaphycus possesses numerous elongate, inflated blades attached to a stipe. It is the most abundant of algal fossils found in a collection made from Carboniferous strata in Illinois. Each hollow blade bears up to eight pneumatocysts at its base, and the stipes appear to have been hollow and inflated as well. This combination of characteristics is similar to certain modern genera in the order Laminariales (kelps). Several fossils of Drydenia and a single specimen of Hungerfordia from the Upper Devonian of New York have also been compared to both brown and red algae. Fossils of Drydenia consist of an elliptical blade attached to a branching filamentous holdfast, not unlike some species of Laminaria, Porphyra, or Gigartina. The single known specimen of Hungerfordia branches dichotomously into lobes and resembles genera like Chondrus and Fucus or Dictyota.
The earliest known fossils that can be assigned reliably to the Phaeophyceae come from Miocene diatomite deposits of the Monterey Formation in California. Several soft-bodied brown macroalgae, such as Julescraneia, have been found.
Classification
Phylogeny
Based on the work of Silberfeld, Rousseau & de Reviers 2014.
Taxonomy
This is a list of the orders in the class Phaeophyceae:
Class Phaeophyceae Hansgirg 1886 [Fucophyceae; Melanophycidae Rabenhorst 1863 stat. nov. Cavalier-Smith 2006]
Subclass Discosporangiophycidae Silberfeld, Rousseau & Reviers 2014
Order Discosporangiales Schmidt 1937 emend. Kawai et al. 2007
Family Choristocarpaceae Kjellman 1891
Family Discosporangiaceae Schmidt 1937
Subclass Ishigeophycidae Silberfeld, Rousseau & Reviers 2014
Order Ishigeales Cho & Boo 2004
Family Ishigeaceae Okamura 1935
Family Petrodermataceae Silberfeld, Rousseau & Reviers 2014
Subclass Dictyotophycidae Silberfeld, Rousseau & Reviers 2014
Order Dictyotales Bory de Saint-Vincent 1828 ex Phillips et al.
Family Dictyotaceae Lamouroux ex Dumortier 1822 [Scoresbyellaceae Womersley 1987; Dictyopsidaceae]
Order Onslowiales Draisma & Prud'homme van Reine 2008
Family Onslowiaceae Draisma & Prud'homme van Reine 2001
Order Sphacelariales Migula 1909
Family Cladostephaceae Oltmanns 1922
Family Lithodermataceae Hauck 1883
Family Phaeostrophiaceae Kawai et al. 2005
Family Sphacelariaceae Decaisne 1842
Family Sphacelodermaceae Draisma, Prud'homme & Kawai 2010
Family Stypocaulaceae Oltmanns 1922
Order Syringodermatales Henry 1984
Family Syringodermataceae Henry 1984
Subclass Fucophycidae Cavalier-Smith 1986
Order Ascoseirales Petrov 1964 emend. Moe & Henry 1982
Family Ascoseiraceae Skottsberg 1907
Order Asterocladales T.Silberfeld et al. 2011
Family Asterocladaceae Silberfeld et al. 2011
Order Desmarestiales Setchell & Gardner 1925
Family Arthrocladiaceae Chauvin 1842
Family Desmarestiaceae (Thuret) Kjellman 1880
Order Ectocarpales Bessey 1907 emend. Rousseau & Reviers 1999a [Chordariales Setchell & Gardner 1925; Dictyosiphonales Setchell & Gardner 1925; Scytosiphonales Feldmann 1949]
Family Acinetosporaceae Hamel ex Feldmann 1937 [Pylaiellaceae; Pilayellaceae]
Family Adenocystaceae Rousseau et al. 2000 emend. Silberfeld et al. 2011 [Chordariopsidaceae]
Family Chordariaceae Greville 1830 emend. Peters & Ramírez 2001 [Myrionemataceae]
Family Ectocarpaceae Agardh 1828 emend. Silberfeld et al. 2011
Family Petrospongiaceae Racault et al. 2009
Family Scytosiphonaceae Ardissone & Straforello 1877 [Chnoosporaceae Setchell & Gardner 1925]
Order Fucales Bory de Saint-Vincent 1827 [Notheiales Womersley 1987; Durvillaeales Petrov 1965]
Family Bifurcariopsidaceae Cho et al. 2006
Family Durvillaeaceae (Oltmanns) De Toni 1891
Family Fucaceae Adanson 1763
Family Himanthaliaceae (Kjellman) De Toni 1891
Family Hormosiraceae Fritsch 1945
Family Notheiaceae Schmidt 1938
Family Sargassaceae Kützing 1843 [Cystoseiraceae De Toni 1891]
Family Seirococcaceae Nizamuddin 1987
Family Xiphophoraceae Cho et al. 2006
Order Laminariales Migula 1909 [Phaeosiphoniellales Silberfeld, Rousseau & Reviers 2014 ord. nov. prop.]
Family Agaraceae Postels & Ruprecht 1840 [Costariaceae]
Family Akkesiphycaceae Kawai & Sasaki 2000
Family Alariaceae Setchell & Gardner 1925
Family Aureophycaceae Kawai & Ridgway 2013
Family Chordaceae Dumortier 1822
Family Laminariaceae Bory de Saint-Vincent 1827 [Arthrothamnaceae Petrov 1974]
Family Lessoniaceae Setchell & Gardner 1925
Family Pseudochordaceae Kawai & Kurogi 1985
Order Nemodermatales Parente et al. 2008
Family Nemodermataceae Kuckuck ex Feldmann 1937
Order Phaeosiphoniellales Silberfeld, Rousseau & Reviers 2014
Family Phaeosiphoniellaceae Phillips et al. 2008
Order Ralfsiales Nakamura ex Lim & Kawai 2007
Family Mesosporaceae Tanaka & Chihara 1982
Family Neoralfsiaceae Lim & Kawai 2007
Family Ralfsiaceae Farlow 1881 [Heterochordariaceae Setchell & Gardner 1925]
Order Scytothamnales Peters & Clayton 1998 emend. Silberfeld et al. 2011
Family Asteronemataceae Silberfeld et al. 2011
Family Bachelotiaceae Silberfeld et al. 2011
Family Splachnidiaceae Mitchell & Whitting 1892 [Scytothamnaceae Womersley 1987]
Order Sporochnales Sauvageau 1926
Family Sporochnaceae Greville 1830
Order Tilopteridales Bessey 1907 emend. Phillips et al. 2008 [Cutleriales Bessey 1907]
Family Cutleriaceae Griffith & Henfrey 1856
Family Halosiphonaceae Kawai & Sasaki 2000
Family Phyllariaceae Tilden 1935
Family Stschapoviaceae Kawai 2004
Family Tilopteridaceae Kjellman 1890
Life cycle
Most brown algae, with the exception of the Fucales, perform sexual reproduction through sporic meiosis. Between generations, the algae go through separate sporophyte (diploid) and gametophyte (haploid) phases. The sporophyte stage is often the more visible of the two, though some species of brown algae have similar diploid and haploid phases. Free floating forms of brown algae often do not undergo sexual reproduction until they attach themselves to substrate. The haploid generation consists of male and female gametophytes. The fertilization of egg cells varies between species of brown algae, and may be isogamous, oogamous, or anisogamous. Fertilization may take place in the water with eggs and motile sperm, or within the oogonium itself.
Certain species of brown algae can also reproduce asexually through the production of motile diploid zoospores. These zoospores form in plurilocular sporangia and can mature into the sporophyte phase immediately.
In Laminaria, a representative genus, there is a conspicuous diploid generation and smaller haploid generations. Meiosis takes place within unilocular sporangia along the alga's blade, each one forming either haploid male or female zoospores. The spores are then released from the sporangia and grow to form male and female gametophytes. The female gametophyte produces an egg in the oogonium, and the male gametophyte releases motile sperm that fertilize the egg. The fertilized zygote then grows into the mature diploid sporophyte.
In the order Fucales, sexual reproduction is oogamous, and the mature diploid is the only form for each generation. Gametes are formed in specialized conceptacles that occur scattered on both surfaces of the receptacle, the outer portion of the blades of the parent plant. Egg cells and motile sperm are released from separate sacs within the conceptacles of the parent algae, combining in the water to complete fertilization. The fertilized zygote settles onto a surface and then differentiates into a leafy thallus and a finger-like holdfast. Light regulates differentiation of the zygote into blade and holdfast.
Ecology
Brown algae have adapted to a wide variety of marine ecological niches, including the tidal splash zone, rock pools, the whole intertidal zone and relatively deep near-shore waters. They are an important constituent of some brackish-water ecosystems and have colonized fresh water on at least six known occasions. A large number of Phaeophyceae are intertidal or upper littoral, and they are predominantly cool- and cold-water organisms that benefit from nutrients in upwelling cold water currents and inflows from land; Sargassum is a prominent exception to this generalisation.
Brown algae growing in brackish waters are almost solely asexual.
Chemistry
Brown algae have a δ13C value in the range of −30.0‰ to −10.5‰, in contrast with red and green algae. This reflects their different metabolic pathways.
They have cellulose walls with alginic acid and also contain the polysaccharide fucoidan in the amorphous sections of their cell walls. A few species (of Padina) calcify with aragonite needles.
In addition to alginates, fucoidan and cellulose, the carbohydrate composition of brown algae consists of mannitol, laminarin and glucan.
The photosynthetic system of brown algae is made of a P700 complex containing chlorophyll a. Their plastids also contain chlorophyll c and carotenoids (the most widespread of those being fucoxanthin).
Brown algae produce a specific type of tannin called phlorotannins in higher amounts than red algae do.
Importance and uses
Brown algae include a number of edible seaweeds. All brown algae contain alginic acid (alginate) in their cell walls, which is extracted commercially and used as an industrial thickening agent in food and for other uses. One such use is in lithium-ion batteries, where alginic acid serves as a stable component of the anode. This polysaccharide is a major component of brown algae, and is not found in land plants.
Alginic acid can also be used in aquaculture. For example, alginic acid enhances the immune system of rainbow trout. Younger fish are more likely to survive when given a diet with alginic acid.
Brown algae, including kelp beds, also fix a significant portion of the Earth's carbon dioxide each year through photosynthesis. Additionally, they can store large amounts of carbon dioxide, which may help mitigate climate change.
Sargachromanol G, an extract of Sargassum siliquastrum, has been shown to have anti-inflammatory effects.
Edible brown algae
Kelp (Laminariales)
Arame (Eisenia bicyclis)
Badderlocks (Alaria esculenta)
Cochayuyo (Durvillaea antarctica)
Ecklonia cava
Kombu (Saccharina japonica)
Oarweed (Laminaria digitata)
Sea palm (Postelsia palmaeformis)
Sea whip (Nereocystis luetkeana)
Sugar kelp (Saccharina latissima)
Wakame (Undaria pinnatifida)
Hirome (Undaria undarioides)
Fucales
Bladderwrack (Fucus vesiculosus)
Channelled wrack (Pelvetia canaliculata)
Hijiki or Hiziki (Sargassum fusiforme)
Limu Kala (Sargassum echinocarpum)
Sargassum
Sargassum cinetum
Sargassum vulgare
Sargassum swartzii
Sargassum myriocystum
Spiral wrack (Fucus spiralis)
Thongweed (Himanthalia elongata)
Ectocarpales
Mozuku (Cladosiphon okamuranus)
| Biology and health sciences | Other organisms | null |
233281 | https://en.wikipedia.org/wiki/Cementite | Cementite | Cementite (or iron carbide) is a compound of iron and carbon, more precisely an intermediate transition metal carbide with the formula Fe3C. By weight, it is 6.67% carbon and 93.3% iron. It has an orthorhombic crystal structure. It is a hard, brittle material, normally classified as a ceramic in its pure form, and is a frequently found and important constituent in ferrous metallurgy. While cementite is present in most steels and cast irons, it is produced as a raw material in the iron carbide process, which belongs to the family of alternative ironmaking technologies. The name cementite originated from the theory of Floris Osmond and J. Werth, in which the structure of solidified steel consists of a kind of cellular tissue, with ferrite as the nucleus and Fe3C the envelope of the cells. The carbide therefore cemented the iron.
Metallurgy
In the iron–carbon system (i.e. plain-carbon steels and cast irons) it is a common constituent because ferrite can contain at most 0.02wt% of uncombined carbon. Therefore, in carbon steels and cast irons that are slowly cooled, a portion of the carbon is in the form of cementite. Cementite forms directly from the melt in the case of white cast iron. In carbon steel, cementite precipitates from austenite as austenite transforms to ferrite on slow cooling, or from martensite during tempering. An intimate mixture with ferrite, the other product of austenite, forms a lamellar structure called pearlite.
While cementite is thermodynamically unstable, eventually being converted to austenite (low carbon level) and graphite (high carbon level) at higher temperatures, it does not decompose on heating at temperatures below the eutectoid temperature (723 °C) on the metastable iron-carbon phase diagram.
Mechanical properties are as follows: room-temperature microhardness 760–1350 HV; bending strength 4.6–8 GPa; Young's modulus 160–180 GPa; indentation fracture toughness 1.5–2.7 MPa√m.
The morphology of cementite plays a critical role in the kinetics of phase transformations in steel. The coiling temperature and cooling rate significantly affect cementite formation. At lower coiling temperatures, cementite forms fine pearlitic colonies, whereas at higher temperatures, it precipitates as coarse particles at grain boundaries. This morphological difference influences the rate of austenite formation and decomposition, with fine cementite promoting faster transformations due to its increased surface area and the proximity of the carbide-ferrite interface. Furthermore, the dissolution kinetics of cementite during annealing are slower for coarse carbides, impacting the microstructural evolution during heat treatments.
Pure form
Cementite changes from ferromagnetic to paramagnetic upon heating through its Curie temperature of approximately 480 K (207 °C).
A natural iron carbide (containing minor amounts of nickel and cobalt) occurs in iron meteorites and is called cohenite after the German mineralogist Emil Cohen, who first described it.
Other iron carbides
There are other forms of metastable iron carbide that have been identified in tempered steel and in the industrial Fischer–Tropsch process. These include epsilon (ε) carbide, a hexagonal close-packed Fe2–3C that precipitates in plain-carbon steels with a carbon content above 0.2%, tempered at 100–200 °C. Non-stoichiometric ε-carbide dissolves above ~200 °C, where Hägg carbides and cementite begin to form. Hägg carbide, monoclinic Fe5C2, precipitates in hardened tool steels tempered at 200–300 °C. It has also been found naturally as the mineral edscottite in the Wedderburn meteorite.
| Physical sciences | Ceramic compounds | Chemistry |
233403 | https://en.wikipedia.org/wiki/Siege%20engine | Siege engine | A siege engine is a device that is designed to break or circumvent heavy castle doors, thick city walls and other fortifications in siege warfare. Some are immobile, constructed in place to attack enemy fortifications from a distance, while others have wheels to enable advancing up to the enemy fortification. There are many distinct types, such as siege towers that allow foot soldiers to scale walls and attack the defenders, battering rams that damage walls or gates, and large ranged weapons (such as ballistas, catapults/trebuchets and other similar constructions) that attack from a distance by launching projectiles. Some complex siege engines were combinations of these types.
Siege engines are fairly large constructions – from the size of a small house to a large building. From antiquity up to the development of gunpowder, they were made largely of wood, using rope or leather to help bind them, possibly with a few pieces of metal at key stress points. They could launch simple projectiles using natural materials to build up force by tension, torsion, or, in the case of trebuchets, human power or counterweights coupled with mechanical advantage. With the development of gunpowder and improved metallurgy, bombards and later heavy artillery became the primary siege engines.
Collectively, siege engines or artillery together with the necessary soldiers, sappers, ammunition, and transport vehicles to conduct a siege are referred to as a siege train.
Antiquity
Ancient Assyria through the Roman Empire
The earliest siege engines appear to be simple movable roofed towers used for cover to advance to the defenders' walls in conjunction with scaling ladders, depicted during the Middle Kingdom of Egypt. Advanced siege engines including battering rams were used by Assyrians, followed by the catapult in ancient Greece.
In Kush siege towers as well as battering rams were built from the 8th century BC and employed in Kushite siege warfare, such as the siege of Ashmunein in 715 BC.
The Spartans used battering rams in the siege of Plataea in 429 BC, but it seems that the Greeks limited their use of siege engines to assault ladders, though Peloponnesian forces used something resembling flamethrowers.
The first Mediterranean people to use advanced siege machinery were the Carthaginians, who used siege towers and battering rams against the Greek colonies of Sicily. These engines influenced the ruler of Syracuse, Dionysius I, who developed a catapult in 399 BC.
The first two rulers to make use of siege engines to a large extent were Philip II of Macedon and Alexander the Great. Their large engines spurred an evolution that led to impressive machines, like Demetrius Poliorcetes' Helepolis ("Taker of Cities") of 304 BC, which stood nine stories high and was plated with iron. The most used engines were simple battering rams, or tortoises, propelled in several ingenious ways that allowed the attackers to reach the walls or ditches with a certain degree of safety. For sea sieges or battles, seesaw-like machines (sambykē or sambuca) were used. These were giant ladders, hinged and mounted on a base mechanism, used for transferring marines onto the sea walls of coastal towns. They were normally mounted on two or more ships tied together, and some included shields at the top to protect the climbers from arrows. Other hinged engines were used to catch enemy equipment or even opposing soldiers with grasping appendages that were probably ancestors of the Roman corvus. Other weapons dropped heavy weights on opposing soldiers.
The Romans preferred to assault enemy walls by building earthen ramps (agger) or simply scaling the walls, as in the early siege of the Samnite city of Silvium (306 BC). Soldiers working at the ramps were protected by shelters called vineae, which were arranged to form a long corridor. Convex wicker shields were used to form a screen (plutei, or plute in English) to protect the front of the corridor during construction of the ramp. Another Roman siege engine sometimes used resembled the Greek ditch-filling tortoise of Diades; this gallery (unlike the ram-tortoise of Hegetor of Byzantium), called a musculus ("muscle"), was simply used as cover for sappers to engineer an offensive ditch or earthworks. Battering rams were also widespread. The Roman legions first used siege towers; in the first century BC, Julius Caesar accomplished a siege at Uxellodunum in Gaul using a ten-story siege tower. The Romans were nearly always successful in besieging a city or fort, owing to their persistence, the strength of their forces, their tactics, and their siege engines.
The first documented occurrence of ancient siege engine pieces in Europe was the gastraphetes ("belly-bow"), a kind of large crossbow. These were mounted on wooden frames. Greater machines forced the introduction of a pulley system for loading the projectiles, which came to include stones as well. Later, torsion siege engines appeared, based on sinew springs. The onager was the main Roman invention in the field.
Ancient China
The earliest documented occurrence of ancient siege-artillery pieces in China was the lever-principled traction catapult and a large siege crossbow from the Mozi (Mo Jing), a Mohist text written in about the 4th–3rd century BC by followers of Mozi, who founded the Mohist school of thought during the late Spring and Autumn period and the early Warring States period. Much of what we now know of the siege technology of the time comes from Books 14 and 15 (Chapters 52 to 71) on siege warfare from the Mo Jing. Recorded and preserved on bamboo strips, much of the text is now extremely corrupted. Despite the heavy fragmentation, however, the Mohists' diligence and attention to detail, which set the Mo Jing apart from other works, ensure that highly descriptive details of the workings of mechanical devices such as cloud ladders, rotating arcuballistas and levered catapults, along with records of siege techniques and the use of siege weaponry, can still be found today.
Elephant
Indian, Sri Lankan, Chinese and Southeast Asian kingdoms and empires used war elephants as battering rams.
Middle Ages
Medieval designs include a large number of catapults such as the mangonel, onager, the ballista, the traction trebuchet (first designed in China in the 3rd century BC and brought over to Europe in the 4th century AD), and the counterweight trebuchet (first described by Mardi bin Ali al-Tarsusi in the 12th century, though of unknown origin). These machines used mechanical energy to fling large projectiles to batter down stone walls. Also used were the battering ram and the siege tower, a wooden tower on wheels that allowed attackers to climb up and over castle walls, while protected somewhat from enemy arrows.
A typical military confrontation in medieval times was for one side to lay siege to an opponent's castle. When the castle was properly defended, the attackers had to choose whether to assault it directly, to starve out its occupants by blocking food deliveries, or to employ war machines specifically designed to destroy or circumvent castle defenses. Defending soldiers also used trebuchets and catapults to their advantage.
Other tactics included setting fires against castle walls in an effort to decompose the cement that held together the individual stones so they could be readily knocked over. Another indirect means was the practice of mining, whereby tunnels were dug under the walls to weaken the foundations and destroy them. A third tactic was the catapulting of diseased animals or human corpses over the walls in order to promote disease which would force the defenders to surrender, an early form of biological warfare.
Modern era
With the advent of gunpowder, firearms such as the arquebus and cannon—eventually the petard, mortar and artillery—were developed. These weapons proved so effective that fortifications, such as city walls, had to be low and thick, as exemplified by the designs of Vauban.
The development of specialized siege artillery, as distinct from field artillery, culminated during World War I and World War II. During the First World War, huge siege guns such as Big Bertha were designed for use against the modern fortresses of the day. The apex of siege artillery was reached with the German Schwerer Gustav gun, a huge 80 cm (31 in) calibre railway gun built during early World War II. Schwerer Gustav was initially intended to be used for breaching the French Maginot Line of fortifications, but was not finished in time and (as a sign of the times) the Maginot Line was circumvented by rapid mechanized forces instead of breached in a head-on assault. The long time it took to deploy and move the modern siege guns made them vulnerable to air attack, and it also made them unsuited to the rapid troop movements of modern warfare.
| Technology | Military technology: General | null |
233488 | https://en.wikipedia.org/wiki/Machine%20learning | Machine learning | Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus perform tasks without explicit instructions. Within machine learning, advances in the subdiscipline of deep learning have allowed neural networks, a class of statistical algorithms, to surpass many previous machine learning approaches in performance.
ML finds application in many fields, including natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine. The application of ML to business problems is known as predictive analytics.
Statistics and mathematical optimization (mathematical programming) methods comprise the foundations of machine learning. Data mining is a related field of study, focusing on exploratory data analysis (EDA) via unsupervised learning.
From a theoretical viewpoint, probably approximately correct (PAC) learning provides a framework for describing machine learning.
History
The term machine learning was coined in 1959 by Arthur Samuel, an IBM employee and pioneer in the field of computer gaming and artificial intelligence. The synonym self-teaching computers was also used in this time period.
Although the earliest machine learning model was introduced in the 1950s, when Arthur Samuel invented a program that calculated each side's winning chance in checkers, the history of machine learning is rooted in decades of human effort to study cognitive processes. In 1949, Canadian psychologist Donald Hebb published the book The Organization of Behavior, in which he introduced a theoretical neural structure formed by certain interactions among nerve cells. Hebb's model of neurons interacting with one another laid the groundwork for how AIs and machine learning algorithms work with nodes, the artificial neurons that computers use to communicate data. Other researchers who studied human cognitive systems also contributed to modern machine learning technologies, including the logician Walter Pitts and Warren McCulloch, who proposed early mathematical models of neural networks designed to mirror human thought processes.
By the early 1960s, an experimental "learning machine" with punched tape memory, called Cybertron, had been developed by Raytheon Company to analyse sonar signals, electrocardiograms, and speech patterns using rudimentary reinforcement learning. It was repetitively "trained" by a human operator/teacher to recognize patterns and equipped with a "goof" button to cause it to reevaluate incorrect decisions. A representative book on research into machine learning during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification. Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. In 1981 a report was given on using teaching strategies so that an artificial neural network learns to recognize 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.
Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?".
Modern-day machine learning has two objectives. One is to classify data based on models which have been developed; the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. A machine learning algorithm for stock trading may inform the trader of future potential predictions.
Relationships to other fields
Artificial intelligence
As a scientific endeavor, machine learning grew out of the quest for artificial intelligence (AI). In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostly perceptrons and other models that were later found to be reinventions of the generalized linear models of statistics. Probabilistic reasoning was also employed, especially in automated medical diagnosis.
However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation. By 1980, expert systems had come to dominate AI, and statistics was out of favor. Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming (ILP), but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval. Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines including John Hopfield, David Rumelhart, and Geoffrey Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.
Machine learning (ML), reorganized and recognized as its own field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics, fuzzy logic, and probability theory.
Data compression
Data mining
Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.
Machine learning also has intimate ties to optimization: Many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the preassigned labels of a set of examples).
Generalization
Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms.
Statistics
Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field.
Conventional statistical analyses require the a priori selection of a model most suitable for the study data set. In addition, only significant or theoretically relevant variables based on previous experience are included for analysis. In contrast, machine learning is not built on a pre-structured model; rather, the data shape the model by detecting underlying patterns. The more variables (input) used to train the model, the more accurate the ultimate model will be.
Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model, wherein "algorithmic model" means more or less the machine learning algorithms like Random Forest.
Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning.
Statistical physics
Analytical and computational techniques derived from deep-rooted physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyse the weight space of deep neural networks. Statistical physics is thus finding applications in the area of medical diagnostics.
Theory
A core objective of a learner is to generalize from its experience. Generalization in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases.
The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory via the Probably Approximately Correct Learning (PAC) model. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalization error.
For the best performance in the context of generalization, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has under fitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalization will be poorer.
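As an illustrative sketch of this trade-off (assuming NumPy; the sine target, noise level, and degrees are arbitrary choices), polynomial hypotheses of increasing complexity can be compared on training and held-out data:

```python
# Sketch: compare training vs. held-out error as hypothesis complexity grows.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)               # unknown target function
x_train = np.linspace(0, 1, 20)
x_val = np.linspace(0, 1, 100)
y_train = f(x_train) + rng.normal(0, 0.2, x_train.shape)
y_val = f(x_val) + rng.normal(0, 0.2, x_val.shape)

for degree in (1, 3, 15):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit a polynomial hypothesis
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, val MSE {val_err:.3f}")
# Degree 1 underfits (high error on both sets); degree 15 overfits
# (training error keeps falling while held-out error grows).
```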
In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results: Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time.
Approaches
Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system:
Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).
Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback analogous to rewards, which it tries to maximize.
Although each algorithm has advantages and limitations, no single algorithm works for all problems.
Supervised learning
Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs. The data, known as training data, consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by an array or vector, sometimes called a feature vector, and the training data is represented by a matrix. Through iterative optimization of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs. An optimal function allows the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.
Types of supervised-learning algorithms include active learning, classification and regression. Classification algorithms are used when the outputs are restricted to a limited set of values, and regression algorithms are used when the outputs may have any numerical value within a range. As an example, for a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email. Examples of regression would be predicting the height of a person, or the future temperature.
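A minimal sketch of the two settings, assuming scikit-learn is available (the synthetic datasets and model choices are illustrative, not prescriptive):

```python
# Sketch: classification (finite label set) vs. regression (continuous output).
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split

# Classification: outputs are restricted to the label set {0, 1}.
Xc, yc = make_classification(n_samples=200, n_features=5, random_state=0)
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, random_state=0)
clf = LogisticRegression().fit(Xc_tr, yc_tr)
print("classification accuracy:", clf.score(Xc_te, yc_te))

# Regression: outputs may take any numerical value within a range.
Xr, yr = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, random_state=0)
reg = LinearRegression().fit(Xr_tr, yr_tr)
print("regression R^2:", reg.score(Xr_te, yr_te))
```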
Similarity learning is an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification.
Unsupervised learning
Unsupervised learning algorithms find structures in data that has not been labeled, classified or categorized. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. Central applications of unsupervised machine learning include clustering, dimensionality reduction, and density estimation.
Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods are based on estimated density and graph connectivity.
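A minimal k-means sketch in plain NumPy illustrates the idea; the two Gaussian blobs and all parameter values are hypothetical:

```python
# Sketch: k-means clustering — assign points to the nearest centroid,
# then move each centroid to the mean of its members.
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # similarity metric: squared Euclidean distance to each centroid
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        # update each centroid (keep the old one if its cluster is empty)
        centroids = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                              else centroids[j] for j in range(k)])
    return labels, centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels, centroids = kmeans(X, k=2)
print(centroids)   # one centroid near (0, 0), the other near (3, 3)
```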
A special type of unsupervised learning, called self-supervised learning, involves training a model by generating the supervisory signal from the data itself.
Semi-supervised learning
Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabeled data, when used in conjunction with a small amount of labeled data, can produce a considerable improvement in learning accuracy.
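One common approach is self-training, sketched below under the assumption that scikit-learn is available; the confidence threshold and iteration count are arbitrary illustrative choices:

```python
# Sketch: self-training — a classifier fit on a small labeled set
# pseudo-labels its most confident predictions on unlabeled data, then refits.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, random_state=0)
labeled = np.zeros(len(X), dtype=bool)
labeled[:30] = True                       # only 10% of the data starts labeled
y_work = y.copy()

for _ in range(5):
    clf = LogisticRegression().fit(X[labeled], y_work[labeled])
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.95  # pseudo-label only confident points
    idx = np.flatnonzero(~labeled)[confident]
    y_work[idx] = clf.predict(X[idx])
    labeled[idx] = True

print("final training-set size:", labeled.sum())
```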
In weakly supervised learning, the training labels are noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets.
Reinforcement learning
Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In reinforcement learning, the environment is typically represented as a Markov decision process (MDP). Many reinforcement learning algorithms use dynamic programming techniques. Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent.
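A minimal tabular Q-learning sketch on a hypothetical five-state corridor (all constants are illustrative) shows the reward-driven update at work:

```python
# Sketch: tabular Q-learning on a corridor MDP; reward only at the goal state.
import numpy as np

n_states, n_actions = 5, 2              # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for _ in range(2000):
    s = 0
    while s != 4:                        # state 4 is the terminal goal
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == 4 else 0.0
        # update toward reward plus discounted value of the next state
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned policy: move right in states 0-3
```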
Dimensionality reduction
Dimensionality reduction is a process of reducing the number of random variables under consideration by obtaining a set of principal variables. In other words, it is a process of reducing the dimension of the feature set, also called the "number of features". Most of the dimensionality reduction techniques can be considered as either feature elimination or extraction. One of the popular methods of dimensionality reduction is principal component analysis (PCA). PCA involves changing higher-dimensional data (e.g., 3D) to a smaller space (e.g., 2D).
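A minimal PCA sketch in NumPy, reducing hypothetical 3-D data to its two directions of greatest variance via the singular value decomposition:

```python
# Sketch: PCA — project centered data onto its top principal components.
import numpy as np

rng = np.random.default_rng(0)
# synthetic 3-D data with most variance concentrated in two directions
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.1])

Xc = X - X.mean(axis=0)                 # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T                      # reduce 3 features to 2
explained = (S ** 2) / (S ** 2).sum()
print("variance explained by 2 components:", explained[:2].sum())
```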
The manifold hypothesis proposes that high-dimensional data sets lie along low-dimensional manifolds, and many dimensionality reduction techniques make this assumption, leading to the area of manifold learning and manifold regularization.
Other types
Other approaches have been developed which do not fit neatly into this three-fold categorization, and sometimes more than one is used by the same machine learning system. For example, topic modeling, meta-learning.
Self-learning
Self-learning, as a machine learning paradigm was introduced in 1982 along with a neural network capable of self-learning, named crossbar adaptive array (CAA). It gives a solution to the problem learning without any external reward, by introducing emotion as an internal reward. Emotion is used as state evaluation of a self-learning agent. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations. The system is driven by the interaction between cognition and emotion.
The self-learning algorithm updates a memory matrix W = ||w(a,s)|| such that in each iteration it executes the following machine learning routine:
in situation s perform action a
receive consequence situation s'
compute emotion of being in the consequence situation v(s')
update crossbar memory w'(a,s) = w(a,s) + v(s')
It is a system with only one input (the situation s) and only one output (the action, or behavior, a). There is neither a separate reinforcement input nor an advice input from the environment. The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments, one is the behavioral environment where it behaves, and the other is the genetic environment, wherefrom it initially and only once receives initial emotions about situations to be encountered in the behavioral environment. After receiving the genome (species) vector from the genetic environment, the CAA learns a goal-seeking behavior, in an environment that contains both desirable and undesirable situations.
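The following schematic sketch mirrors the routine above; the toy transition function and the genome vector v of initial emotions are hypothetical stand-ins, not the original CAA system:

```python
# Sketch: the crossbar update w'(a,s) = w(a,s) + v(s') in a toy environment.
import numpy as np

n_situations, n_actions = 4, 2
W = np.zeros((n_actions, n_situations))   # crossbar memory w(a, s)
v = np.array([0.0, 0.0, -1.0, 1.0])       # genome vector: emotions per situation

def transition(s, a):                     # hypothetical behavioral environment
    return (s + 1) % n_situations if a == 1 else (s - 1) % n_situations

s = 0
for _ in range(100):
    a = int(W[:, s].argmax())             # in situation s perform action a
    s_next = transition(s, a)             # receive consequence situation s'
    W[a, s] += v[s_next]                  # w'(a,s) = w(a,s) + v(s')
    s = s_next
print(W)                                  # actions leading to situation 3 dominate
```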
Feature learning
Several learning algorithms aim at discovering better representations of the inputs provided during training. Classic examples include principal component analysis and cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manual feature engineering, and allows a machine to both learn the features and use them to perform a specific task.
Feature learning can be either supervised or unsupervised. In supervised feature learning, features are learned using labeled input data. Examples include artificial neural networks, multilayer perceptrons, and supervised dictionary learning. In unsupervised feature learning, features are learned with unlabeled input data. Examples include dictionary learning, independent component analysis, autoencoders, matrix factorization and various forms of clustering.
Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros. Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors. Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.
Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensory data have not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
Sparse dictionary learning
Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination of basis functions and assumed to be a sparse matrix. The method is strongly NP-hard and difficult to solve approximately. A popular heuristic method for sparse dictionary learning is the k-SVD algorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen training example belongs. For a dictionary where each class has already been built, a new training example is associated with the class that is best sparsely represented by the corresponding dictionary. Sparse dictionary learning has also been applied in image de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.
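A hedged sketch assuming scikit-learn's DictionaryLearning estimator (the random data and all parameters are illustrative choices):

```python
# Sketch: learn a dictionary whose atoms sparsely reconstruct the examples.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))             # 100 examples, 20 features

dl = DictionaryLearning(n_components=15, transform_algorithm="omp",
                        transform_n_nonzero_coefs=3, random_state=0)
codes = dl.fit_transform(X)                # sparse code per training example
print("nonzeros per example:", (codes != 0).sum(axis=1).mean())  # ~3
print("dictionary shape:", dl.components_.shape)                 # (15, 20)
```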
Anomaly detection
In data mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data. Typically, the anomalous items represent an issue such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to as outliers, novelties, noise, deviations and exceptions.
In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects but unexpected bursts of activity. This pattern does not adhere to the common statistical definition of an outlier as a rare object. Many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns.
Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit the least to the remainder of the data set. Supervised anomaly detection techniques require a data set that has been labeled as "normal" and "abnormal" and involves training a classifier (the key difference from many other statistical classification problems is the inherently unbalanced nature of outlier detection). Semi-supervised anomaly detection techniques construct a model representing normal behavior from a given normal training data set and then test the likelihood of a test instance to be generated by the model.
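As an illustrative sketch of the unsupervised case, assuming scikit-learn, an isolation forest flags the points that fit the data least well; the data and contamination rate are hypothetical:

```python
# Sketch: unsupervised anomaly detection with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(200, 2))     # bulk of the data
outliers = rng.uniform(-6, 6, size=(5, 2))   # a few anomalous points
X = np.vstack([normal, outliers])

iso = IsolationForest(contamination=0.03, random_state=0).fit(X)
pred = iso.predict(X)                        # -1 = anomaly, +1 = normal
print("flagged as anomalies:", np.flatnonzero(pred == -1))
```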
Robot learning
Robot learning is inspired by a multitude of machine learning methods, starting from supervised learning, reinforcement learning, and finally meta-learning (e.g. MAML).
Association rules
Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness".
Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems.
Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets. For example, the rule found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
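A minimal sketch of the support and confidence measures behind such rules, computed in plain Python over hypothetical point-of-sale transactions:

```python
# Sketch: support and confidence for the rule {onions, potatoes} -> {burger}.
transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"potatoes", "beer"},
    {"onions", "potatoes"},
]

def support(itemset):
    # fraction of transactions containing every item in the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

antecedent, consequent = {"onions", "potatoes"}, {"burger"}
conf = support(antecedent | consequent) / support(antecedent)
print(f"support={support(antecedent | consequent):.2f}, confidence={conf:.2f}")
# support=0.50, confidence=0.67
```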
Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions.
Inductive logic programming (ILP) is an approach to rule learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such as functional programs.
Inductive logic programming is particularly useful in bioinformatics and natural language processing. Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting (Shapiro, Ehud Y., Inductive Inference of Theories from Facts, Research Report 192, Yale University, Department of Computer Science, 1981; reprinted in J.-L. Lassez and G. Plotkin (eds.), Computational Logic, The MIT Press, Cambridge, MA, 1991, pp. 199–254). Shapiro built the first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples. The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set.
Models
A machine learning model is a type of mathematical model that, once "trained" on a given dataset, can be used to make predictions or classifications on new data. During training, a learning algorithm iteratively adjusts the model's internal parameters to minimize errors in its predictions. By extension, the term "model" can refer to several levels of specificity, from a general class of models and their associated learning algorithms to a fully trained model with all its internal parameters tuned.
Various types of models have been used and researched for machine learning systems; picking the best model for a task is called model selection.
Artificial neural networks
Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules.
An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.
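A minimal NumPy sketch of these mechanics — weighted sums, non-linear activations, and layer-by-layer signal flow; the weights here are random placeholders rather than trained values:

```python
# Sketch: forward pass of a tiny two-layer network. Each neuron emits a
# non-linear function of the weighted sum of its inputs.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # edge weights, input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # edge weights, hidden -> output

def forward(x):
    h = np.tanh(x @ W1 + b1)                    # hidden layer activation
    return 1 / (1 + np.exp(-(h @ W2 + b2)))     # output neuron in (0, 1)

print(forward(np.array([0.5, -1.0, 2.0])))
```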
The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.
Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.
Decision trees
Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels, and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision-making.
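A hedged sketch assuming scikit-learn; the iris dataset and depth limit are illustrative choices:

```python
# Sketch: a classification tree whose branches are feature tests and
# whose leaves carry class labels.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))   # prints the learned branch conditions and leaves
```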
Support-vector machines
Support-vector machines (SVMs), also known as support-vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other. An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
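A hedged sketch of the kernel trick, assuming scikit-learn: an RBF-kernel SVM separates two concentric rings that no linear classifier can split (the dataset parameters are illustrative):

```python
# Sketch: linear vs. RBF-kernel SVM on data that is not linearly separable.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)   # implicit high-dimensional feature map
print("linear accuracy:", linear.score(X, y))   # near chance level
print("RBF accuracy:", rbf.score(X, y))         # near 1.0
```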
Regression analysis
Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularization methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to higher-dimensional space.
Multivariate linear regression extends the concept of linear regression to handle multiple dependent variables simultaneously. This approach estimates the relationships between a set of input variables and several output variables by fitting a multidimensional linear model. It is particularly useful in scenarios where outputs are interdependent or share underlying patterns, such as predicting multiple economic indicators or reconstructing images, which are inherently multi-dimensional.
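A hedged sketch assuming scikit-learn, contrasting ordinary least squares with its ridge-regularized variant on synthetic data:

```python
# Sketch: OLS vs. ridge regression; the penalty shrinks large coefficients
# to mitigate overfitting.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=100, n_features=30, noise=15.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)    # ordinary least squares
ridge = Ridge(alpha=10.0).fit(X_tr, y_tr)   # L2-penalized coefficients
print("OLS   R^2:", ols.score(X_te, y_te))
print("ridge R^2:", ridge.score(X_te, y_te))
```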
Bayesian networks
A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
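A minimal sketch of the disease/symptom example in plain Python; the probabilities are hypothetical placeholders:

```python
# Sketch: inference in a two-node network Disease -> Symptom via Bayes' rule.
p_disease = 0.01                              # P(D = true)
p_sym_given_d = {True: 0.9, False: 0.05}      # P(S = true | D)

# P(D = true | S = true) by enumerating the joint distribution over the DAG
joint_true = p_disease * p_sym_given_d[True]
joint_false = (1 - p_disease) * p_sym_given_d[False]
posterior = joint_true / (joint_true + joint_false)
print(f"P(disease | symptom) = {posterior:.3f}")   # ~0.154
```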
Gaussian processes
A Gaussian process is a stochastic process in which every finite collection of the random variables in the process has a multivariate normal distribution, and it relies on a pre-defined covariance function, or kernel, that models how pairs of points relate to each other depending on their locations.
Given a set of observed points, or input–output examples, the distribution of the (unobserved) output of a new point as a function of its input data can be computed directly by looking at the observed points and the covariances between those points and the new, unobserved point.
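A minimal NumPy sketch of this computation with a squared-exponential (RBF) covariance function; the observed points and length scale are illustrative:

```python
# Sketch: Gaussian-process posterior mean at new inputs from observed points.
import numpy as np

def rbf(a, b, length=1.0):
    # covariance between every pair of 1-D inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

x_obs = np.array([-2.0, 0.0, 1.5])
y_obs = np.sin(x_obs)
x_new = np.linspace(-3, 3, 7)

K = rbf(x_obs, x_obs) + 1e-6 * np.eye(len(x_obs))   # observed covariances
K_star = rbf(x_new, x_obs)                          # new-vs-observed covariances
mean = K_star @ np.linalg.solve(K, y_obs)           # posterior mean
print(mean)
```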
Gaussian processes are popular surrogate models in Bayesian optimization used to do hyperparameter optimization.
Genetic algorithms
A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.
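A minimal NumPy sketch of the idea: mutation and single-point crossover evolve bit-string genotypes toward a simple fitness function (all rates and sizes are arbitrary illustrative choices):

```python
# Sketch: a genetic algorithm maximizing the number of ones in a bit string.
import numpy as np

rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(30, 20))          # 30 genotypes, 20 bits each

for _ in range(50):
    fitness = pop.sum(axis=1)                    # fitness: count of ones
    parents = pop[np.argsort(fitness)[-10:]]     # select the fittest
    mothers = parents[rng.integers(10, size=30)]
    fathers = parents[rng.integers(10, size=30)]
    cut = rng.integers(1, 20, size=30)
    mask = np.arange(20)[None, :] < cut[:, None]
    pop = np.where(mask, mothers, fathers)       # single-point crossover
    flip = rng.random(pop.shape) < 0.01          # mutation
    pop = np.where(flip, 1 - pop, pop)

print("best fitness:", pop.sum(axis=1).max())    # approaches 20
```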
Belief functions
The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory, is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories. These theoretical frameworks can be thought of as a kind of learner and have some analogous properties of how evidence is combined (e.g., Dempster's rule of combination), just as a pmf-based Bayesian approach would combine probabilities. However, there are many caveats to these belief functions when compared to Bayesian approaches in order to incorporate ignorance and uncertainty quantification. Belief function approaches implemented within the machine learning domain typically leverage a fusion of various ensemble methods to better handle the learner's decision boundary, low samples, and ambiguous class issues that standard machine learning approaches tend to have difficulty resolving. However, the computational complexity of these algorithms depends on the number of propositions (classes), and can lead to much higher computation times than other machine learning approaches.
Training models
Typically, machine learning models require a high quantity of reliable data to perform accurate predictions. When training a machine learning model, machine learning engineers need to target and collect a large and representative sample of data. Data from the training set can be as varied as a corpus of text, a collection of images, sensor data, and data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased or non-evaluated data can result in skewed or undesired predictions. Biased models may result in detrimental outcomes, thereby furthering the negative impacts on society or objectives. Algorithmic bias is a potential result of data not being fully prepared for training. Machine learning ethics is becoming a field of study and notably, becoming integrated within machine learning engineering teams.
Federated learning
Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralizes the training process, allowing users' privacy to be maintained by not needing to send their data to a centralized server. This also increases efficiency by decentralizing the training process to many devices. For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.
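A schematic federated-averaging sketch in NumPy (not Gboard's actual protocol): each client fits a linear model locally, and only model weights, never raw data, reach the server, which averages them:

```python
# Sketch: federated averaging — local gradient steps, server-side weight mean.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):                               # four clients, private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(0, 0.1, 50)))

global_w = np.zeros(2)
for _ in range(5):                               # communication rounds
    local = []
    for X, y in clients:                         # training stays on-device
        w = global_w.copy()
        for _ in range(20):                      # local gradient-descent steps
            w -= 0.05 * (2 / len(X)) * X.T @ (X @ w - y)
        local.append(w)
    global_w = np.mean(local, axis=0)            # server averages the weights
print(global_w)                                  # approaches [2, -1]
```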
Applications
There are many applications for machine learning, including:
Agriculture
Anatomy
Adaptive website
Affective computing
Astronomy
Automated decision-making
Banking
Behaviorism
Bioinformatics
Brain–machine interfaces
Cheminformatics
Citizen Science
Climate Science
Computer networks
Computer vision
Credit-card fraud detection
Data quality
DNA sequence classification
Economics
Financial market analysis
General game playing
Handwriting recognition
Healthcare
Information retrieval
Insurance
Internet fraud detection
Knowledge graph embedding
Linguistics
Machine learning control
Machine perception
Machine translation
Marketing
Medical diagnosis
Natural language processing
Natural language understanding
Online advertising
Optimization
Recommender systems
Robot locomotion
Search engines
Sentiment analysis
Sequence mining
Software engineering
Speech recognition
Structural health monitoring
Syntactic pattern recognition
Telecommunications
Theorem proving
Time-series forecasting
Tomographic reconstruction
User behavior analytics
In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million. Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly. In 2010, The Wall Street Journal wrote about the firm Rebellion Research and its use of machine learning to predict the financial crisis. In 2012, co-founder of Sun Microsystems Vinod Khosla predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software. In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognized influences among artists. In 2019, Springer Nature published the first research book created using machine learning. In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19. Machine learning has recently been applied to predict the pro-environmental behavior of travelers. Recently, machine learning technology was also applied to optimize smartphone performance and thermal behavior based on the user's interaction with the phone. When applied correctly, machine learning algorithms (MLAs) can utilize a wide range of company characteristics to predict stock returns without overfitting. By employing effective feature engineering and combining forecasts, MLAs can generate results that far surpass those obtained from basic linear techniques like ordinary least squares (OLS).
Recent advancements in machine learning have extended into the field of quantum chemistry, where novel algorithms now enable the prediction of solvent effects on chemical reactions, thereby offering new tools for chemists to tailor experimental conditions for optimal outcomes.
Machine learning is becoming a useful tool to investigate and predict evacuation decision-making in large-scale and small-scale disasters. Different solutions have been tested to predict whether and when householders decide to evacuate during wildfires and hurricanes. Other applications have focused on pre-evacuation decisions in building fires.
Limitations
Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results. Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.
The "black box theory" poses another yet significant challenge. Black box refers to a situation where the algorithm or the process of producing an output is entirely opaque, meaning that even the coders of the algorithm cannot audit the pattern that the machine extracted out of the data. The House of Lords Select Committee, which claimed that such an "intelligence system" that could have a "substantial impact on an individual's life" would not be considered acceptable unless it provided "a full and satisfactory explanation for the decisions" it makes.
In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision. Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested. Microsoft's Bing Chat chatbot has been reported to produce hostile and offensive responses against its users.
Machine learning has been used as a strategy to update the evidence related to a systematic review and to manage the increased reviewer burden related to the growth of biomedical literature. While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the research findings themselves.
Explainability
Explainable AI (XAI), or Interpretable AI, or Explainable Machine Learning (XML), is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI. It contrasts with the "black box" concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision. By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. XAI may be an implementation of the social right to explanation.
Overfitting
Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data but penalizing the theory in accordance with how complex the theory is.
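A minimal sketch of that trade-off (the synthetic data, the use of polynomial degree as the complexity measure, and the penalty weight lam are all illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 30)
    y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=30)

    def penalized_score(degree, lam=0.05):
        # Reward goodness of fit, penalize model complexity (here: degree).
        coeffs = np.polyfit(x, y, degree)
        residual = np.mean((np.polyval(coeffs, x) - y) ** 2)
        return residual + lam * degree

    best = min(range(1, 10), key=penalized_score)
    print(best)  # a moderate degree wins; very high degrees are penalized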
Other limitations and vulnerabilities
Learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers often do not primarily make judgments from the spatial relationship between components of the picture, and they learn relationships between pixels that humans are oblivious to, but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in "adversarial" images that the system misclassifies.
Adversarial vulnerabilities can also arise in nonlinear systems, or from non-pattern perturbations. For some systems, it is possible to change the output by changing only a single adversarially chosen pixel. Machine learning models are often vulnerable to manipulation or evasion via adversarial machine learning.
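A minimal sketch of a single-direction version of this on a toy linear classifier (the fixed weights stand in for a trained model, and the step size eps is an illustrative assumption):

    import numpy as np

    # Toy linear classifier with fixed (assumed pre-trained) weights.
    w = np.array([1.0, -2.0, 0.5])
    b = 0.1

    def predict(x):
        return 1 if x @ w + b > 0 else 0

    x = np.array([0.9, 0.1, 0.2])
    print(predict(x))            # classified as 1

    # A small step against the sign of each weight lowers the score,
    # flipping the classification while changing the input only slightly.
    eps = 0.4
    x_adv = x - eps * np.sign(w)
    print(predict(x_adv))        # classified as 0 in this example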
Researchers have demonstrated how backdoors can be placed undetectably into machine learning classifiers (e.g., models that classify posts into categories such as "spam" and "not spam"), which are often developed or trained by third parties. Such parties can change the classification of any input, including in cases for which a type of data/software transparency is provided, possibly including white-box access.
Model assessments
Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training and a test set (conventionally a 2/3 training set and 1/3 test set designation) and evaluates the performance of the trained model on the test set. In comparison, the K-fold cross-validation method randomly partitions the data into K subsets, and then K experiments are performed, each respectively using one subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.
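A minimal sketch of K-fold cross-validation (the nearest-centroid classifier and the synthetic two-cluster data are illustrative assumptions):

    import numpy as np

    def fit_centroids(X, y):
        # Nearest-centroid classifier: one mean vector per class.
        return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def predict(centroids, X):
        classes = list(centroids)
        d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
        return np.array(classes)[d.argmin(axis=0)]

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    k = 5
    folds = np.array_split(rng.permutation(len(y)), k)
    scores = []
    for fold in folds:
        held_out = np.zeros(len(y), dtype=bool)
        held_out[fold] = True
        model = fit_centroids(X[~held_out], y[~held_out])  # train on K-1 folds
        scores.append(np.mean(predict(model, X[held_out]) == y[held_out]))
    print(np.mean(scores))  # average accuracy over the K held-out folds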
In addition to overall accuracy, investigators frequently report sensitivity and specificity, meaning the true positive rate (TPR) and the true negative rate (TNR) respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The receiver operating characteristic (ROC) curve, along with the accompanying area under the ROC curve (AUC), offers additional tools for classification model assessment. A higher AUC is associated with a better-performing model.
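The following sketch makes the numerators and denominators of these rates explicit, and computes AUC via the rank-based identity that it equals the probability a randomly chosen positive outscores a randomly chosen negative (the labels and scores are invented for illustration):

    import numpy as np

    def rates(y_true, y_pred):
        # Sensitivity (TPR) and specificity (TNR) with explicit counts.
        tp = np.sum((y_pred == 1) & (y_true == 1))
        tn = np.sum((y_pred == 0) & (y_true == 0))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        return tp / (tp + fn), tn / (tn + fp)

    def auc(y_true, scores):
        # Probability that a random positive outscores a random negative,
        # counting ties as half.
        pos, neg = scores[y_true == 1], scores[y_true == 0]
        wins = (pos[:, None] > neg[None, :]).mean()
        ties = (pos[:, None] == neg[None, :]).mean()
        return wins + 0.5 * ties

    y = np.array([0, 0, 1, 1, 1])
    s = np.array([0.1, 0.4, 0.35, 0.8, 0.9])
    print(rates(y, (s > 0.5).astype(int)), auc(y, s))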
Ethics
Bias
Different machine learning approaches can suffer from different data biases. A machine learning system trained specifically on current customers may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on human-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society.
Systems that are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitizing cultural prejudices. For example, in 1988, the UK's Commission for Racial Equality found that St. George's Medical School had been using a computer program trained from data of previous admissions staff and that this program had denied nearly 60 candidates who were found to either be women or have non-European sounding names. Using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants. Another example includes predictive policing company Geolitica's predictive algorithm that resulted in "disproportionately high levels of over-policing in low-income and minority communities" after being trained with historical crime data.
While responsible collection of data and documentation of algorithmic rules used by a system is considered a critical part of machine learning, some researchers blame lack of participation and representation of minority population in the field of AI for machine learning's vulnerability to biases. In fact, according to research carried out by the Computing Research Association (CRA) in 2021, "female faculty merely make up 16.1%" of all faculty members who focus on AI among several universities around the world. Furthermore, among the group of "new U.S. resident AI PhD graduates," 45% identified as white, 22.4% as Asian, 3.2% as Hispanic, and 2.4% as African American, which further demonstrates a lack of diversity in the field of AI.
Language models learned from data have been shown to contain human-like biases. Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases. In 2016, Microsoft tested Tay, a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.
In an experiment carried out by ProPublica, an investigative journalism organization, a machine learning algorithm used to predict recidivism rates among prisoners falsely flagged "black defendants high risk twice as often as white defendants." In 2015, Google Photos tagged a couple of black people as gorillas, which caused controversy. The gorilla label was subsequently removed, and as of 2023 the service still could not recognize gorillas. Similar issues with recognizing non-white people have been found in many other systems.
Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains. Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who said that "[t]here's nothing artificial about AI. It's inspired by people, it's created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."
Financial incentives
There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States where there is a long-standing ethical dilemma of improving health care, but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is potential for machine learning in health care to provide professionals an additional tool to diagnose, medicate, and plan recovery paths for patients, but this requires these biases to be mitigated.
Hardware
Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of nonlinear hidden units. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI. OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.
Neuromorphic computing
Neuromorphic computing refers to a class of computing systems designed to emulate the structure and functionality of biological neural networks. These systems may be implemented through software-based simulations on conventional hardware or through specialized hardware architectures.
Physical neural networks
A physical neural network is a specific type of neuromorphic hardware that relies on electrically adjustable materials, such as memristors, to emulate the function of neural synapses. The term "physical neural network" highlights the use of physical hardware for computation, as opposed to software-based implementations. It broadly refers to artificial neural networks that use materials with adjustable resistance to replicate neural synapses.
Embedded machine learning
Embedded machine learning is a sub-field of machine learning where models are deployed on embedded systems with limited computing resources, such as wearable computers, edge devices and microcontrollers. Running models directly on these devices eliminates the need to transfer and store data on cloud servers for further processing, thereby reducing the risk of data breaches, privacy leaks and theft of intellectual property, personal data and business secrets. Embedded machine learning can be achieved through various techniques, such as hardware acceleration, approximate computing, and model optimization. Common optimization techniques include pruning, quantization, knowledge distillation, low-rank factorization, network architecture search, and parameter sharing.
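As one concrete model-optimization example, here is a minimal sketch of post-training 8-bit quantization (the random weights and the simple min-max calibration are illustrative assumptions; production schemes are more sophisticated):

    import numpy as np

    def quantize(weights):
        # Affine 8-bit quantization: store weights as uint8 plus a scale
        # and offset, using a quarter of the memory of float32.
        scale = (weights.max() - weights.min()) / 255.0
        zero = weights.min()
        q = np.round((weights - zero) / scale).astype(np.uint8)
        return q, scale, zero

    def dequantize(q, scale, zero):
        return q.astype(np.float32) * scale + zero

    w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
    q, scale, zero = quantize(w)
    print(np.abs(dequantize(q, scale, zero) - w).max())  # small rounding error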
Software
Software suites containing a variety of machine learning algorithms include the following:
Free and open-source software
Caffe
Deeplearning4j
DeepSpeed
ELKI
Google JAX
Infer.NET
Keras
Kubeflow
LightGBM
Mahout
Mallet
Microsoft Cognitive Toolkit
ML.NET
mlpack
MXNet
OpenNN
Orange
pandas (software)
ROOT (TMVA with ROOT)
scikit-learn
Shogun
Spark MLlib
SystemML
TensorFlow
Torch / PyTorch
Weka / MOA
XGBoost
Yooreeka
Proprietary software with free and open-source editions
KNIME
RapidMiner
Proprietary software
Amazon Machine Learning
Angoss KnowledgeSTUDIO
Azure Machine Learning
IBM Watson Studio
Google Cloud Vertex AI
Google Prediction API
IBM SPSS Modeler
KXEN Modeler
LIONsolver
Mathematica
MATLAB
Neural Designer
NeuroSolutions
Oracle Data Mining
Oracle AI Platform Cloud Service
PolyAnalyst
RCASE
SAS Enterprise Miner
SequenceL
Splunk
STATISTICA Data Miner
Journals
Journal of Machine Learning Research
Machine Learning
Nature Machine Intelligence
Neural Computation
IEEE Transactions on Pattern Analysis and Machine Intelligence
Conferences
AAAI Conference on Artificial Intelligence
Association for Computational Linguistics (ACL)
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)
International Conference on Computational Intelligence Methods for Bioinformatics and Biostatistics (CIBB)
International Conference on Machine Learning (ICML)
International Conference on Learning Representations (ICLR)
International Conference on Intelligent Robots and Systems (IROS)
Conference on Knowledge Discovery and Data Mining (KDD)
Conference on Neural Information Processing Systems (NeurIPS)
| Technology | Artificial intelligence concepts | null |
233528 | https://en.wikipedia.org/wiki/Brainstem | Brainstem | The brainstem (or brain stem) is the posterior stalk-like part of the brain that connects the cerebrum with the spinal cord. In the human brain the brainstem is composed of the midbrain, the pons, and the medulla oblongata. The midbrain is continuous with the thalamus of the diencephalon through the tentorial notch, and sometimes the diencephalon is included in the brainstem.
The brainstem is very small, making up around only 2.6 percent of the brain's total weight. It has the critical roles of regulating heart and respiratory function, helping to control heart rate and breathing rate. It also provides the main motor and sensory nerve supply to the face and neck via the cranial nerves. Ten pairs of cranial nerves come from the brainstem. Other roles include the regulation of the central nervous system and the body's sleep cycle. It is also of prime importance in the conveyance of motor and sensory pathways from the rest of the brain to the body, and from the body back to the brain. These pathways include the corticospinal tract (motor function), the dorsal column-medial lemniscus pathway (fine touch, vibration sensation, and proprioception), and the spinothalamic tract (pain, temperature, itch, and crude touch).
Structure
The parts of the brainstem are the midbrain, the pons, and the medulla oblongata; the diencephalon is sometimes considered part of the brainstem.
The brainstem extends from just above the tentorial notch superiorly to the first cervical vertebra below the foramen magnum inferiorly.
Midbrain
The midbrain is further subdivided into three parts: tectum, tegmentum, and the ventral tegmental area. The tectum forms the ceiling. The tectum comprises the paired structure of the superior and inferior colliculi and is the dorsal covering of the cerebral aqueduct. The inferior colliculus is the principal midbrain nucleus of the auditory pathway and receives input from several peripheral brainstem nuclei, as well as inputs from the auditory cortex. Its inferior brachium (arm-like process) reaches to the medial geniculate nucleus of the diencephalon. The superior colliculus is positioned above the inferior colliculus, and marks the rostral midbrain. It is involved in the special sense of vision and sends its superior brachium to the lateral geniculate body of the diencephalon.
The tegmentum which forms the floor of the midbrain, is ventral to the cerebral aqueduct. Several nuclei, tracts, and the reticular formation are contained here.
The ventral tegmental area (VTA) is composed of paired cerebral peduncles. These transmit axons of upper motor neurons.
Midbrain nuclei
The midbrain consists of:
Periaqueductal gray: The gray matter around the cerebral aqueduct contains neurons involved in the pain desensitization pathway. Neurons synapse here. When stimulated by a signal, the synaptic connections activate neurons in the nucleus raphe magnus. The pathway then projects down into the posterior grey column of the spinal cord, inhibiting pain sensation transmission.
Oculomotor nerve nucleus: This is the third cranial nerve nucleus.
Trochlear nerve nucleus: This is the fourth cranial nerve nucleus.
Red nucleus: This is a motor nucleus that sends a descending tract to the lower motor neurons.
Substantia nigra pars compacta: This is a concentration of neurons in the ventral portion of the midbrain that uses dopamine as its neurotransmitter and is involved in both motor function and emotion. Its dysfunction is implicated in Parkinson's disease.
Reticular formation: This is a large area in the midbrain that is involved in various important functions of the midbrain. In particular, it contains lower motor neurons, is involved in the pain desensitization pathway, is involved in the arousal and consciousness systems, and contains the locus coeruleus, which is involved in intensive alertness modulation and in autonomic reflexes.
Central tegmental tract: Directly anterior to the floor of the fourth ventricle, this is a pathway by which many tracts project up to the cortex and down to the spinal cord.
Ventral tegmental area: A dopaminergic nucleus, known as group A10 cells is located close to the midline on the floor of the midbrain.
Rostromedial tegmental nucleus: A GABAergic nucleus located adjacent to the ventral tegmental area.
Pons
The pons lies between the midbrain and the medulla oblongata. It is separated from the midbrain by the superior pontine sulcus, and from the medulla by the inferior pontine sulcus. It contains tracts that carry signals from the cerebrum to the medulla and to the cerebellum and also tracts that carry sensory signals to the thalamus. The pons is connected to the cerebellum by the cerebellar peduncles. The pons houses the respiratory pneumotaxic center and apneustic center that make up the pontine respiratory group in the respiratory center. The pons co-ordinates activities of the cerebellar hemispheres.
The pons and medulla oblongata are parts of the hindbrain that form much of the brainstem.
Medulla oblongata
The medulla oblongata, often just referred to as the medulla, is the lower half of the brainstem continuous with the spinal cord. Its upper part is continuous with the pons. The medulla contains the cardiac, dorsal and ventral respiratory groups, and vasomotor centres, dealing with heart rate, breathing and blood pressure. Another important medullary structure is the area postrema whose functions include the control of vomiting.
Pontomedullary junction
The pons meets the medulla at the pontomedullary junction. This region is supplied by the vertebral arteries, which join here to form the basilar artery. The posterior inferior cerebellar artery also contributes, and a large number of perforating arteries arise from these vessels. Lateral spinal arteries also emerge to supply the posterior surface of the medulla oblongata.
Appearance
From the front
In the medial part of the medulla is the anterior median fissure. Moving laterally on each side are the medullary pyramids. The pyramids contain the fibers of the corticospinal tract (also called the pyramidal tract), or the upper motor neuronal axons as they head inferiorly to synapse on lower motor neuronal cell bodies within the anterior grey column of the spinal cord.
The anterolateral sulcus is lateral to the pyramids. Emerging from the anterolateral sulci are the CN XII (hypoglossal nerve) rootlets. Lateral to these rootlets and the anterolateral sulci are the olives. The olives are swellings in the medulla containing the underlying inferior olivary nuclei (containing various nuclei and afferent fibers). Lateral (and dorsal) to the olives are the rootlets for CN IX (glossopharyngeal), CN X (vagus) and CN XI (accessory nerve). The pyramids end at the pontomedullary junction, noted most obviously by the large basal pons. From this junction, CN VI (abducens nerve), CN VII (facial nerve) and CN VIII (vestibulocochlear nerve) emerge. At the level of the midpons, CN V (the trigeminal nerve) emerges. Cranial nerve III (the oculomotor nerve) emerges ventrally from the midbrain, while CN IV (the trochlear nerve) emerges from the dorsal aspect of the midbrain.
Between the two pyramids can be seen a decussation of fibers which marks the transition from the medulla to the spinal cord. The medulla is above the decussation and the spinal cord below.
From behind
The most medial part of the medulla is the posterior median sulcus. Moving laterally on each side is the gracile fasciculus, and lateral to that is the cuneate fasciculus. Superior to each of these, and directly inferior to the obex, are the gracile and cuneate tubercles, respectively. Underlying these are their respective nuclei. The obex marks the end of the fourth ventricle and the beginning of the central canal. The posterior intermediate sulcus separates the gracile fasciculus from the cuneate fasciculus. Lateral to the cuneate fasciculus is the lateral funiculus.
Superior to the obex is the floor of the fourth ventricle. In the floor of the fourth ventricle, various nuclei can be visualized by the small bumps that they make in the overlying tissue. In the midline and directly superior to the obex is the vagal trigone, and superior to that is the hypoglossal trigone. Underlying each of these are motor nuclei for the respective cranial nerves. Superior to these trigones are fibers running laterally in both directions. These fibers are known collectively as the striae medullares. Continuing in a rostral direction, the large bumps are called the facial colliculi. Each facial colliculus, contrary to its name, does not contain the facial nerve nucleus. Instead, it has facial nerve axons traversing superficial to the underlying abducens (CN VI) nucleus. Lateral to all these bumps previously discussed is an indented line, or sulcus, that runs rostrally and is known as the sulcus limitans. This separates the medial motor neurons from the lateral sensory neurons. Lateral to the sulcus limitans is the area of the vestibular system, which is involved in special sensation. Moving rostrally, the inferior, middle, and superior cerebellar peduncles are found connecting the brainstem to the cerebellum. Directly rostral to the superior cerebellar peduncle are the superior medullary velum and then the two trochlear nerves. This marks the end of the pons, as the inferior colliculus is directly rostral and marks the caudal midbrain. The middle cerebellar peduncle is located inferior and lateral to the superior cerebellar peduncle, connecting the pons to the cerebellum. Likewise, the inferior cerebellar peduncle connects the medulla oblongata to the cerebellum.
Blood supply
The main supply of blood to the brainstem is provided by the basilar artery and the vertebral arteries. There is some variability in how these arteries connect and supply blood to the brain, such as where the arteries fuse or are reinforced. Because of this variability, distinct syndromes can arise when particular vessels are blocked or absent from their usual positions. Syndromes can occur in fragments or in combinations, depending on how the vessels are arranged and whether the brain is receiving an adequate blood supply.
Development
The human brainstem emerges from two of the three primary brain vesicles formed of the neural tube. The mesencephalon is the second of the three primary vesicles, and does not further differentiate into a secondary brain vesicle. This will become the midbrain. The third primary vesicle, the rhombencephalon (hindbrain) will further differentiate into two secondary vesicles, the metencephalon and the myelencephalon. The metencephalon will become the cerebellum and the pons. The more caudal myelencephalon will become the medulla.
Function
The brainstem plays important roles in breathing, heart rate, arousal/consciousness, sleep/wake functions and attention/concentration.
There are three main functions of the brainstem:
The brainstem plays a role in conduction. That is, all information relayed from the body to the cerebrum and cerebellum and vice versa must traverse the brainstem. The ascending pathways coming from the body to the brain are the sensory pathways and include the spinothalamic tract for pain and temperature sensation and the dorsal column-medial lemniscus pathway (DCML) including the gracile fasciculus and the cuneate fasciculus for touch, proprioception, and pressure sensation. The facial sensations have similar pathways and will travel in the spinothalamic tract and the DCML. Descending tracts are the axons of upper motor neurons destined to synapse on lower motor neurons in the ventral horn and posterior horn. In addition, there are upper motor neurons that originate in the brainstem's vestibular, red, tectal, and reticular nuclei, which also descend and synapse in the spinal cord.
The cranial nerves III-XII emerge from the brainstem. These cranial nerves supply the face, head, and viscera. (The first two pairs of cranial nerves arise from the cerebrum).
The brainstem has integrative functions being involved in cardiovascular system control, respiratory control, pain sensitivity control, alertness, awareness, and consciousness. Thus, brainstem damage is a very serious and often life-threatening problem.
Cranial nerves
Ten of the twelve pairs of cranial nerves either target or are sourced from the brainstem nuclei. The nuclei of the oculomotor nerve (III) and trochlear nerve (IV) are located in the midbrain. The nuclei of the trigeminal nerve (V), abducens nerve (VI), facial nerve (VII) and vestibulocochlear nerve (VIII) are located in the pons. The nuclei of the glossopharyngeal nerve (IX), vagus nerve (X), accessory nerve (XI) and hypoglossal nerve (XII) are located in the medulla. The fibers of these cranial nerves exit the brainstem from these nuclei.
Clinical significance
Diseases of the brainstem can result in abnormalities in the function of cranial nerves that may lead to visual disturbances, pupil abnormalities, changes in sensation, muscle weakness, hearing problems, vertigo, swallowing and speech difficulty, voice change, and co-ordination problems. Localizing neurological lesions in the brainstem may be very precise, although it relies on a clear understanding on the functions of brainstem anatomical structures and how to test them.
Brainstem stroke syndrome can cause a range of impairments including locked-in syndrome.
Duret haemorrhages are areas of bleeding in the midbrain and upper pons due to a downward traumatic displacement of the brainstem.
Cysts known as syrinxes can affect the brainstem in a condition called syringobulbia. These fluid-filled cavities can be congenital, acquired or the result of a tumor.
Criteria for claiming brainstem death in the UK have been developed in order to make the decision of when to stop ventilation of somebody who could not otherwise sustain life. The determining factors are that the patient is irreversibly unconscious and incapable of breathing unaided. All other possible causes that might otherwise indicate a temporary condition must be ruled out. The state of irreversible brain damage has to be unequivocal. There are brainstem reflexes that are checked for by two senior doctors, so that imaging technology is unnecessary. The absence of the cough and gag reflexes, of the corneal reflex and of the vestibulo-ocular reflex needs to be established; the pupils of the eyes must be fixed and dilated; there must be an absence of motor response to stimulation and an absence of breathing despite raised concentrations of carbon dioxide in the arterial blood. All of these tests must be repeated after a certain time before death can be declared.
Additional images
| Biology and health sciences | Nervous system | null |
233529 | https://en.wikipedia.org/wiki/Aftershock | Aftershock | In seismology, an aftershock is a smaller earthquake that follows a larger earthquake, in the same area of the main shock, caused as the displaced crust adjusts to the effects of the main shock. Large earthquakes can have hundreds to thousands of instrumentally detectable aftershocks, which steadily decrease in magnitude and frequency according to a consistent pattern. In some earthquakes the main rupture happens in two or more steps, resulting in multiple main shocks. These are known as doublet earthquakes, and in general can be distinguished from aftershocks in having similar magnitudes and nearly identical seismic waveforms.
Distribution of aftershocks
Most aftershocks are located over the full area of fault rupture and either occur along the fault plane itself or along other faults within the volume affected by the strain associated with the main shock. Typically, aftershocks are found up to a distance equal to the rupture length away from the fault plane.
The pattern of aftershocks helps confirm the size of area that slipped during the main shock. In both the 2004 Indian Ocean earthquake and the 2008 Sichuan earthquake, the aftershock distribution in each case showed that the epicenter (where the rupture initiated) lay to one end of the final area of slip, implying strongly asymmetric rupture propagation.
Aftershock size and frequency with time
Aftershocks rates and magnitudes follow several well-established empirical laws.
Omori's law
The frequency of aftershocks decreases roughly with the reciprocal of time after the main shock. This empirical relation was first described by Fusakichi Omori in 1894 and is known as Omori's law. It is expressed as

n(t) = k / (c + t)

where n(t) is the rate of aftershocks at time t after the main shock, and k and c are constants, which vary between earthquake sequences. A modified version of Omori's law, now commonly used, was proposed by Utsu in 1961:

n(t) = k / (c + t)^p

where p is a third constant which modifies the decay rate and typically falls in the range 0.7–1.5.
According to these equations, the rate of aftershocks decreases quickly with time. The rate of aftershocks is proportional to the inverse of time since the mainshock, and this relationship can be used to estimate the probability of future aftershock occurrence. Thus, whatever the probability of an aftershock is on the first day, the second day will have 1/2 the probability of the first day and the tenth day will have approximately 1/10 the probability of the first day (when p is equal to 1). These patterns describe only the statistical behavior of aftershocks; the actual times, numbers and locations of the aftershocks are stochastic, while tending to follow these patterns. As this is an empirical law, values of the parameters are obtained by fitting to data after a mainshock has occurred, and they imply no specific physical mechanism in any given case.
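As a rough numerical illustration of the modified law above (the parameter values k, c and p are illustrative assumptions, not fitted to any real sequence):

    import numpy as np

    def omori_rate(t, k=100.0, c=0.1, p=1.1):
        # Modified Omori (Utsu) law: aftershock rate t days after the
        # main shock; k, c and p are fitted per sequence in practice.
        return k / (c + t) ** p

    days = np.array([1.0, 2.0, 10.0, 100.0])
    print(omori_rate(days) / omori_rate(days[0]))
    # with p = 1 the day-10 rate would be roughly 1/10 of the day-1 rate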
The Utsu-Omori law has also been obtained theoretically, as the solution of a differential equation describing the evolution of aftershock activity, where the interpretation of the evolution equation is based on the idea of deactivation of the faults in the vicinity of the main shock of the earthquake. The Utsu-Omori law has also previously been obtained from a nucleation process. Results show that the spatial and temporal distribution of aftershocks is separable into a dependence on space and a dependence on time. More recently, through the application of a fractional solution of the reactive differential equation, a double power-law model has been shown to describe the decay in number density in several possible ways, of which the Utsu-Omori law is a particular case.
Båth's law
The other main law describing aftershocks is known as Båth's law, which states that the difference in magnitude between a main shock and its largest aftershock is approximately constant, independent of the main shock magnitude, typically 1.1–1.2 on the moment magnitude scale. For example, a magnitude 7.0 main shock would be expected to have a largest aftershock of about magnitude 5.8–5.9.
Gutenberg–Richter law
Aftershock sequences also typically follow the Gutenberg–Richter law of size scaling, which refers to the relationship between the magnitude and total number of earthquakes in a region in a given time period:

N = 10^(a - bM)

where:
N is the number of events with magnitude greater than or equal to M
M is magnitude
a and b are constants
In summary, there are more small aftershocks and fewer large aftershocks.
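A similar sketch of the Gutenberg–Richter scaling (the values of the constants a and b are illustrative assumptions; b is often observed near 1):

    def gr_count(M, a=5.0, b=1.0):
        # Expected number of events with magnitude greater than or equal to M.
        return 10 ** (a - b * M)

    for M in [3, 4, 5, 6]:
        print(M, gr_count(M))
    # each unit increase in magnitude cuts the expected count tenfold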
Effect of aftershocks
Aftershocks are dangerous because they are usually unpredictable, can be of a large magnitude, and can collapse buildings that were damaged by the main shock. Bigger earthquakes have more and larger aftershocks, and the sequences can last for years or even longer, especially when a large event occurs in a seismically quiet area; see, for example, the New Madrid seismic zone, where events still follow Omori's law from the main shocks of 1811–1812. An aftershock sequence is deemed to have ended when the rate of seismicity drops back to a background level; i.e., no further decay in the number of events with time can be detected.
Land movement around the New Madrid seismic zone is reported to be far smaller per year than movement along the San Andreas Fault across California. Aftershocks on the San Andreas are now believed to top out at 10 years, while earthquakes in New Madrid were still considered aftershocks nearly 200 years after the 1812 New Madrid earthquake.
Foreshocks
Some scientists have tried to use foreshocks to help predict upcoming earthquakes, having one of their few successes with the 1975 Haicheng earthquake in China. On the East Pacific Rise, however, transform faults show quite predictable foreshock behaviour before the main seismic event. Reviews of data on past events and their foreshocks showed that they have a low number of aftershocks and high foreshock rates compared to continental strike-slip faults.
Modeling
Seismologists use tools such as the Epidemic-Type Aftershock Sequence model (ETAS) to study cascading aftershocks and foreshocks.
Psychology
Following a large earthquake and aftershocks, many people have reported feeling "phantom earthquakes" when in fact no earthquake was taking place. This condition, known as "earthquake sickness", is thought to be related to motion sickness, and usually goes away as seismic activity tails off.
| Physical sciences | Seismology | Earth science |
233579 | https://en.wikipedia.org/wiki/Dormancy | Dormancy | Dormancy is a period in an organism's life cycle when growth, development, and (in animals) physical activity are temporarily stopped. This minimizes metabolic activity and therefore helps an organism to conserve energy. Dormancy tends to be closely associated with environmental conditions. Organisms can synchronize entry to a dormant phase with their environment through predictive or consequential means. Predictive dormancy occurs when an organism enters a dormant phase before the onset of adverse conditions. For example, photoperiod and decreasing temperature are used by many plants to predict the onset of winter. Consequential dormancy occurs when organisms enter a dormant phase after adverse conditions have arisen. This is commonly found in areas with an unpredictable climate. While very sudden changes in conditions may lead to a high mortality rate among animals relying on consequential dormancy, its use can be advantageous, as organisms remain active longer and are therefore able to make greater use of available resources.
Animals
Hibernation
Hibernation is a mechanism used by many mammals to reduce energy expenditure and survive food shortages over the winter. Hibernation may be predictive or consequential. An animal prepares for hibernation by building up a thick layer of body fat during late summer and autumn that will provide it with energy during the dormant period. During hibernation, the animal undergoes many physiological changes, including decreased heart rate (by as much as 95%) and decreased body temperature. In addition to shivering, some hibernating animals also produce body heat by non-shivering thermogenesis to avoid freezing. Non-shivering thermogenesis is a regulated process in which the proton gradient generated by electron transport in mitochondria is used to produce heat instead of ATP in brown adipose tissue. Animals that hibernate include bats, ground squirrels and other rodents, mouse lemurs, the European hedgehog and other insectivores, monotremes and marsupials. Although hibernation is almost exclusively seen in mammals, some birds, such as the common poorwill, may hibernate.
Diapause
Diapause is a predictive strategy that is predetermined by an animal's genotype. Diapause is common in insects, allowing them to suspend development between autumn and spring, and in mammals such as the roe deer (Capreolus capreolus, the only ungulate with embryonic diapause), in which a delay in attachment of the embryo to the uterine lining ensures that offspring are born in spring, when conditions are most favorable.
Aestivation
Aestivation, also spelled estivation, is an example of consequential dormancy in response to very hot or dry conditions. It is common in invertebrates such as the garden snail and worm but also occurs in other animals such as lungfish, salamanders, desert tortoises, and crocodiles.
Brumation
While endotherms and other heterotherms are described scientifically as hibernating, the way ectotherms such as lizards become dormant in cold conditions is very different, and a separate term was coined for it in the 1920s: brumation. It differs from hibernation in the metabolic processes involved: energy is stored in glycogen in addition to or in place of fats, and periodic water intake is required.
Reptiles generally begin brumation in late autumn (more specific times depend on the species). They often wake up to drink water and return to "sleep". They can go for months without food. Reptiles may eat more than usual before the brumation time but eat less or refuse food as the temperature drops. However, they do need to drink water. The brumation period is anywhere from one to eight months depending on the air temperature and the size, age, and health of the reptile. During the first year of life, many small reptiles do not fully brumate, but rather slow down and eat less often. Brumation is triggered by a lack of heat and a decrease in the hours of daylight in winter, similar to hibernation.
Plants
In plant physiology, dormancy is a period of arrested plant growth. It is a survival strategy exhibited by many plant species, which enables them to survive in harsh conditions and climates where part of the year is unsuitable for growth, such as winter or dry seasons.
Many plant species that exhibit dormancy have a biological clock that tells them when to slow activity and to prepare soft tissues for a period of freezing temperatures or water shortage. On the other hand, dormancy can be triggered after a normal growing season by decreasing temperatures, shortened day length, and/or a reduction in rainfall. Chemical treatment on dormant plants has been proven to be an effective method to break dormancy, particularly in woody plants such as grapes, berries, apples, peaches, and kiwis. Specifically, hydrogen cyanamide stimulates cell division and growth in dormant plants, causing buds to break when the plant is on the edge of breaking dormancy. Slight injury of cells may play a role in the mechanism of action. The injury is thought to result in increased permeability of cellular membranes. The injury is associated with the inhibition of catalase, which in turn stimulates the pentose phosphate cycle. Hydrogen cyanamide interacts with the cytokinin metabolic cycle, which results in triggering a new growth cycle.
Seeds
When a mature and viable seed under favorable conditions fails to germinate, it is said to be dormant. Seed dormancy is referred to as embryo dormancy or internal dormancy and is caused by endogenous characteristics of the embryo that prevent germination (Black M, Butler J, Hughes M. 1987). Dormancy should not be confused with seed coat dormancy, external dormancy, or hardseededness, which is caused by the presence of a hard seed covering or seed coat that prevents water and oxygen from reaching and activating the embryo. It is a physical barrier to germination, not a true form of dormancy (Quinliven, 1971; Quinliven and Nichol, 1971).
Seed dormancy is desirable in nature, but undesirable in agriculture. This is because agricultural practice favors rapid germination and growth for food, whereas in nature most plants are only capable of germinating once every year, making it favorable for plants to pick a specific time to reproduce. For many plants, it is preferable to reproduce in spring as opposed to fall, even when conditions of light and temperature are similar, because of the ensuing winter that follows fall. Many plants and seeds recognize this and enter a dormant period in the fall to stop growing. Grains are a popular example in this respect: they die above ground during the winter, so dormancy is favorable to their seedlings, but extensive domestication and crossbreeding have removed most of the dormancy mechanisms that their ancestors had.
While seed dormancy is linked to many genes, abscisic acid (ABA), a plant hormone, has been identified as a major influence on seed dormancy. One study on rice and tobacco examined plants defective in the zeaxanthin epoxidase gene, which is linked to the ABA-synthesis pathway. Seeds with higher ABA content, from over-expressing zeaxanthin epoxidase, had an increased dormancy period, while plants with less zeaxanthin epoxidase were shown to have a shorter period of dormancy. In simple terms, ABA inhibits seed germination, while gibberellin (GA, another plant hormone) inhibits ABA production and promotes seed germination.
Trees
Typically, temperate woody perennial plants require chilling temperatures to overcome winter dormancy (rest). The effect of chilling temperatures depends on species and growth stage (Fuchigami et al. 1987). In some species, rest can be broken within hours at any stage of dormancy, with either chemicals, heat, or freezing temperatures, effective dosages of which would seem to be a function of sublethal stress, which results in stimulation of ethylene production and increased cell membrane permeability.
Dormancy is a general term applicable to any instance in which a tissue predisposed to elongate or grow in some other manner does not do so (Nienstaedt 1966). Quiescence is dormancy imposed by the external environment. Correlated inhibition is a kind of physiological dormancy maintained by agents or conditions originating within the plant, but not within the dormant tissue itself. Rest (winter dormancy) is a kind of physiological dormancy maintained by agents or conditions within the organ itself. However, physiological subdivisions of dormancy do not coincide with the morphological dormancy found in white spruce (Picea glauca) and other conifers (Owens et al. 1977). Physiological dormancy often includes early stages of bud-scale initiation before measurable shoot elongation or before flushing. It may also include late leaf initiation after shoot elongation has been completed. In either of those cases, buds that appear to be dormant are nevertheless very active morphologically and physiologically.
Dormancy of various kinds is expressed in white spruce (Romberger 1963). White spruce, like many woody plants in temperate and cooler regions, requires exposure to low temperature for a period of weeks before it can resume normal growth and development. This "chilling requirement" for white spruce is satisfied by uninterrupted exposure to temperatures below 7 °C for 4 to 8 weeks, depending on physiological condition (Nienstaedt 1966, 1967).
Tree species that have well-developed dormancy needs may be tricked to some degree, but not completely. For instance, if a Japanese maple (Acer palmatum) is given an "eternal summer" through exposure to additional daylight, it grows continuously for as long as two years. Eventually, however, a temperate-climate plant automatically goes dormant, no matter what environmental conditions it experiences. Deciduous plants lose their leaves; evergreens curtail all new growth. Going through an "eternal summer" and the resultant automatic dormancy is stressful to the plant and usually fatal. The fatality rate increases to 100% if the plant does not receive the necessary period of cold temperatures required to break the dormancy. Most plants require a certain number of hours of "chilling" at temperatures between about 0 °C and 10 °C to be able to break dormancy (Bewley, Black, K.D 1994).
Short photoperiods induce dormancy and permit the formation of needle primordia. Primordia formation requires 8 to 10 weeks and must be followed by 6 weeks of chilling at 2 °C. Bud break occurs promptly if seedlings are then exposed to 16-hour photoperiods at the 25 °C/20 °C temperature regime. The free growth mode, a juvenile characteristic that is lost after 5 years or so, ceases in seedlings experiencing environmental stress (Logan and Pollard 1976, Logan 1977).
Bacteria
Many bacteria can survive adverse conditions such as temperature, desiccation, and antibiotics by forming endospores, cysts, or general states of reduced metabolic activity lacking specialized cellular structures. Up to 80% of the bacteria in samples from the wild appear to be metabolically inactive—many of which can be resuscitated. Such dormancy is responsible for the high diversity levels of most natural ecosystems.
Bacteria enter a state of reduced metabolic activity not only during stress, but also when a bacterial population has reached a stable state. Many bacteria are capable of producing proteins called hibernation factors which can bind to and inactivate their ribosomes, pausing protein production, which can take more than 50% of a cell's energy usage.
A recent study has characterized the bacterial cytoplasm as a glass forming fluid approaching the liquid-glass transition, such that large cytoplasmic components require the aid of metabolic activity to fluidize the surrounding cytoplasm, allowing them to move through a viscous, glass-like cytoplasm. During dormancy, when such metabolic activities are put on hold, the cytoplasm behaves like a solid glass, 'freezing' subcellular structures in place and perhaps protecting them, while allowing small molecules like metabolites to move freely through the cell, which may be helpful in cells transitioning out of dormancy.
Viruses
Dormancy, in its rigid definition, does not apply to viruses, as they are not metabolically active. However, some viruses such as poxviruses and picornaviruses, after entering the host, can become latent for long periods of time, or even indefinitely until they are externally activated. Herpesviruses, for example, can become latent after infecting the host, and after years they can activate again if the host is under stress or exposed to ultraviolet radiation.
| Biology and health sciences | Ethology | Biology |
233609 | https://en.wikipedia.org/wiki/Pollination | Pollination | Pollination is the transfer of pollen from an anther of a plant to the stigma of a plant, later enabling fertilisation and the production of seeds. Pollinating agents can be animals such as insects, for example beetles or butterflies; birds and bats; water; wind; and even plants themselves. Pollinating animals travel from plant to plant carrying pollen on their bodies in a vital interaction that allows the transfer of genetic material critical to the reproductive system of most flowering plants. Self-pollination occurs within a closed flower. Pollination often occurs within a species. When pollination occurs between species, it can produce hybrid offspring in nature and in plant breeding work.
In angiosperms, after the pollen grain (gametophyte) has landed on the stigma, it germinates and develops a pollen tube which grows down the style until it reaches an ovary. Its two gametes travel down the tube to where the gametophyte(s) containing the female gametes are held within the carpel. After entering an ovule through the micropyle, one male nucleus fuses with the polar nuclei to produce the endosperm tissues, while the other fuses with the egg cell to produce the embryo. Hence the term "double fertilisation". This process results in the production of a seed, made of both nutritious tissues and embryo.
In gymnosperms, the ovule is not contained in a carpel, but exposed on the surface of a dedicated support organ, such as the scale of a cone, so that the penetration of carpel tissue is unnecessary. Details of the process vary according to the division of gymnosperms in question. Two main modes of fertilisation are found in gymnosperms: cycads and Ginkgo have motile sperm that swim directly to the egg inside the ovule, whereas conifers and gnetophytes have sperm that are unable to swim but are conveyed to the egg along a pollen tube.
Pollination research covers various fields, including botany, horticulture, entomology, and ecology. The pollination process as an interaction between flower and pollen vector was first addressed in the 18th century by Christian Konrad Sprengel. It is important in horticulture and agriculture, because fruiting is dependent on fertilisation: the result of pollination. The study of pollination by insects is known as anthecology. There are also studies in economics that look at the positives and negatives of pollination, focused on bees, and how the process affects the pollinators themselves.
Process of pollination
Pollen germination has three stages; hydration, activation and pollen tube emergence. The pollen grain is severely dehydrated so that its mass is reduced, enabling it to be more easily transported from flower to flower. Germination only takes place after rehydration, ensuring that premature germination does not take place in the anther. Hydration allows the plasma membrane of the pollen grain to reform into its normal bilayer organization providing an effective osmotic membrane. Activation involves the development of actin filaments throughout the cytoplasm of the cell, which eventually become concentrated at the point from which the pollen tube will emerge. Hydration and activation continue as the pollen tube begins to grow.
In conifers, the reproductive structures are borne on cones. The cones are either pollen cones (male) or ovulate cones (female), but some species are monoecious and others dioecious. A pollen cone contains hundreds of microsporangia carried on (or borne on) reproductive structures called sporophylls. Spore mother cells in the microsporangia divide by meiosis to form haploid microspores that develop further by two mitotic divisions into immature male gametophytes (pollen grains). The four resulting cells consist of a large tube cell that forms the pollen tube, a generative cell that will produce two sperm by mitosis, and two prothallial cells that degenerate. These cells comprise a very reduced microgametophyte, which is contained within the resistant wall of the pollen grain.
The pollen grains are dispersed by the wind to the female, ovulate cone that is made up of many overlapping scales (sporophylls, and thus megasporophylls), each protecting two ovules, each of which consists of a megasporangium (the nucellus) wrapped in two layers of tissue, the integument and the cupule, that were derived from highly modified branches of ancestral gymnosperms. When a pollen grain lands close enough to the tip of an ovule, it is drawn in through the micropyle (a pore in the integuments covering the tip of the ovule) often by means of a drop of liquid known as a pollination drop. The pollen enters a pollen chamber close to the nucellus, and there it may wait for a year before it germinates and forms a pollen tube that grows through the wall of the megasporangium (=nucellus) where fertilisation takes place. During this time, the megaspore mother cell divides by meiosis to form four haploid cells, three of which degenerate. The surviving one develops as a megaspore and divides repeatedly to form an immature female gametophyte (egg sac). Two or three archegonia containing an egg then develop inside the gametophyte. Meanwhile, in the spring of the second year two sperm cells are produced by mitosis of the body cell of the male gametophyte. The pollen tube elongates and pierces and grows through the megasporangium wall and delivers the sperm cells to the female gametophyte inside. Fertilisation takes place when the nucleus of one of the sperm cells enters the egg cell in the megagametophyte's archegonium.
In flowering plants, the anthers of the flower produce microspores by meiosis. These undergo mitosis to form male gametophytes, each of which contains two haploid cells. Meanwhile, the ovules produce megaspores by meiosis, and further division of these forms the female gametophytes, which are very strongly reduced, each consisting only of a few cells, one of which is the egg. When a pollen grain adheres to the stigma of a carpel it germinates, developing a pollen tube that grows through the tissues of the style, entering the ovule through the micropyle. When the tube reaches the egg sac, two sperm cells pass through it into the female gametophyte and fertilisation takes place.
Methods
Pollination may be biotic or abiotic. Biotic pollination relies on living pollinators to move the pollen from one flower to another. Abiotic pollination relies on wind, water or even rain. Adding natural habitat areas into farm systems generally improves pollination, as farms that are closer to natural habitat have higher crop yield because they are visited by more pollinators.
Biotic pollination
About 80% of angiosperms rely on biotic pollination by pollinators (also called pollen vectors): organisms that carry or move the pollen grains from the anther of one flower to the receptive part of the carpel or pistil (stigma) of another. Between 100,000 and 200,000 species of animal act as pollinators of the world's 250,000 species of flowering plant. The majority of these pollinators are insects, but about 1,500 species of birds and mammals visit flowers and may transfer pollen between them. Besides birds and bats, which are the most frequent visitors, these include monkeys, lemurs, squirrels, rodents and possums.
Entomophily, pollination by insects, often occurs on plants that have developed colored petals and a strong scent to attract insects such as bees, wasps, and occasionally ants (Hymenoptera), beetles (Coleoptera), moths and butterflies (Lepidoptera), and flies (Diptera). The existence of insect pollination dates back to the dinosaur era.
Insect pollinators such as honey bees (Apis spp.), bumblebees (Bombus spp.), and butterflies (e.g., Thymelicus flavus) have been observed to engage in flower constancy, meaning they are more likely to transfer pollen to other conspecific plants. This can be beneficial for the pollinators: flower constancy prevents the loss of pollen during interspecific flights and keeps pollinators from clogging stigmas with pollen of other flower species. It also improves the probability that the pollinator will find productive flowers easily accessible and recognisable by familiar clues. The primary insect pollinators are hymenopterans, mostly bees, but also including sawflies, ants, and many species of wasps.
Many flowers attract pollinators by odor. For example, orchid bee species such as Euglossa cordata are attracted to orchids this way, and it has been suggested that some orchid species intoxicate bees during visits which can last up to 90 minutes. However, in general, plants that rely on pollen vectors tend to be adapted to their particular type of vector, for example day-pollinated species tend to be brightly coloured and have little odor, but if they are pollinated largely by birds or specialist mammals, they tend to be larger and have larger nectar rewards than species that are strictly insect-pollinated. Night-blooming flowers have little color, but are often very aromatic. Plants with vertebrate pollinators also tend to spread their rewards over longer periods, having long flowering seasons; their specialist pollinators would be likely to starve if the pollination season were too short.
Some flowers have specialized mechanisms to trap pollinators to increase effectiveness, attach pollen to specific body parts (as happens in many orchid and Asclepias species), or require specialized behaviors or morphology in order to extract pollen or nectar. One such syndrome is "buzz pollination" (or "sonication"), where a bee must vibrate at a certain frequency in order to cause pollen to be released from the anthers.
In zoophily, pollination is performed by vertebrates such as birds and bats, in particular hummingbirds, sunbirds, spiderhunters, honeyeaters, and fruit bats. Ornithophily or bird pollination is the pollination of flowering plants by birds. Chiropterophily or bat pollination is the pollination of flowering plants by bats. Plants adapted to use bats or moths as pollinators typically have white petals, strong scent and flower at night, whereas plants that use birds as pollinators tend to produce copious nectar and have red petals.
Mammals are not generally thought of as pollinators, but some rodents, bats and marsupials are significant pollinators and some even specialise in such activities. In South Africa certain species of Protea (in particular Protea humiflora, P. amplexicaulis, P. subulifolia, P. decurrens and P. cordata) are adapted to pollination by rodents (particularly the Cape spiny mouse, Acomys subspinosus) and elephant shrews (Elephantulus species). The flowers are borne near the ground, are yeasty-smelling and not colourful, and sunbirds reject their nectar because of its high xylose content. The mice apparently can digest the xylose, and they eat large quantities of the pollen. In Australia pollination by flying, gliding and earthbound mammals has been demonstrated.
Reptile pollinators are known, but they form a minority in most ecological situations. They are most frequent and most ecologically significant in island systems, where insect and sometimes also bird populations may be unstable and less species-rich. Adaptation to a lack of animal food and of predation pressure might therefore favour reptiles becoming more herbivorous and more inclined to feed on pollen and nectar. Most species of lizards in the families that seem to be significant in pollination carry pollen only incidentally, especially the larger species such as Varanidae and Iguanidae, but several species of the Gekkonidae are active pollinators, as is at least one species of the Lacertidae, Podarcis lilfordi, which pollinates various species and in particular is the major pollinator of Euphorbia dendroides on various Mediterranean islands.
Abiotic pollination
Abiotic pollination uses nonliving methods such as wind and water to move pollen from one flower to another. This allows the plant to spend energy directly on pollen rather than on attracting pollinators with flowers and nectar. Wind pollination is the more common form of abiotic pollination.
By wind
Some 98% of abiotic pollination is anemophily, i.e., pollination by wind. This probably arose from insect pollination (entomophily), most likely due to changes in the environment or in the availability of pollinators. The transfer of pollen is more efficient than previously thought; wind-pollinated plants have evolved specific heights, in addition to specific floral, stamen and stigma positions, that promote effective pollen dispersal and transfer.
By water
Pollination by water, hydrophily, uses water to transport pollen, sometimes as whole anthers; these can travel across the surface of the water to carry dry pollen from one flower to another. In Vallisneria spiralis, an unopened male flower floats to the surface of the water, and, upon reaching the surface, opens up and the fertile anthers project forward. The female flower, also floating, has its stigma protected from the water, while its sepals are slightly depressed into the water, allowing the male flowers to tumble in.
By rain
Rain pollination is used by a small percentage of plants. Heavy rain discourages insect pollination and damages unprotected flowers, but can itself disperse the pollen of suitably adapted plants, such as Ranunculus flammula, Narthecium ossifragum, and Caltha palustris. In these plants, excess rain drains away, allowing the floating pollen to come into contact with the stigma. In some orchids, ombrophily occurs: rain water splashes remove the anther cap, exposing the pollen. Raindrops then cause the pollen to be shot upward, after which the stipe pulls it back, and it falls into the cavity of the stigma. Thus, for the orchid Acampe rigida, this allows the plant to self-pollinate, which is useful when biotic pollinators in the environment have decreased.
Switching methods
It is possible for a plant to have varying pollination methods, including both biotic and abiotic pollination. The orchid Oeceoclades maculata uses both rain and butterflies, depending on its environmental conditions.
Mechanism
Pollination can be accomplished by cross-pollination or by self-pollination:
Cross-pollination, also called allogamy, occurs when pollen is delivered from the stamen of one flower to the stigma of a flower on another plant of the same species. Plants adapted for cross-pollination have several mechanisms to prevent self-pollination; the reproductive organs may be arranged in such a way that self-fertilisation is unlikely, or the stamens and carpels may mature at different times.
Self-pollination occurs when pollen from one flower pollinates the same flower or other flowers of the same individual. It is thought to have evolved under conditions when pollinators were not reliable vectors for pollen transport, and is most often seen in short-lived annual species and plants that colonize new locations. Self-pollination may include autogamy, where pollen is transferred from anther (male part) to the stigma (female part) of the same flower; or geitonogamy, when pollen is transferred from anther of a flower to stigma of another flower on the same plant. Plants adapted to self-fertilize often have similar stamen and carpel lengths. Plants that can pollinate themselves and produce viable offspring are called self-fertile. Plants that cannot fertilize themselves are called self-sterile, a condition which mandates cross-pollination for the production of offspring.
Cleistogamy is self-pollination that occurs before the flower opens. The pollen is released from the anther within the flower, or the pollen on the anther grows a tube down the style to the ovules. It is a form of sexual breeding, in contrast to asexual systems such as apomixis. Some cleistogamous flowers never open, in contrast to chasmogamous flowers, which open and are then pollinated. Cleistogamous flowers are by necessity found on self-compatible or self-fertile plants. Although certain orchids and grasses are entirely cleistogamous, other plants resort to this strategy under adverse conditions. Often there may be a mixture of both cleistogamous and chasmogamous flowers, sometimes on different parts of the plant and sometimes in mixed inflorescences. The ground bean produces cleistogamous flowers below ground, and mixed cleistogamous and chasmogamous flowers above.
An estimated 48.7% of plant species are either dioecious or self-incompatible obligate out-crossers. It is also estimated that about 42% of flowering plants have a mixed mating system in nature. In the most common kind of mixed mating system, individual plants produce a single type of flower and fruits may contain self-pollinated, out-crossed or a mixture of progeny types.
Pollination also requires consideration of pollenizers, the plants that serve as the pollen source for other plants. Some plants are self-compatible (self-fertile) and can pollinate and fertilize themselves. Other plants have chemical or physical barriers to self-pollination.
In agriculture and horticulture pollination management, a good pollenizer is a plant that provides compatible, viable and plentiful pollen and blooms at the same time as the plant that is to be pollinated, or has pollen that can be stored and used when needed to pollinate the desired flowers. Hybridization is effective pollination between flowers of different species, or between different breeding lines or populations; see also Heterosis.
Peaches are considered self-fertile because a commercial crop can be produced without cross-pollination, though cross-pollination usually gives a better crop. Apples are considered self-incompatible, because a commercial crop must be cross-pollinated. Many commercial fruit tree varieties are grafted clones, genetically identical. An orchard block of apples of one variety is genetically a single plant. Many growers now consider this a mistake. One means of correcting this mistake is to graft a limb of an appropriate pollenizer (generally a variety of crabapple) every six trees or so.
Coevolution
The first fossil record for abiotic pollination is from fern-like plants in the late Carboniferous period. Gymnosperms show evidence for biotic pollination as early as the Triassic period. Many fossilized pollen grains show characteristics similar to the biotically dispersed pollen today. Furthermore, the gut contents, wing structures, and mouthpart morphology of fossilized beetles and flies suggest that they acted as early pollinators. The association between beetles and angiosperms during the early Cretaceous period led to parallel radiations of angiosperms and insects into the late Cretaceous. The evolution of nectaries in late Cretaceous flowers signals the beginning of the mutualism between hymenopterans and angiosperms.
Bees provide a good example of the mutualism that exists between hymenopterans and angiosperms. Flowers provide bees with nectar (an energy source) and pollen (a source of protein). When bees go from flower to flower collecting pollen they are also depositing pollen grains onto the flowers, thus pollinating them. While pollen and nectar, in most cases, are the most notable reward attained from flowers, bees also visit flowers for other resources such as oil, fragrance, resin and even waxes. It has been estimated that bees originated with the origin or diversification of angiosperms. In addition, cases of coevolution between bee species and flowering plants have been illustrated by specialized adaptations. For example, long legs are selected for in Rediviva neliana, a bee that collects oil from Diascia capsularis, which have long spur lengths that are selected for in order to deposit pollen on the oil-collecting bee, which in turn selects for even longer legs in R. neliana and again longer spur length in D. capsularis is selected for, thus, continually driving each other's evolution.
In agriculture
The most essential staple food crops on the planet, such as wheat, maize, rice, soybeans and sorghum, are wind-pollinated or self-pollinating. When considering the top 15 crops contributing to the human diet globally in 2013, slightly over 10% of the total human diet of plant crops (211 out of 1916 kcal/person/day) was dependent upon insect pollination.
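The dependence figure quoted above follows directly from the calorie counts; a quick arithmetic check (Python, illustrative only):

```python
# Share of the human plant-crop diet dependent on insect pollination,
# using the figures quoted above for the top 15 crops in 2013.
total_kcal = 1916          # kcal/person/day from the top plant crops
insect_pollinated = 211    # kcal/person/day dependent on insect pollination

share = insect_pollinated / total_kcal
print(f"insect-pollination-dependent share: {share:.1%}")  # ~11.0%
```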
Pollination management is a branch of agriculture that seeks to protect and enhance present pollinators and often involves the culture and addition of pollinators in monoculture situations, such as commercial fruit orchards. The largest managed pollination event in the world is in California almond orchards, where nearly half (about one million hives) of the US honey bees are trucked to the almond orchards each spring. New York's apple crop requires about 30,000 hives; Maine's blueberry crop uses about 50,000 hives each year. The US solution to the pollinator shortage, so far, has been for commercial beekeepers to become pollination contractors and to migrate. Just as the combine harvesters follow the wheat harvest from Texas to Manitoba, beekeepers follow the bloom from south to north, to provide pollination for many different crops.
In America, bees are brought to commercial plantings of cucumbers, squash, melons, strawberries, and many other crops. Honey bees are not the only managed pollinators: a few other species of bees are also raised as pollinators. The alfalfa leafcutter bee is an important pollinator for alfalfa seed in the western United States and Canada. Bumblebees are increasingly raised and used extensively for greenhouse tomatoes and other crops.
The ecological and financial importance of natural pollination by insects to agricultural crops, improving their quality and quantity, is increasingly appreciated and has given rise to new financial opportunities. The proximity of a forest or wild grassland with native pollinators to agricultural crops, such as apples, almonds or coffee, can improve their yield by about 20%. The benefits of native pollinators may result in forest owners demanding payment for their contribution to the improved crop results – a simple example of the economic value of ecological services. Farmers can also raise native crops in order to promote native bee pollinator species, as shown with the native sweat bees L. vierecki in Delaware and L. leucozonium in southwest Virginia.
The American Institute of Biological Sciences reports that native insect pollination saves the United States agricultural economy nearly an estimated $3.1 billion annually through natural crop production; pollination produces some $40 billion worth of products annually in the United States alone.
Pollination of food crops has become an environmental issue due to two trends. The trend to monoculture means that greater concentrations of pollinators are needed at bloom time than ever before, yet the area is forage-poor or even deadly to bees for the rest of the season. The other trend is the decline of pollinator populations, due to pesticide misuse and overuse, new diseases and parasites of bees, clearcut logging, decline of beekeeping, suburban development, removal of hedges and other habitat from farms, and public concern about bees. Widespread aerial spraying for mosquitoes due to West Nile fears is accelerating the loss of pollinators. Changes in land use, harmful pesticides, and advancing climate change threaten wild pollinators, key insect species that increase the yields of three-fourths of crop varieties and are critical to growing healthy foods.
In some situations, farmers or horticulturists may aim to restrict natural pollination to permit breeding only with the preferred individual plants. This may be achieved through the use of pollination bags.
Improving pollination in areas with suboptimal bee densities
In some instances growers' demand for beehives far exceeds the available supply. The number of managed beehives in the US has steadily declined from close to 6 million after WWII to less than 2.5 million today. In contrast, the area dedicated to growing bee-pollinated crops has grown over 300% in the same time period. Additionally, in the past five years winter losses of managed beehives have increased, reaching an unprecedented rate of nearly 30%. At present, there is an enormous demand for beehive rentals that cannot always be met. There is a clear need across the agricultural industry for a management tool to draw pollinators into cultivations and encourage them to preferentially visit and pollinate the flowering crop. Attracting pollinators such as honey bees and increasing their foraging behavior, particularly in the center of large plots, can increase grower returns and optimize yield from plantings. ISCA Technologies, of Riverside, California, created a semiochemical formulation called SPLAT Bloom that modifies the behavior of honey bees, inciting them to visit flowers in every portion of the field.
Environmental impacts
Loss of pollinators, also known as pollinator decline (of which colony collapse disorder is perhaps the best known example), has been noticed in recent years. These losses of pollinators have disturbed early plant regeneration processes such as seed dispersal and pollination. Early processes of plant regeneration depend greatly on plant-animal interactions, and because these interactions are interrupted, biodiversity and ecosystem functioning are threatened. Pollination by animals aids the genetic variability and diversity of plants because it allows out-crossing instead of self-crossing. Without this genetic diversity there would be a lack of traits for natural selection to act on for the survival of plant species. Seed dispersal is also important for plant fitness because it allows plants to expand their populations. More than that, it permits plants to escape environments that have changed and have become difficult to reside in. All of these factors show the importance of pollinators to plants, which are a significant part of the foundation for a stable ecosystem. Loss of pollinators is especially devastating because so many plant species rely on them: more than 87.5% of angiosperms, over 75% of tropical tree species, and 30-40% of tree species in temperate regions depend on pollination and seed dispersal.
Factors that contribute to pollinator decline include habitat destruction, pesticides, parasitism/diseases, and climate change. The more destructive forms of human disturbance are land use changes such as fragmentation, selective logging, and the conversion to secondary forest habitat. Defaunation of frugivores is also an important driver. These alterations are especially harmful due to the sensitivity of the pollination process of plants. Research on tropical palms found that defaunation has caused a decline in seed dispersal, which causes a decrease in genetic variability in this species. Habitat destruction such as fragmentation and selective logging removes areas that are most optimal for the different types of pollinators, which removes pollinators' food resources and nesting sites and leads to isolation of populations. The effect of pesticides on pollinators has been debated because it is difficult to determine that a single pesticide is the cause, as opposed to a mixture or other threats. Whether exposure alone causes damage, or whether duration and potency are also factors, is unknown. However, insecticides have negative effects, as in the case of neonicotinoids that harm bee colonies. Many researchers believe it is the synergistic effects of these factors which are ultimately detrimental to pollinator populations.
In the agriculture industry, climate change is causing a "pollinator crisis". This crisis is affecting the production of crops and the related costs, due to a decrease in pollination processes. This disturbance can be phenological or spatial. In the first case, species that normally occur in similar seasons or time cycles now have different responses to environmental changes and therefore no longer interact. For example, a tree may flower sooner than usual, while the pollinator may reproduce later in the year, so the two species no longer coincide in time. Spatial disturbances occur when two species that would normally share the same distribution respond differently to climate change and shift to different regions.
Examples of affected pollinators
The best-known and best-understood pollinators, bees, have been used as the prime example of pollinator decline. Bees are essential in the pollination of agricultural crops and wild plants and are one of the main insects that perform this task. Of the bee species, the honey bee, Apis mellifera, has been studied the most; in the United States, 59% of colonies were lost between 1947 and 2005. The decrease in honey bee populations has been attributed to pesticides, genetically modified crops, fragmentation, and introduced parasites and diseases. There has been a particular focus on the effects of neonicotinoids on honey bee populations. Neonicotinoid insecticides have been used because of their low mammalian toxicity, target specificity, low application rates, and broad-spectrum activity. However, these insecticides make their way throughout the plant, including into the pollen and nectar, and have been shown to affect the nervous system and colony relations of honey bees.
Butterflies too have suffered due to these modifications. Butterflies are useful ecological indicators because they are sensitive to changes within the environment such as season, altitude, and, above all, human impact. Butterfly populations are higher in natural forest and lower in open land, because in open land butterflies are exposed to desiccation and predation. These open regions are caused by habitat destruction such as logging for timber, livestock grazing, and firewood collection. Due to this destruction, butterfly species' diversity can decrease, and butterfly diversity is known to correlate with plant diversity.
Food security and pollinator decline
Besides unbalancing ecosystems, the decline in pollinators may jeopardise food security. Pollination is necessary for plants to continue their populations, and three-quarters of the plant species that contribute to the world's food supply require pollinators. Insect pollinators such as bees are large contributors to crop production; over 200 billion dollars' worth of crop species are pollinated by these insects. Pollinators are also essential because they improve crop quality and increase genetic diversity, which is necessary for producing fruit with nutritional value and various flavors. Crops that do not depend on animals for pollination but on wind or self-pollination, such as corn and potatoes, have doubled in production and make up a large part of the human diet, but do not provide the micronutrients that are needed. The essential nutrients in the human diet are present in plants that rely on animal pollinators. There have been issues with vitamin and mineral deficiencies, and it is believed that if pollinator populations continue to decrease, these deficiencies will become even more prominent.
Plant–pollinator networks
Wild pollinators often visit a large number of plant species, and plants are visited by a large number of pollinator species. All these relations together form a network of interactions between plants and pollinators. Surprising similarities were found in the structure of networks consisting of the interactions between plants and pollinators: this structure was found to be similar in very different ecosystems on different continents, consisting of entirely different species.
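The structural similarities referred to above are usually quantified with network metrics such as connectance (the fraction of possible plant-pollinator links that are realised) and nestedness (specialists interacting with subsets of the partners of generalists). The following sketch illustrates both on a small made-up interaction matrix; the data are invented for illustration, not drawn from any real network.

```python
import numpy as np

# Toy plant-pollinator interaction matrix: rows are pollinator species,
# columns are plant species; a 1 means the pollinator visits that plant.
# The nested pattern below (generalists interact with everything,
# specialists only with plants that generalists also visit) is the
# structure repeatedly reported for real networks.
visits = np.array([
    [1, 1, 1, 1],   # generalist pollinator
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],   # specialist pollinator
])

# Connectance: realised links divided by possible links.
connectance = visits.sum() / visits.size
print(f"connectance = {connectance:.2f}")  # 10 of 16 links -> 0.62

# Crude nestedness check: each specialist's diet should be a subset
# of every more-generalist species' diet.
rows = sorted(map(tuple, visits), key=sum, reverse=True)
nested = all(
    all(lo <= hi for lo, hi in zip(narrow, broad))
    for broad, narrow in zip(rows, rows[1:])
)
print(f"perfectly nested: {nested}")  # True for this toy matrix
```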
The structure of plant-pollinator networks may have large consequences for the way in which pollinator communities respond to increasingly harsh conditions. Mathematical models examining the consequences of this network structure for the stability of pollinator communities suggest that the specific way in which plant-pollinator networks are organized minimizes competition between pollinators and may even lead to strong indirect facilitation between pollinators when conditions are harsh. This means that pollinator species can survive together under harsh conditions. But it also means that pollinator species collapse simultaneously when conditions pass a critical point. This simultaneous collapse occurs because pollinator species depend on each other when surviving under difficult conditions.
Such a community-wide collapse, involving many pollinator species, can occur suddenly when increasingly harsh conditions pass a critical point, and recovery from such a collapse might not be easy. The improvement in conditions needed for pollinators to recover could be substantially larger than the improvement needed to return to conditions at which the pollinator community collapsed.
Economics of commercial honeybee pollination
While some 200,000–350,000 different species of animals help with pollination, honeybees are responsible for the majority of the pollination of consumed crops, providing between US$235 billion and US$577 billion of benefits to global food production. The western honey bee (Apis mellifera L.) provides highly valued pollination services for a wide variety of agricultural crops, and ranks as the most frequent single species of pollinator for crops worldwide. Since the early 1900s, beekeepers in the United States have rented out their colonies to farmers to increase crop yields, earning additional revenue from providing privatized pollination. As of 2016, 41% of an average US beekeeper's revenue comes from providing such pollination services to farmers, making it the biggest proportion of their income, with the rest coming from sales of honey, beeswax, government subsidies, etc. This is an example of how a positive externality, pollination of crops from beekeeping and honey-making, was successfully accounted for and incorporated into the overall market for agriculture. On top of assisting food production, pollination services provide beneficial spillovers, as bees pollinate not only the crops but also other plants around the area in which they are set loose, increasing biodiversity in the local ecosystem. There is even further spillover, as biodiversity increases ecosystem resistance for wildlife and crops. Because of their role in crop production, commercial honeybees are considered livestock by the US Department of Agriculture. The impact of pollination varies by crop. For example, almond production in the United States, an $11 billion industry based almost exclusively in the state of California, is heavily dependent on imported honeybees for the pollination of almond trees. The almond industry uses up to 82% of the services in the pollination market, and each February around 60% of all bee colonies in the US are moved to California's Central Valley.
Over the past decade, beekeepers across the US have reported that the mortality rate of their bee colonies has stayed constant at about 30% every year, making the deaths an expected cost of business for the beekeepers. While the exact cause of this phenomenon is unknown, according to the US Department of Agriculture Colony Collapse Disorder Progress Report it can be traced to factors such as pollution, pesticides, and pathogens, based on evidence found in affected colonies and their surroundings. Pollution and pesticides are detrimental to the health of bees and their colonies, as they greatly compromise the bees' ability to pollinate and to return to their colonies. Moreover, the World Health Organization has identified California's Central Valley as having the country's worst air pollution. Almond-pollinating bees, approximately 60% of the bees in the US as mentioned above, are mixed with bees from thousands of other hives provided by different beekeepers, making them highly susceptible to any diseases and mites those bees could be carrying. The deaths do not stop at commercial honeybees: there is evidence of significant pathogen spillover to other pollinators, including wild bumble bees, infecting up to 35-100% of wild bees within a 2 km radius of commercial pollination. The negative externality of private pollination services is thus the decline of biodiversity through the deaths of commercial and wild bees. Despite losing about a third of their workforce every year, beekeepers continue to rent out their bees to almond farms because of the high pay from the almond industry. In 2016, a colony rented out for almond pollination gave beekeepers an income of $165 per colony, around three times the average for other crops that use the pollination rental service. However, a recent study published in Oxford Academic's Journal of Economic Entomology found that once the costs of maintaining bees specifically for almond pollination, including overwintering, summer management, and the replacement of dying bees, are considered, almond pollination is barely or not profitable for the average beekeeper.
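The study's profitability point can be made concrete with a toy per-colony budget. The $165 rental fee and ~30% mortality rate come from the text above; the cost figures are hypothetical assumptions chosen only to illustrate how thin the margin can be.

```python
# Illustrative per-colony accounting for almond pollination. Rental
# income and mortality rate are the figures quoted above; the cost
# values below are hypothetical placeholders, not values from the
# study cited in the text.
rental_income = 165.0     # $ per colony, almond pollination (from text)
mortality = 0.30          # expected annual colony loss rate (from text)

replacement_cost = 150.0  # $ to replace a dead colony (assumed)
management_cost = 80.0    # $ overwintering + summer management (assumed)

expected_profit = rental_income - management_cost - mortality * replacement_cost
print(f"expected profit per colony: ${expected_profit:.2f}")
# With these assumed costs the margin is ~$40, and it turns negative if
# replacement or management costs run modestly higher, consistent with
# the finding that almond pollination can be barely profitable.
```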
| Biology and health sciences | Plant reproduction | null |
233636 | https://en.wikipedia.org/wiki/Spherical%20Earth | Spherical Earth | Spherical Earth or Earth's curvature refers to the approximation of the figure of the Earth to a sphere. The concept of a spherical Earth gradually displaced earlier beliefs in a flat Earth during classical antiquity and the Middle Ages. The figure of the Earth is more accurately described as an ellipsoid, which was realized in the early modern period.
Cause
Earth is massive enough that the pull of gravity maintains its roughly spherical shape. Most of its deviation from spherical stems from the centrifugal force caused by rotation around its north-south axis. This force deforms the sphere into an oblate ellipsoid.
Formation
The Solar System formed from a dust cloud that was at least partially the remnant of one or more supernovas that produced heavy elements by nucleosynthesis. Grains of matter accreted through electrostatic interaction. As they grew in mass, gravity took over in gathering yet more mass, releasing the potential energy of their collisions and in-falling as heat. The protoplanetary disk also had a greater proportion of radioactive elements than Earth today because, over time, those elements decayed. Their decay heated the early Earth even further and continues to contribute to Earth's internal heat budget. The early Earth was thus mostly liquid.
A sphere is the only stable shape for a non-rotating, gravitationally self-attracting liquid. The outward acceleration caused by Earth's rotation is greater at the equator than at the poles (where it is zero), so the sphere gets deformed into an ellipsoid, which represents the shape having the lowest potential energy for a rotating, fluid body. This ellipsoid is slightly fatter around the equator than a perfect sphere would be. Earth's shape is also slightly lumpy because it is composed of different materials of different densities that exert slightly different amounts of gravitational force per volume.
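The equator-versus-pole difference in outward acceleration is straightforward to quantify with a = ω²r, where r is the distance from the rotation axis; a short sketch using standard values for Earth's rotation rate and equatorial radius:

```python
import math

# Centrifugal acceleration due to Earth's rotation, a = omega^2 * r,
# where r is the distance from the rotation axis. At the equator r is
# the equatorial radius; at the poles r = 0, so the effect vanishes.
sidereal_day = 86164.1    # s, one rotation relative to the stars
omega = 2 * math.pi / sidereal_day
r_equator = 6.378137e6    # m, Earth's equatorial radius

a_equator = omega**2 * r_equator
print(f"centrifugal acceleration at equator: {a_equator:.4f} m/s^2")
# ~0.0339 m/s^2, about 0.35% of surface gravity, enough to flatten
# a fluid body into an oblate ellipsoid.
```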
The liquidity of a hot, newly formed planet allows heavier elements to sink down to the middle and forces lighter elements closer to the surface, a process known as planetary differentiation. This event is known as the iron catastrophe; the most abundant heavier elements were iron and nickel, which now form the Earth's core.
Later shape changes and effects
Though the surface rocks of Earth have cooled enough to solidify, the outer core of the planet is still hot enough to remain liquid. Energy is still being released; volcanic and tectonic activity has pushed rocks into hills and mountains and blown them out of calderas. Meteors also cause impact craters and surrounding ridges. However, if the energy release from these processes halts, then they tend to erode away over time and return toward the lowest potential-energy curve of the ellipsoid. Weather powered by solar energy can also move water, rock, and soil to make Earth slightly out of round.
Earth undulates as the shape of its lowest potential energy changes daily due to the gravity of the Sun and Moon as they move around with respect to Earth. This is what causes tides in the oceans' water, which can flow freely along the changing potential.
History of concept and measurement
The spherical shape of the Earth was known and measured by astronomers, mathematicians, and navigators from a variety of literate ancient cultures, including the Hellenic World and Ancient India. Greek ethnographer Megasthenes has been interpreted as stating that the contemporary Brahmans of India believed in a spherical Earth as the center of the universe. The knowledge of the Greeks was inherited by Ancient Rome, and Christian and Islamic realms in the Middle Ages. Circumnavigation of the world in the Age of Discovery provided direct evidence. Improvements in transportation and other technologies refined estimations of the size of the Earth, and helped spread knowledge of it.
The earliest documented mention of the concept dates from around the 5th century BC, when it appears in the writings of Greek philosophers. In the 3rd century BC, Hellenistic astronomy established the roughly spherical shape of Earth as a physical fact and calculated the Earth's circumference. This knowledge was gradually adopted throughout the Old World during Late Antiquity and the Middle Ages. A practical demonstration of Earth's sphericity was achieved by Ferdinand Magellan and Juan Sebastián Elcano's circumnavigation (1519–1522).
The concept of a spherical Earth displaced earlier beliefs in a flat Earth: In early Mesopotamian mythology, the world was portrayed as a disk floating in the ocean with a hemispherical sky-dome above, and this forms the premise for early world maps like those of Anaximander and Hecataeus of Miletus. Other speculations on the shape of Earth include a seven-layered ziggurat or cosmic mountain, alluded to in the Avesta and ancient Persian writings (see seven climes).
The realization that the figure of the Earth is more accurately described as an ellipsoid dates to the 17th century, as described by Isaac Newton in Principia. In the early 19th century, the flattening of the earth ellipsoid was determined to be of the order of 1/300 (Delambre, Everest). The modern value as determined by the US DoD World Geodetic System since the 1960s is close to 1/298.25.
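For a sense of scale, the quoted flattening corresponds to roughly a 21 km difference between the equatorial and polar radii. A quick computation with the WGS 84 defining constants:

```python
# Flattening f = (a - b) / a relates the equatorial radius a to the
# polar radius b. WGS 84 uses f close to the 1/298.25 quoted above;
# the defining constant is 1/298.257223563.
a = 6378137.0             # m, WGS 84 equatorial radius
f = 1 / 298.257223563     # WGS 84 flattening

b = a * (1 - f)
print(f"polar radius: {b:,.1f} m")   # ~6,356,752.3 m
print(f"a - b:        {a - b:,.1f} m")  # ~21,384.7 m, about 21 km
```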
Measurement and representation
Geodesy, also called geodetics, is the scientific discipline that deals with the measurement and representation of Earth, its gravitational field and geodynamic phenomena (polar motion, Earth tides, and crustal motion) in three-dimensional time-varying space.
Geodesy is primarily concerned with positioning and the gravity field and geometrical aspects of their temporal variations, although it can also include the study of Earth's magnetic field. Especially in the German speaking world, geodesy is divided into geomensuration ("Erdmessung" or "höhere Geodäsie"), which is concerned with measuring Earth on a global scale, and surveying ("Ingenieurgeodäsie"), which is concerned with measuring parts of the surface.
Earth's shape can be thought of in at least two ways:
as the shape of the geoid, the mean sea level of the world ocean; or
as the shape of Earth's land surface as it rises above and falls below the sea.
As the science of geodesy measured Earth more accurately, the shape of the geoid was first found not to be a perfect sphere but to approximate an oblate spheroid, a specific type of ellipsoid. More recent measurements have measured the geoid to unprecedented accuracy, revealing mass concentrations beneath Earth's surface.
Evidence
| Physical sciences | Earth science basics: General | Earth science |
233656 | https://en.wikipedia.org/wiki/Chiton | Chiton | Chitons () are marine molluscs of varying size in the class Polyplacophora ( ), formerly known as Amphineura. About 940 extant and 430 fossil species are recognized.
They are also sometimes known as sea cradles or coat-of-mail shells or suck-rocks, or more formally as loricates, polyplacophorans, and occasionally as polyplacophores.
Chitons have a shell composed of eight separate shell plates or valves. These plates overlap slightly at the front and back edges, and yet articulate well with one another. Because of this, the shell provides protection at the same time as permitting the chiton to flex upward when needed for locomotion over uneven surfaces, and even allows the animal to curl up into a ball when dislodged from rocks. The shell plates are encircled by a skirt known as a girdle.
Habitat
Chitons live worldwide, from cold waters through to the tropics. They live on hard surfaces, such as on or under rocks, or in rock crevices.
Some species live quite high in the intertidal zone and are exposed to the air and light for long periods. Most species inhabit intertidal or subtidal zones, and do not extend beyond the photic zone, but a few species live in deep water, as deep as .
Chitons are exclusively and fully marine, in contrast to the bivalves, which were able to adapt to brackish water and fresh water, and the gastropods which were able to make successful transitions to freshwater and terrestrial environments.
Morphology
Shell
All chitons bear a protective dorsal shell that is divided into eight articulating aragonite valves embedded in the tough muscular girdle that surrounds the chiton's body. Compared with the single or two-piece shells of other molluscs, this arrangement allows chitons to roll into a protective ball when dislodged and to cling tightly to irregular surfaces. In some species the valves are reduced or covered by the girdle tissue. The valves are variously colored, patterned, smooth, or sculptured.
The most anterior plate is crescent-shaped, and is known as the cephalic plate (sometimes called a head plate, despite the absence of a complete head). The most posterior plate is known as the anal plate (sometimes called the tail plate, although chitons do not have tails.)
The inner layer of each of the six intermediate plates is produced anteriorly as an articulating flange, called the articulamentum. This inner layer may also be produced laterally in the form of notched insertion plates. These function as an attachment of the valve plates to the soft body. A similar series of insertion plates may be attached to the convex anterior border of the cephalic plate or the convex posterior border of the anal plate.
The sculpture of the valves is one of the taxonomic characteristics, along with the granulation or spinulation of the girdle.
After a chiton dies, the individual valves which make up the eight-part shell come apart because the girdle is no longer holding them together, and then the plates sometimes wash up in beach drift. The individual shell plates from a chiton are sometimes known as butterfly shells due to their shape.
Girdle ornament
The girdle may be ornamented with scales or spicules which, like the shell plates, are mineralized with aragonite, although a different mineralization process operates in the spicules from that in the teeth or shells (implying an independent evolutionary innovation). This process seems quite simple in comparison to other shell tissue; in some taxa, the crystal structure of the deposited minerals closely resembles the disordered nature of crystals that form inorganically, although more order is visible in other taxa.
The protein component of the scales and sclerites is minuscule in comparison with other biomineralized structures, whereas the total proportion of matrix is 'higher' than in mollusc shells. This implies that polysaccharides make up the bulk of the matrix. The girdle spines often bear length-parallel striations.
The wide variety of girdle ornament suggests it serves a secondary role; chitons can survive perfectly well without it. Camouflage or defence are two likely functions. Certainly species such as some members of the genus Acanthochitona bear conspicuous paired tufts of spicules on the girdle. The spicules are sharp and, if carelessly handled, easily penetrate human skin, where they detach and remain as a painful irritant.
Spicules are secreted by cells that do not express engrailed, but these cells are surrounded by engrailed-expressing cells. These neighbouring cells secrete an organic pellicle on the outside of the developing spicule, whose aragonite is deposited by the central cell; subsequent division of this central cell allows larger spines to be secreted in certain taxa.
The organic pellicle is found in most polyplacophora (but not basal chitons, such as Hanleya) but is unusual in aplacophora. Developmentally, sclerite-secreting cells arise from pretrochal and postrochal cells: the 1a, 1d, 2a, 2c, 3c and 3d cells. The shell plates arise primarily from the 2d micromere, although 2a, 2b, 2c and sometimes 3c cells also participate in their secretion.
Internal anatomy
The girdle is often ornamented with spicules, bristles, hairy tufts, spikes, or snake-like scales. The majority of the body is a snail-like foot, but no head or other soft parts beyond the girdle are visible from the dorsal side.
The mantle cavity consists of a narrow channel on each side, lying between the body and the girdle. Water enters the cavity through openings in either side of the mouth, then flows along the channel to a second, exhalant, opening close to the anus. Multiple gills hang down into the mantle cavity along part or all of the lateral pallial groove, each consisting of a central axis with a number of flattened filaments through which oxygen can be absorbed.
The three-chambered heart is located towards the animal's hind end. Each of the two auricles collects blood from the gills on one side, while the muscular ventricle pumps blood through the aorta and round the body.
The excretory system consists of two nephridia, which connect to the pericardial cavity around the heart, and remove excreta through a pore that opens near the rear of the mantle cavity. The single gonad is located in front of the heart, and releases gametes through a pair of pores just in front of those used for excretion.
The mouth is located on the underside of the animal, and contains a tongue-like structure called a radula, which has numerous rows of 17 teeth each. The teeth are coated with magnetite, a hard ferric/ferrous oxide mineral. The radula is used to scrape microscopic algae off the substratum. The mouth cavity itself is lined with chitin and is associated with a pair of salivary glands. Two sacs open from the back of the mouth, one containing the radula, and the other containing a protrusible sensory subradular organ that is pressed against the substratum to taste for food.
Cilia pull the food through the mouth in a stream of mucus and through the oesophagus, where it is partially digested by enzymes from a pair of large pharyngeal glands. The oesophagus, in turn, opens into a stomach, where enzymes from a digestive gland complete the breakdown of the food. Nutrients are absorbed through the linings of the stomach and the first part of the intestine. The intestine is divided in two by a sphincter, with the latter part being highly coiled and functioning to compact the waste matter into faecal pellets. The anus opens just behind the foot.
Chitons lack a clearly demarcated head; their nervous system resembles a dispersed ladder. No true ganglia are present, as in other molluscs, although a ring of dense neural tissue occurs around the oesophagus. From this ring, nerves branch forwards to innervate the mouth and subradula, while two pairs of main nerve cords run back through the body. One pair, the pedal cords, innervate the foot, while the palliovisceral cords innervate the mantle and remaining internal organs.
Some species bear an array of tentacles in front of the head.
Senses
The primary sense organs of chitons are the subradular organ and a large number of unique organs called aesthetes. The aesthetes consist of light-sensitive cells just below the surface of the shell, although they are not capable of true vision. In some cases, however, they are modified to form ocelli, with a cluster of individual photoreceptor cells lying beneath a small aragonite-based lens. Each lens can form clear images, and is composed of relatively large, highly crystallographically aligned grains to minimize light scattering. An individual chiton may have thousands of such ocelli. These aragonite-based eyes make them capable of true vision, though research continues as to the extent of their visual acuity. It is known that they can differentiate between a predator's shadow and changes in light caused by clouds. An evolutionary trade-off has led to a compromise between the eyes and the shell; as the size and complexity of the eyes increase, the mechanical performance of the shells decreases, and vice versa.
A relatively good fossil record of chiton shells exists, but ocelli are only present in those dating to or younger; this would make the ocelli, whose precise function is unclear, likely the most recent eyes to evolve.
Although chitons lack osphradia, statocysts, and other sensory organs common to other molluscs, they do have numerous tactile nerve endings, especially on the girdle and within the mantle cavity.
The order Lepidopleurida also has a pigmented sensory organ called the Schwabe organ. Its function remains largely unknown; it has been suggested to be related to that of a larval eye.
However, chitons lack a cerebral ganglion.
Homing ability
Similar to many species of saltwater limpets, several species of chiton are known to exhibit homing behaviours, journeying to feed and then returning to the exact spot they previously inhabited. The method they use to do this has been investigated to some extent, but remains unknown. One theory is that chitons remember the topographic profile of the region and are able to guide themselves back to their home scar through physical knowledge of the rocks and visual input from their numerous primitive eyespots.
The sea snail Nerita textilis (like all gastropods) deposits a mucus trail as it moves, which a chemoreceptive organ is able to detect and guide the snail back to its home site. It is unclear if chiton homing functions in the same way, but they may leave chemical cues along the rock surface and at the home scar which their olfactory senses can detect and home in on. Furthermore, older trails may also be detected, providing further stimulus for the chiton to find its home.
The radular teeth of chitons are made of magnetite, and the iron crystals within these may be involved in magnetoreception, the ability to sense the polarity and the inclination of the Earth's magnetic field. Experimental work has suggested that chitons can detect and respond to magnetism.
Culinary uses
Chitons are eaten in several parts of the world. This includes islands in the Caribbean, such as Trinidad, Tobago, The Bahamas, St. Maarten, Aruba, Bonaire, Anguilla and Barbados, as well as in Bermuda. They are also traditionally eaten in certain parts of the Philippines, where it is called kibet if raw and chiton if fried. Indigenous people of the Pacific coasts of North America eat chitons. They are a common food on the Pacific coast of South America and in the Galápagos. The foot of the chiton is prepared in a manner similar to abalone. Some islanders living in South Korea also eat chiton, slightly boiled and mixed with vegetables and hot sauce. Aboriginal people in Australia also eat chiton; for example they are recorded in the Narungga Nation Traditional Fishing Agreement.
Life habits
A chiton creeps along slowly on a muscular foot. It has considerable power of adhesion and can cling to rocks very powerfully, like a limpet.
Chitons are generally herbivorous grazers, though some are omnivorous and some carnivorous. They eat algae, bryozoans, diatoms, barnacles, and sometimes bacteria by scraping the rocky substrate with their well-developed radulae.
A few species of chitons are predatory, such as the small western Pacific species Placiphorella velata. These predatory chitons have enlarged anterior girdles. They catch other small invertebrates, such as shrimp and possibly even small fish, by holding the enlarged, hood-like front end of the girdle up off the surface, and then clamping down on unsuspecting, shelter-seeking prey.
Reproduction and life cycle
Chitons have separate sexes, and fertilization is usually external. The male releases sperm into the water, while the female releases eggs either individually, or in a long string. In most cases, fertilization takes place either in the surrounding water, or in the mantle cavity of the female. Some species brood the eggs within the mantle cavity, and the species Callistochiton viviparus even retains them within the ovary and gives birth to live young, an example of ovoviviparity.
The egg has a tough spiny coat, and usually hatches to release a free-swimming trochophore larva, typical of many other mollusc groups. In a few cases, the trochophore remains within the egg (and is then called lecithotrophic – deriving nutrition from yolk), which hatches to produce a miniature adult. Unlike most other molluscs, there is no intermediate stage, or veliger, between the trochophore and the adult. Instead, a segmented shell gland forms on one side of the larva, and a foot forms on the opposite side. When the larva is ready to become an adult, the body elongates, and the shell gland secretes the plates of the shell. Unlike the fully grown adult, the larva has a pair of simple eyes, although these may remain for some time in the immature adult.
Predators
Animals which prey on chitons include humans, seagulls, sea stars, crabs, lobsters and fish.
Evolutionary origins
Chitons have a relatively good fossil record, stretching back to the Cambrian, with the genus Preacanthochiton, known from fossils found in Late Cambrian deposits in Missouri, being classified as the earliest known polyplacophoran. However, the exact phylogenetic position of supposed Cambrian chitons is highly controversial, and some authors have instead argued that the earliest confirmed polyplacophorans date back to the Early Ordovician. Kimberella and Wiwaxia of the Precambrian and Cambrian may be related to ancestral polyplacophorans. Matthevia is a Late Cambrian polyplacophoran preserved as individual pointed valves, and sometimes considered to be a chiton, although at the closest, it can only be a stem-group member of the group.
Based on this and co-occurring fossils, one plausible hypothesis for the origin of polyplacophora holds that they formed when an aberrant monoplacophoran was born with multiple centres of calcification, rather than the usual one. Selection quickly acted on the resultant conical shells, forming them to overlap into protective armour; their original cones are homologous to the tips of the plates of modern chitons.
The chitons evolved from multiplacophora during the Palaeozoic, with their relatively conserved modern-day body plan being fixed by the Mesozoic.
The earliest fossil evidence of aesthetes in chitons comes from around 400 Ma, during the Early Devonian.
History of scientific investigation
Chitons were first studied by Carl Linnaeus in his 1758 10th edition of Systema Naturae. Since his description of the first four species, chitons have been variously classified. They were called Cyclobranchians ("round gills") in the early 19th century, and then grouped with the aplacophorans in the subphylum Amphineura in 1876. The class Polyplacophora was named by de Blainville in 1816.
Etymology
The name chiton is Neo-Latin derived from the Ancient Greek word khitōn, meaning tunic (which also is the source of the word chitin). The Ancient Greek word khitōn can be traced to the Central Semitic word *kittan, which is from the Akkadian words kitû or kita'um, meaning flax or linen, and originally the Sumerian word gada or gida.
The Greek-derived name Polyplacophora comes from the words poly- (many), plako- (tablet), and -phoros (bearing), a reference to the chiton's eight shell plates.
Taxonomy
Most classification schemes in use today are based, at least in part, on Pilsbry's Manual of Conchology (1892–1894), extended and revised by Kaas and Van Belle (1985–1990).
Since chitons were first described by Linnaeus (1758), extensive taxonomic studies at the species level have been made. However, the taxonomic classification at higher levels in the group has remained somewhat unsettled.
The most recent classification, by Sirenko (2006), is based not only on shell morphology, as usual, but also other important features, including aesthetes, girdle, radula, gills, glands, egg hull projections, and spermatozoids. It includes all the living and extinct genera of chitons.
Further resolution within the Chitonida has been recovered through molecular analysis.
This system is now generally accepted.
Class Polyplacophora de Blainville, 1816
Subclass Paleoloricata Bergenhayn, 1955
Order Chelodida Bergenhayn, 1943
Family Chelodidae Bergenhayn, 1943
Chelodes Davidson & King, 1874
Euchelodes Marek, 1962
Calceochiton Flower, 1968
Order Septemchitonida Bergenhayn, 1955
Family Gotlandochitonidae Bergenhayn, 1955
Gotlandochiton Bergenhayn, 1955
Family Helminthochitonidae Van Belle, 1975
Kindbladochiton Van Belle, 1975
Diadelochiton Hoare, 2000
Helminthochiton Salter in Griffith & M'Coy, 1846
Echinochiton Pojeta, Eernisse, Hoare & Henderson, 2003
Family Septemchitonidae Bergenhayn, 1955
Septemchiton Bergenhayn, 1955
Paleochiton A. G. Smith, 1964
Thairoplax Cherns, 1998
Subclass Loricata Shumacher, 1817
Order Lepidopleurida Thiele, 1910
Suborder Cymatochitonina Sirenko & Starobogatov, 1977
Family Acutichitonidae Hoare, Mapes & Atwater, 1983
Acutichiton Hoare, Sturgeon & Hoare, 1972
Elachychiton Hoare, Sturgeon & Hoare, 1972
Harpidochiton Hoare & Cook, 2000
Arcochiton Hoare, Sturgeon & Hoare, 1972
Kraterochiton Hoare, 2000
Soleachiton Hoare, Sturgeon & Hoare, 1972
Asketochiton Hoare & Sabattini, 2000
Family †Cymatochitonidae Sirenko & Starobogatov, 1977
Cymatochiton Dall, 1882
Compsochiton Hoare & Cook, 2000
Family Gryphochitonidae Pilsbry, 1900
Gryphochiton Gray, 1847
Family Lekiskochitonidae Smith & Hoare, 1987
Lekiskochiton Hoare & Smith, 1984
Family Permochitonidae Sirenko & Starobogatov, 1977
Permochiton Iredale & Hull, 1926
Suborder Lepidopleurina Thiele, 1910
Family Abyssochitonidae (synonym: Ferreiraellidae) Dell' Angelo & Palazzi, 1991
Glaphurochiton Raymond, 1910
?Pyknochiton Hoare, 2000
?Hadrochiton Hoare, 2000
Ferreiraella Sirenko, 1988
Family Glyptochitonidae Starobogatov & Sirenko, 1975
Glyptochiton de Koninck, 1883
Family Leptochitonidae Dall, 1889
Colapterochiton Hoare & Mapes, 1985
Coryssochiton DeBrock, Hoare & Mapes, 1984
Proleptochiton Sirenko & Starobogatov, 1977
Schematochiton Hoare, 2002
Pterochiton (Carpenter MS) Dall, 1882
Leptochiton Gray, 1847
Parachiton Thiele, 1909
Terenochiton Iredale, 1914
Trachypleura Jaeckel, 1900
Pseudoischnochiton Ashby, 1930
Lepidopleurus Risso, 1826
Hanleyella Sirenko, 1973
Family †Camptochitonidae Sirenko, 1997
Camptochiton DeBrock, Hoare & Mapes, 1984
Pedanochiton DeBrock, Hoare & Mapes, 1984
Euleptochiton Hoare & Mapes, 1985
Pileochiton DeBrock, Hoare & Mapes, 1984
Chauliochiton Hoare & Smith, 1984
Stegochiton Hoare & Smith, 1984
Family Nierstraszellidae Sirenko, 1992
Nierstraszella Sirenko, 1992
Family Mesochitonidae Dell' Angelo & Palazzi, 1989
Mesochiton Van Belle, 1975
Pterygochiton Rochebrune, 1883
Family Protochitonidae Ashby, 1925
Protochiton Ashby, 1925
Deshayesiella (Carpenter MS) Dall, 1879
Oldroydia Dall, 1894
Family Hanleyidae Bergenhayn, 1955
Hanleya Gray, 1857
Hemiarthrum Dall, 1876
Order Chitonida Thiele, 1910
Suborder Chitonina Thiele, 1910
Superfamily Chitonoidea Rafinesque, 1815
Family Ochmazochitonidae Hoare & Smith, 1984
Ochmazochiton Hoare & Smith, 1984
Family Ischnochitonidae Dall, 1889
Ischnochiton Gray, 1847
Stenochiton H. Adams & Angas, 1864
Stenoplax (Carpenter MS) Dall, 1879
Lepidozona Pilsbry, 1892
Stenosemus Middendorff, 1847
Subterenochiton Iredale & Hull, 1924
Thermochiton Saito & Okutani, 1990
Connexochiton Kaas, 1979
Tonicina Thiele, 1906
Family Callistoplacidae Pilsbry, 1893
Ischnoplax Dall, 1879
Callistochiton Carpenter MS, Dall, 1879
Callistoplax Dall, 1882
Ceratozona Dall, 1882
Calloplax Thiele, 1909
Family Chaetopleuridae Plate, 1899
Chaetopleura Shuttleworth, 1853
Dinoplax Carpenter MS, Dall, 1882
Family Loricidae Iredale & Hull, 1923
Lorica H. & A. Adams, 1852
Loricella Pilsbry, 1893
Oochiton Ashby, 1929
Family Callochitonidae Plate, 1901
Callochiton Gray, 1847
Eudoxochiton Shuttleworth, 1853
Vermichiton Kaas, 1979
Family Chitonidae Rafinesque, 1815
Subfamily Chitoninae Rafinesque, 1815
Chiton Linnaeus, 1758
Amaurochiton Thiele, 1893
Radsia Gray, 1847
Sypharochiton Thiele, 1893
Nodiplax Beu, 1967
Rhyssoplax Thiele, 1893
Teguloaplax Iredale & Hull, 1926
Mucrosquama Iredale, 1893
Subfamily Toniciinae Pilsbry, 1893
Tonicia Gray, 1847
Onithochiton Gray, 1847
Subfamily Acanthopleurinae Dall, 1889
Acanthopleura Guilding, 1829
Liolophura Pilsbry, 1893
Enoplochiton Gray, 1847
Squamopleura Nierstrasz, 1905
Superfamily Schizochitonoidea Dall, 1889
Family Schizochitonidae Dall, 1889
Incissiochiton Van Belle, 1985
Schizochiton Gray, 1847
Suborder Acanthochitonina Bergenhayn, 1930
Superfamily Mopalioidea Dall, 1889
Family Tonicellidae Simroth, 1894
Subfamily Tonicellinae Simroth, 1894
Lepidochitona Gray, 1821
Particulazona Kaas, 1993
Boreochiton Sars, 1878
Tonicella Carpenter, 1873
Nuttallina (Carpenter MS) Dall, 1871
Spongioradsia Pilsbry, 1894
Oligochiton Berry, 1922
Subfamily Juvenichitoninae Sirenko, 1975
Juvenichiton Sirenko, 1975
Micichiton Sirenko, 1975
Nanichiton Sirenko, 1975
Family Schizoplacidae Bergenhayn, 1955
Schizoplax Dall, 1878
Family Mopaliidae Dall, 1889
Subfamily Heterochitoninae Van Belle, 1978
Heterochiton Fucini, 1912
Allochiton Fucini, 1912
Subfamily Mopaliinae Dall, 1889
Aerilamma Hull, 1924
Guildingia Pilsbry, 1893
Frembleya H. Adams, 1866
Diaphoroplax Iredale, 1914
Plaxiphora Gray, 1847
Placiphorina Kaas & Van Belle, 1994
Nuttallochiton Plate, 1899
Mopalia Gray, 1847
Maorichiton Iredale, 1914
Placiphorella (Carpenter MS) Dall, 1879
Katharina Gray, 1847
Amicula Gray, 1847
Superfamily Cryptoplacoidea H. & A. Adams, 1858
Family Acanthochitonidae Pilsbry, 1893
Subfamily Acanthochitoninae Pilsbry, 1893
Acanthochitona Gray, 1821
Craspedochiton Shuttleworth, 1853
Spongiochiton (Carpenter MS) Dall, 1882
Notoplax H. Adams, 1861
Pseudotonicia Ashby, 1928
Bassethullia Pilsbry, 1928
Americhiton Watters, 1990
Choneplax (Carpenter MS) Dall, 1882
Cryptoconchus (de Blainville MS) Burrow, 1815
Subfamily Cryptochitoninae Pilsbry, 1893
Cryptochiton Middendorff, 1847
Family Hemiarthridae Sirenko, 1997
Hemiarthrum Carpenter in Dall, 1876
Weedingia Kaas, 1988
Family Choriplacidae Ashby, 1928
Choriplax Pilsbry, 1894
Family Cryptoplacidae H. & A. Adams, 1858
Cryptoplax de Blainville, 1818
Incertae sedis
Family Scanochitonidae Bergenhayn, 1955
Scanochiton Bergenhayn, 1955
Family Olingechitonidae Starobogatov & Sirenko, 1977
Olingechiton Bergenhayn, 1943
Family Haeggochitonidae Sirenko & Starobogatov, 1977
Haeggochiton Bergenhayn, 1955
Family Ivoechitonidae Sirenko & Starobogatov, 1977
Ivoechiton Bergenhayn, 1955
Phylogeny
Chiton phylogeny has gone relatively underexplored compared to the more charismatic classes of molluscs, and as such is still somewhat poorly understood. The relationships between orders and superfamilies have been made clear thanks to phylogenomics, but interfamilial relationships remain largely unknown because not all families have yet been sampled.
| Biology and health sciences | Mollusks | Animals |
233668 | https://en.wikipedia.org/wiki/Figure%20of%20the%20Earth | Figure of the Earth | In geodesy, the figure of the Earth is the size and shape used to model planet Earth. The kind of figure depends on application, including the precision needed for the model. A spherical Earth is a well-known historical approximation that is satisfactory for geography, astronomy and many other purposes. Several models with greater accuracy (including ellipsoid) have been developed so that coordinate systems can serve the precise needs of navigation, surveying, cadastre, land use, and various other concerns.
Motivation
Earth's topographic surface is apparent with its variety of land forms and water areas. This topographic surface is generally the concern of topographers, hydrographers, and geophysicists. While it is the surface on which Earth measurements are made, mathematically modeling it while taking the irregularities into account would be extremely complicated.
The Pythagorean concept of a spherical Earth offers a simple surface that is easy to deal with mathematically. Many astronomical and navigational computations use a sphere to model the Earth as a close approximation. However, a more accurate figure is needed for measuring distances and areas on the scale beyond the purely local. Better approximations can be made by modeling the entire surface as an oblate spheroid, using spherical harmonics to approximate the geoid, or modeling a region with a best-fit reference ellipsoid.
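A minimal illustration of the spherical model in navigational use is the haversine great-circle distance; the sketch below assumes the conventional 6,371 km mean radius, and the city coordinates are illustrative values rather than data from this article:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, r_km=6371.0):
    """Great-circle distance between two points on a spherical Earth."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    # haversine of the central angle between the two points
    h = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * r_km * asin(sqrt(h))

# Example: Paris to London, roughly 344 km along the great circle
print(round(haversine_km(48.8566, 2.3522, 51.5074, -0.1278)))
```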
For surveys of small areas, a planar (flat) model of Earth's surface suffices because the local topography overwhelms the curvature. Plane-table surveys are made for relatively small areas without considering the size and shape of the entire Earth. A survey of a city, for example, might be conducted this way.
By the late 1600s, serious effort was devoted to modeling the Earth as an ellipsoid, beginning with French astronomer Jean Picard's measurement of a degree of arc along the Paris meridian. Improved maps and better measurement of distances and areas of national territories motivated these early attempts. Surveying instrumentation and techniques improved over the ensuing centuries. Models for the figure of the Earth improved in step.
In the mid- to late 20th century, research across the geosciences contributed to drastic improvements in the accuracy of the figure of the Earth. The primary utility of this improved accuracy was to provide geographical and gravitational data for the inertial guidance systems of ballistic missiles. This funding also drove the expansion of geoscientific disciplines, fostering the creation and growth of various geoscience departments at many universities. These developments benefited many civilian pursuits as well, such as weather and communication satellite control and GPS location-finding, which would be impossible without highly accurate models for the figure of the Earth.
Models
The models for the figure of the Earth vary in the way they are used, in their complexity, and in the accuracy with which they represent the size and shape of the Earth.
Sphere
The simplest model for the shape of the entire Earth is a sphere. The Earth's radius is the distance from Earth's center to its surface, about 6,371 km (3,959 mi). While "radius" normally is a characteristic of perfect spheres, the Earth deviates from spherical by only a third of a percent, sufficiently close to treat it as a sphere in many contexts and justifying the term "the radius of the Earth".
The concept of a spherical Earth dates back to around the 6th century BC, but remained a matter of philosophical speculation until the 3rd century BC. The first scientific estimation of the radius of the Earth was given by Eratosthenes about 240 BC, with estimates of the accuracy of Eratosthenes's measurement ranging from −1% to 15%.
The Earth is only approximately spherical, so no single value serves as its natural radius. Distances from points on the surface to the center range from about 6,353 km to 6,384 km. Several different ways of modeling the Earth as a sphere each yield a mean radius of 6,371 km (3,959 mi). Regardless of the model, any radius falls between the polar minimum of about 6,357 km and the equatorial maximum of about 6,378 km. The difference corresponds to the polar radius being approximately 0.3% shorter than the equatorial radius.
Ellipsoid of revolution
As theorized by Isaac Newton and Christiaan Huygens, the Earth is flattened at the poles and bulged at the equator. Thus, geodesy represents the figure of the Earth as an oblate spheroid. The oblate spheroid, or oblate ellipsoid, is an ellipsoid of revolution obtained by rotating an ellipse about its shorter axis. It is the regular geometric shape that most nearly approximates the shape of the Earth. A spheroid describing the figure of the Earth or other celestial body is called a reference ellipsoid. The reference ellipsoid for Earth is called an Earth ellipsoid.
An ellipsoid of revolution is uniquely defined by two quantities. Several conventions for expressing the two quantities are used in geodesy, but they are all equivalent to and convertible with each other:
Equatorial radius a (called the semimajor axis) and polar radius b (called the semiminor axis);
a and eccentricity e;
a and flattening f.
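For reference, the standard textbook relations connecting these conventions (not specific to any one reference ellipsoid) are:

```latex
f = \frac{a - b}{a}, \qquad
e^{2} = \frac{a^{2} - b^{2}}{a^{2}} = 2f - f^{2}, \qquad
b = a\,(1 - f)
```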
Eccentricity and flattening are different ways of expressing how squashed the ellipsoid is. When flattening appears as one of the defining quantities in geodesy, generally it is expressed by its reciprocal. For example, in the WGS 84 spheroid used by today's GPS systems, the reciprocal of the flattening is set to be exactly 298.257223563.
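As a quick numerical sketch, the remaining parameters can be derived from the two WGS 84 defining quantities; the constants below are the published WGS 84 values, and the printed digits are rounded:

```python
# WGS 84 defining constants (published values)
a = 6378137.0            # semimajor axis, metres
inv_f = 298.257223563    # reciprocal flattening

f = 1.0 / inv_f                # flattening
b = a * (1.0 - f)              # semiminor (polar) axis
e = (2.0 * f - f ** 2) ** 0.5  # first eccentricity

print(f"b = {b:.4f} m")        # b = 6356752.3142 m
print(f"e = {e:.10f}")         # e = 0.0818191908
```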
The difference between a sphere and a reference ellipsoid for Earth is small, only about one part in 300. Historically, flattening was computed from grade measurements. Nowadays, geodetic networks and satellite geodesy are used. In practice, many reference ellipsoids have been developed over the centuries from different surveys. The flattening value varies slightly from one reference ellipsoid to another, reflecting local conditions and whether the reference ellipsoid is intended to model the entire Earth or only some portion of it.
A sphere has a single radius of curvature, which is simply the radius of the sphere. More complex surfaces have radii of curvature that vary over the surface. The radius of curvature describes the radius of the sphere that best approximates the surface at that point. Oblate ellipsoids have a constant radius of curvature east to west along parallels, if a graticule is drawn on the surface, but varying curvature in any other direction. For an oblate ellipsoid, the polar radius of curvature, $r_p = a^2/b$, is larger than the equatorial because the pole is flattened: the flatter the surface, the larger the sphere must be to approximate it. Conversely, the ellipsoid's north–south radius of curvature at the equator, $r_e = b^2/a$, is smaller than the polar one, where a is the distance from the center of the ellipsoid to the equator (semi-major axis) and b is the distance from the center to the pole (semi-minor axis).
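Continuing the same sketch, the two extreme radii of curvature for WGS 84 (the semiminor axis value carries over from the snippet above; results rounded):

```python
a = 6378137.0        # WGS 84 semimajor axis, metres
b = 6356752.3142     # semiminor axis derived above, metres

r_p = a ** 2 / b     # polar radius of curvature, ~6399594 m
r_e = b ** 2 / a     # north-south radius of curvature at the equator, ~6335439 m

# The best-fitting sphere at the flattened pole is larger than the
# ellipsoid itself; at the equator the meridian curves most sharply.
assert r_e < b < a < r_p
```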
Non-spheroidal deviations
Triaxiality (equatorial eccentricity)
The possibility that the Earth's equator is better characterized as an ellipse rather than a circle, and therefore that the ellipsoid is triaxial, has been a matter of scientific inquiry for many years. Modern technological developments have furnished new and rapid methods for data collection and, since the launch of Sputnik 1, orbital data have been used to investigate the theory of ellipticity. More recent results indicate a difference of about 70 m between the equatorial major and minor axes of inertia, with the larger semidiameter pointing toward 15° W longitude (and, by symmetry, 180° away).
Egg or pear shape
Following work by Picard, Italian polymath Giovanni Domenico Cassini found that the length of a degree was apparently shorter north of Paris than to the south, implying the Earth to be egg-shaped (prolate). In 1498, Christopher Columbus had already suggested, on dubious grounds, that the Earth was pear-shaped, based on inconsistent shipboard readings of the angle of the North Star, which he incorrectly interpreted as varying diurnal motion.
The theory of a slightly pear-shaped Earth arose when data was received from the U.S.'s artificial satellite Vanguard 1 in 1958. It was found to vary in its long periodic orbit, with the Southern Hemisphere exhibiting higher gravitational attraction than the Northern Hemisphere. This indicated a flattening at the South Pole and a bulge of the same degree at the North Pole, with the sea level increased about at the latter. This theory implies the northern middle latitudes to be slightly flattened and the southern middle latitudes correspondingly bulged. Potential factors involved in this aberration include tides and subcrustal motion (e.g. plate tectonics).
John A. O'Keefe and co-authors are credited with the discovery that the Earth had a significant third degree zonal spherical harmonic in its gravitational field using Vanguard 1 satellite data. Based on further satellite geodesy data, Desmond King-Hele refined the estimate to a difference between north and south polar radii, owing to a "stem" rising in the North Pole and a depression in the South Pole. The polar asymmetry is about a thousand times smaller than the Earth's flattening and even smaller than its geoidal undulation in some regions.
Geoid
Modern geodesy tends to retain the ellipsoid of revolution as a reference ellipsoid and treat triaxiality and pear shape as a part of the geoid figure: they are represented by the spherical harmonic coefficients J22 and J3, respectively, corresponding to degree and order numbers 2.2 for the triaxiality and 3.0 for the pear shape.
It was stated earlier that measurements are made on the apparent or topographic surface of the Earth and it has just been explained that computations are performed on an ellipsoid. One other surface is involved in geodetic measurement: the geoid. In geodetic surveying, the computation of the geodetic coordinates of points is commonly performed on a reference ellipsoid closely approximating the size and shape of the Earth in the area of the survey. The actual measurements made on the surface of the Earth with certain instruments are however referred to the geoid. The ellipsoid is a mathematically defined regular surface with specific dimensions. The geoid, on the other hand, coincides with that surface to which the oceans would conform over the entire Earth if free to adjust to the combined effect of the Earth's mass attraction (gravitation) and the centrifugal force of the Earth's rotation. As a result of the uneven distribution of the Earth's mass, the geoidal surface is irregular and, since the ellipsoid is a regular surface, the separations between the two, referred to as geoid undulations, geoid heights, or geoid separations, will be irregular as well.
The geoid is a surface along which the gravity potential is equal everywhere and to which the direction of gravity is always perpendicular. The latter is particularly important because optical instruments containing gravity-reference leveling devices are commonly used to make geodetic measurements. When properly adjusted, the vertical axis of the instrument coincides with the direction of gravity and is, therefore, perpendicular to the geoid. The angle between the plumb line which is perpendicular to the geoid (sometimes called "the vertical") and the perpendicular to the ellipsoid (sometimes called "the ellipsoidal normal") is defined as the deflection of the vertical. It has two components: an east–west and a north–south component.
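Writing the two components with the conventional geodesy symbols ξ (north–south) and η (east–west), a notation assumed here rather than introduced by the text, the magnitude of the total deflection is:

```latex
\theta = \sqrt{\xi^{2} + \eta^{2}}
```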
Local approximations
Simpler local approximations are possible.
Local tangent plane
The local tangent plane is appropriate for analysis across small distances.
Osculating sphere
The best local spherical approximation to the ellipsoid in the vicinity of a given point is the Earth's osculating sphere. Its radius equals Earth's Gaussian radius of curvature, and its radial direction coincides with the geodetic normal direction. The center of the osculating sphere is offset from the center of the ellipsoid, but is at the center of curvature for the given point on the ellipsoid surface. This concept aids the interpretation of terrestrial and planetary radio occultation refraction measurements and in some navigation and surveillance applications.
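A minimal sketch of the osculating-sphere radius, using the textbook meridional and prime-vertical radii of curvature, whose geometric mean is the Gaussian radius of curvature; the 45° latitude is an arbitrary example, and the WGS 84 constants are assumed:

```python
from math import radians, sin, sqrt

def gaussian_radius(lat_deg, a=6378137.0, inv_f=298.257223563):
    """Radius of the osculating sphere (Gaussian radius of curvature)
    at a given geodetic latitude on an ellipsoid of revolution."""
    f = 1.0 / inv_f
    e2 = 2.0 * f - f ** 2                   # first eccentricity squared
    w2 = 1.0 - e2 * sin(radians(lat_deg)) ** 2
    m = a * (1.0 - e2) / w2 ** 1.5          # meridional radius of curvature
    n = a / sqrt(w2)                        # prime-vertical radius of curvature
    return sqrt(m * n)                      # geometric mean of the two

print(round(gaussian_radius(45.0)))         # ~6378101 m at 45 degrees latitude
```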
Earth rotation and Earth's interior
Determining the exact figure of the Earth is not only a geometric task of geodesy, but also has geophysical considerations. According to theoretical arguments by Newton, Leonhard Euler, and others, a body having a uniform density of 5,515 kg/m³ that rotates like the Earth should have a flattening of 1:229. This can be concluded without any information about the composition of Earth's interior. However, the measured flattening is 1:298.25, which is closer to a sphere and a strong argument that Earth's core is extremely compact. Therefore, the density must be a function of the depth, ranging from about 2,600 kg/m³ at the surface (rock density of granite, etc.) up to 13,000 kg/m³ within the inner core.
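The 1:229 figure can be roughly reproduced with the classical first-order result for a homogeneous rotating body in hydrostatic equilibrium, f ≈ (5/4)q with q = ω²a³/GM; the sketch below uses standard modern constants, so it lands near 1:231 rather than the exact historical value:

```python
omega = 7.2921159e-5    # Earth's rotation rate, rad/s
a = 6378137.0           # equatorial radius, m
GM = 3.986004418e14     # geocentric gravitational constant, m^3/s^2

q = omega ** 2 * a ** 3 / GM   # centrifugal-to-gravitational ratio, ~1/289
f_homogeneous = 1.25 * q       # first-order flattening of a uniform body

print(f"1/f = {1 / f_homogeneous:.0f}")   # ~231, near the 1:229 quoted above
```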
Global and regional gravity field
Also with implications for the physical exploration of the Earth's interior is the gravitational field, which is the net effect of gravitation (due to mass attraction) and centrifugal force (due to rotation). It can be measured very accurately at the surface and remotely by satellites. True vertical generally does not correspond to theoretical vertical (deflection ranges up to 50") because topography and all geological masses disturb the gravitational field. Therefore, the gross structure of the Earth's crust and mantle can be determined by geodetic-geophysical models of the subsurface.
| Physical sciences | Earth science basics: General | Earth science |
234098 | https://en.wikipedia.org/wiki/Herbaceous%20plant | Herbaceous plant | Herbaceous plants are vascular plants that have no persistent woody stems above ground. This broad category of plants includes many perennials, and nearly all annuals and biennials.
Definitions of "herb" and "herbaceous"
The fourth edition of the Shorter Oxford English Dictionary defines "herb" as:
"A plant whose stem does not become woody and persistent (as in a tree or shrub) but remains soft and succulent, and dies (completely or down to the root) after flowering";
"A (freq. aromatic) plant used for flavouring or scent, in medicine, etc.". (See: Herb)
The same dictionary defines "herbaceous" as:
"Of the nature of a herb; esp. not forming a woody stem but dying down to the root each year";
"BOTANY Resembling a leaf in colour or texture. Opp. scarious".
Botanical sources differ from each other on the definition of "herb". For instance, the Hunt Institute for Botanical Documentation includes the condition "when persisting over more than one growing season, the parts of the shoot dying back seasonally". Some orchids, such as species of Phalaenopsis, are described in some sources (including the authoritative Plants of the World Online) as "herbs" but with "leaves persistent or sometimes deciduous". In the glossary of Flora of the Sydney Region, Roger Charles Carolin defines "herb" as a "plant that does not produce a woody stem", and the adjective "herbaceous" as meaning "herb-like, referring to parts of the plant that are green and soft in texture".
Description
Herbaceous plants include graminoids, forbs, and ferns. Forbs are generally defined as herbaceous broad-leafed plants, while graminoids are plants with grass-like appearance including true grasses, sedges, and rushes.
Herbaceous plants are most often low-growing, unlike woody plants such as trees and shrubs: they tend to have soft green stems that lack lignification, and their above-ground growth is ephemeral and often seasonal in duration. By contrast, non-herbaceous vascular plants are woody plants with above-ground stems that remain alive, even during any dormant season, and that grow shoots the next year from the above-ground parts – these include trees, shrubs, vines and woody bamboos. Banana plants are also regarded as herbaceous plants because the stem does not contain true woody tissue.
Some herbaceous plants can grow rather large, such as the genus Musa, to which the banana belongs.
Habit and habitat
Some relatively fast-growing herbaceous plants (especially annuals) are pioneers, or early-successional species. Others form the main vegetation of many stable habitats, occurring for example in the ground layer of forests, or in naturally open habitats such as meadow, salt marsh or desert. Some habitats, such as grasslands, prairies and savannas, are dominated by herbaceous plants, as are aquatic environments such as ponds, streams and lakes.
The age of some herbaceous perennial plants can be determined by herbchronology, the analysis of annual growth rings in the secondary root xylem.
Herbaceous plants do not produce perennializing above-ground structures using lignin, which is a complex phenolic polymer deposited in the secondary cell wall of all vascular plants. The development of lignin during vascular plant evolution provided mechanical strength, rigidity, and hydrophobicity to secondary cell walls creating a woody stem, allowing plants to grow tall and transport water and nutrients over longer distances within the plant body. Because it takes more time and more resources (nutrients and water) to produce persistently living lignified woody stems, woody plants, most of which are perennials with a longer life cycle, are not able to colonize open and dry ground as rapidly as herbs.
The surface of herbs catalyses the formation of dew, which in arid climates and seasons is the main type of precipitation and is necessary for the survival of vegetation; in arid areas, herbaceous plants thus both generate precipitation and form the basis of the ecosystem. Most of the water vapor that turns into dew comes from the air, not from the soil or clouds. The taller the herb, the more dew it produces (although surface area is the main factor), so cutting herbs short makes watering necessary. For example, frequent close mowing of grass without watering in an arid zone leads to desertification.
Types of herbaceous plants
Most herbaceous plants have a perennial (85%) life cycle but some are annual (15%) or biennial (<1%). Annual plants die completely at the end of the growing season or when they have flowered and fruited, and then new plants grow from seed. Herbaceous perennial and biennial plants may have stems that die at the end of the growing season, but parts of the plant survive under or close to the ground from season to season (for biennials, until the next growing season, when they grow and flower again, then die).
New growth can also develop from living tissues remaining on or under the ground, including roots, a caudex (a thickened portion of the stem at ground level) or various types of underground stems, such as bulbs, corms, stolons, rhizomes and tubers. Examples of herbaceous biennials include carrot, parsnip and common ragwort; herbaceous perennials include potato, peony, hosta, mint, most ferns and most grasses.
| Biology and health sciences | Plant anatomy and morphology: General | Biology |
234101 | https://en.wikipedia.org/wiki/Thigh | Thigh | In anatomy, the thigh is the area between the hip (pelvis) and the knee. Anatomically, it is part of the lower limb.
The single bone in the thigh is called the femur. This bone is very thick and strong (due to the high proportion of bone tissue), and forms a ball and socket joint at the hip, and a modified hinge joint at the knee.
Structure
Bones
The femur is the only bone in the thigh and serves as an attachment site for all thigh muscles. The head of the femur articulates with the acetabulum in the pelvic bone forming the hip joint, while the distal part of the femur articulates with the tibia and patella forming the knee. By most measures, the femur is the strongest and longest bone in the body.
The femur is categorised as a long bone and comprises a diaphysis, the shaft (or body) and two epiphyses, the lower extremity and the upper extremity of femur, that articulate with adjacent bones in the hip and knee.
Muscular compartments
In cross-section, the thigh is divided up into three separate compartments, divided by fascia, each containing muscles. These compartments use the femur as an axis and are separated by tough connective tissue membranes (or septa). Each of these compartments has its own blood and nerve supply, and contains a different group of muscles.
Medial fascial compartment of thigh, adductor
Posterior fascial compartment of thigh, flexion, hamstring
Anterior fascial compartment of thigh, extension
Anterior compartment muscles of the thigh include sartorius, and the four muscles that comprise the quadriceps muscles – rectus femoris, vastus medialis, vastus intermedius and vastus lateralis.
Posterior compartment muscles of the thigh are the hamstring muscles, which include semimembranosus, semitendinosus, and biceps femoris.
Medial compartment muscles are pectineus, adductor magnus, adductor longus and adductor brevis, and also gracilis.
Because the major muscles of the thigh are the largest muscles of the body, resistance exercises (strength training) of them stimulate blood flow more than any other localized activity.
Blood supply
The arterial supply is by the femoral artery and the obturator artery. The lymphatic drainage closely follows the arterial supply and drains to the lumbar lymphatic trunks on the corresponding side, which in turn drains to the cisterna chyli.
The deep venous system of the thigh consists of the femoral vein, common femoral vein, deep femoral vein, the proximal part of the popliteal vein, and various smaller vessels; these are the site of proximal deep vein thrombosis. The perforating veins connect the deep and the superficial system, which consists of the small and great saphenous veins (the site of varicose veins).
Clinical significance
Thigh weakness can result in a positive Gowers' sign on physical examination.
Thigh injury resulting from sports, whether acute or from overuse, can mean significant incapacity to perform. Soft tissue injury can encompass sprains, strains, bruising and tendinitis.
Runner's knee (patellofemoral pain) is a direct consequence of the kneecap rubbing against the end of the thigh bone (femur). Tight hamstrings and weak thigh muscles, both needed to stabilize the knee, increase the risk of developing runner's knee.
Society and culture
Western societies generally tolerate clothing that displays thighs, such as short shorts and miniskirts. Beachwear and many athleisure styles often display thighs as well. Professional dress codes may require covering up bare thighs.
Many Islamic countries disapprove of or prohibit the display of thighs, especially by women.
Strategic covering or display of thighs is used in popular fashion around the world, such as thigh-high boots and zettai ryoiki.
| Biology and health sciences | Human anatomy | Health |
234143 | https://en.wikipedia.org/wiki/Seagrass | Seagrass | Seagrasses are the only flowering plants which grow in marine environments. There are about 60 species of fully marine seagrasses which belong to four families (Posidoniaceae, Zosteraceae, Hydrocharitaceae and Cymodoceaceae), all in the order Alismatales (in the clade of monocotyledons). Seagrasses evolved from terrestrial plants which recolonised the ocean 70 to 100 million years ago.
The name seagrass stems from the many species with long and narrow leaves, which grow by rhizome extension and often spread across large "meadows" resembling grassland; many species superficially resemble terrestrial grasses of the family Poaceae.
Like all autotrophic plants, seagrasses photosynthesize in the submerged photic zone, and most occur in shallow and sheltered coastal waters anchored in sand or mud bottoms. Most species undergo submarine pollination and complete their life cycle underwater. While it was previously believed this pollination was carried out without pollinators and purely by sea current drift, this has been shown to be false for at least one species, Thalassia testudinum, which carries out a mixed biotic-abiotic strategy. Crustaceans (such as crabs, Majidae zoae, Thalassinidea zoea) and syllid polychaete worm larvae have both been found carrying pollen grains; instead of offering nectar as terrestrial flowers do, the plant produces nutritious, mucilage-rich clumps of pollen that attract these animals and stick to them.
Seagrasses form dense underwater seagrass meadows which are among the most productive ecosystems in the world. They function as important carbon sinks and provide habitats and food for a diversity of marine life comparable to that of coral reefs.
Overview
Seagrasses are a paraphyletic group of marine angiosperms which evolved in parallel three to four times from land plants back to the sea. The following characteristics can be used to define a seagrass species:
It lives in an estuarine or in the marine environment, and nowhere else.
The pollination takes place underwater with specialized pollen.
The seeds which are dispersed by both biotic and abiotic agents are produced underwater.
The seagrass species have specialized leaves with a reduced cuticle, an epidermis which lacks stomata and is the main photosynthetic tissue.
The rhizome or underground stem is important in anchoring.
The roots can live in an anoxic environment and depend on oxygen transport from the leaves and rhizomes but are also important in the nutrient transfer processes.
Seagrasses profoundly influence the physical, chemical, and biological environments of coastal waters. Though seagrasses provide invaluable ecosystem services by acting as breeding and nursery grounds for a variety of organisms and promoting commercial fisheries, many aspects of their physiology are not well investigated. There are 26 species of seagrasses in North American coastal waters. Several studies have indicated that seagrass habitat is declining worldwide. Ten seagrass species are at elevated risk of extinction (14% of all seagrass species), with three species qualifying as endangered. Seagrass loss and degradation of seagrass biodiversity will have serious repercussions for marine biodiversity and the human population that depends upon the resources and ecosystem services that seagrasses provide.
Seagrasses form important coastal ecosystems. The worldwide endangering of these sea meadows, which provide food and habitat for many marine species, prompts the need for protection and understanding of these valuable resources.
Evolution
Around 140 million years ago, seagrasses evolved from early monocots which succeeded in conquering the marine environment. Monocots are grass and grass-like flowering plants (angiosperms), the seeds of which typically contain only one embryonic leaf or cotyledon.
Terrestrial plants evolved perhaps as early as 450 million years ago from a group of green algae. Seagrasses then evolved from terrestrial plants which migrated back into the ocean. Between about 70 million and 100 million years ago, three independent seagrass lineages (Hydrocharitaceae, Cymodoceaceae complex, and Zosteraceae) evolved from a single lineage of the monocotyledonous flowering plants.
Other plants that colonised the sea, such as salt marsh plants, mangroves, and marine algae, have more diverse evolutionary lineages. In spite of their low species diversity, seagrasses have succeeded in colonising the continental shelves of all continents except Antarctica.
Recent sequencing of the genomes of Zostera marina and Zostera muelleri has given a better understanding of angiosperm adaptation to the sea. During the evolutionary step back to the ocean, different genes have been lost (e.g., stomatal genes) or have been reduced (e.g., genes involved in the synthesis of terpenoids) and others have been regained, such as in genes involved in sulfation.
Genome information has shown further that adaptation to the marine habitat was accomplished by radical changes in cell wall composition. However, the cell walls of seagrasses are not well understood. In addition to the ancestral traits of land plants, one would expect a habitat-driven adaptation process to the new environment, characterized by multiple abiotic (high amounts of salt) and biotic (different seagrass grazers and bacterial colonization) stressors. The cell walls of seagrasses seem to be intricate combinations of features known from both angiosperm land plants and marine macroalgae, together with new structural elements.
Taxonomy
Today, seagrasses are a polyphyletic group of marine angiosperms with around 60 species in five families (Zosteraceae, Hydrocharitaceae, Posidoniaceae, Cymodoceaceae, and Ruppiaceae), which belong to the order Alismatales according to the Angiosperm Phylogeny Group IV System. The genus Ruppia, which occurs in brackish water, is not regarded as a "real" seagrass by all authors and has been shifted to the Cymodoceaceae by some authors. The APG IV system and The Plant List Webpage do not share this family assignment.
Sexual recruitment
Seagrass populations are currently threatened by a variety of anthropogenic stressors. The ability of seagrasses to cope with environmental perturbations depends, to some extent, on genetic variability, which is obtained through sexual recruitment. By forming new individuals, seagrasses increase their genetic diversity and thus their ability to colonise new areas and to adapt to environmental changes.
Seagrasses have contrasting colonisation strategies. Some seagrasses form seed banks of small seeds with hard pericarps that can remain in the dormancy stage for several months. These seagrasses are generally short-lived and can recover quickly from disturbances by not germinating far away from parent meadows (e.g., Halophila sp., Halodule sp., Cymodocea sp., Zostera sp. and Heterozostera sp.). In contrast, other seagrasses form dispersal propagules. This strategy is typical of long-lived seagrasses that can form buoyant fruits with inner large non-dormant seeds, such as the genera Posidonia sp., Enhalus sp. and Thalassia sp. Accordingly, the seeds of long-lived seagrasses have a large dispersal capacity compared to the seeds of the short-lived type, which permits the species to establish seedlings well beyond the parent meadows, including beyond areas with unfavourable light conditions.
The seagrass Posidonia oceanica (L.) Delile is one of the oldest and largest species on Earth. An individual can form meadows measuring nearly 15 km wide and can be hundreds to thousands of years old. P. oceanica meadows play important roles in the maintenance of the geomorphology of Mediterranean coasts, which, among others, makes this seagrass a priority habitat of conservation. Currently, the flowering and recruitment of P. oceanica seems to be more frequent than that expected in the past. Further, this seagrass has singular adaptations to increase its survival during recruitment. The large amounts of nutrient reserves contained in the seeds of this seagrass support shoot and root growth, even up to the first year of seedling development. In the first months of germination, when leaf development is scarce, P. oceanica seeds perform photosynthetic activity, which increases their photosynthetic rates and thus maximises seedling establishment success. Seedlings also show high morphological plasticity during their root system development by forming adhesive root hairs to help anchor themselves to rocky sediments. However, many factors about P. oceanica sexual recruitment remain unknown, such as when photosynthesis in seeds is active or how seeds can remain anchored to and persist on substrate until their root systems have completely developed.
Intertidal and subtidal
Seagrasses occurring in the intertidal and subtidal zones are exposed to highly variable environmental conditions due to tidal changes. Subtidal seagrasses are more frequently exposed to lower light conditions, driven by a plethora of natural and human-caused influences that reduce light penetration by increasing the density of suspended opaque materials. Subtidal light conditions can be estimated, with high accuracy, using artificial intelligence, enabling more rapid mitigation than was available using in situ techniques. Seagrasses in the intertidal zone are regularly exposed to air and consequently experience extreme high and low temperatures, high photoinhibitory irradiance, and desiccation stress relative to subtidal seagrass. Such extreme temperatures can lead to significant seagrass dieback when seagrasses are exposed to air during low tide. Desiccation stress during low tide has been considered the primary factor limiting seagrass distribution at the upper intertidal zone. Seagrasses residing in the intertidal zone are usually smaller than those in the subtidal zone to minimize the effects of emergence stress. Intertidal seagrasses also show light-dependent responses, such as decreased photosynthetic efficiency and increased photoprotection during periods of high irradiance and air exposure.
In contrast, seagrasses in the subtidal zone adapt to reduced light conditions caused by light attenuation and scattering due to the overlaying water column and suspended particles. Seagrasses in the deep subtidal zone generally have longer leaves and wider leaf blades than those in the shallow subtidal or intertidal zone, which allows more photosynthesis, in turn resulting in greater growth. Seagrasses also respond to reduced light conditions by increasing chlorophyll content and decreasing the chlorophyll a/b ratio to enhance light absorption efficiency by using the abundant wavelengths efficiently. As seagrasses in the intertidal and subtidal zones are under highly different light conditions, they exhibit distinctly different photoacclimatory responses to maximize photosynthetic activity and photoprotection from excess irradiance.
Seagrasses assimilate large amounts of inorganic carbon to achieve high levels of production. Marine macrophytes, including seagrass, use both CO2 and HCO3− (bicarbonate) for photosynthetic carbon reduction. Despite air exposure during low tide, seagrasses in the intertidal zone can continue to photosynthesize utilizing CO2 in the air. Thus, the composition of inorganic carbon sources for seagrass photosynthesis probably varies between intertidal and subtidal plants. Because stable carbon isotope ratios of plant tissues change based on the inorganic carbon sources for photosynthesis, seagrasses in the intertidal and subtidal zones may have different stable carbon isotope ratio ranges.
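For reference, such stable carbon isotope ratios are conventionally reported in delta notation, the standard definition rather than anything specific to seagrass work:

```latex
\delta^{13}\mathrm{C} =
\left(
  \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}
       {\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}}
  - 1
\right) \times 1000\ \text{‰}
```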
Seagrass meadows
Seagrass beds/meadows can be either monospecific (made up of a single species) or in mixed beds. In temperate areas, usually one or a few species dominate (like the eelgrass Zostera marina in the North Atlantic), whereas tropical beds usually are more diverse, with up to thirteen species recorded in the Philippines.
Seagrass beds are diverse and productive ecosystems, and can harbor hundreds of associated species from all phyla, for example juvenile and adult fish, epiphytic and free-living macroalgae and microalgae, mollusks, bristle worms, and nematodes. Few species were originally considered to feed directly on seagrass leaves (partly because of their low nutritional content), but scientific reviews and improved working methods have shown that seagrass herbivory is an important link in the food chain, feeding hundreds of species, including green turtles, dugongs, manatees, fish, geese, swans, sea urchins and crabs. Some fish species that visit/feed on seagrasses raise their young in adjacent mangroves or coral reefs.
Seagrasses trap sediment and slow down water movement, causing suspended sediment to settle out. Trapping sediment benefits coral by reducing sediment loads, improving photosynthesis for both coral and seagrass.
Although often overlooked, seagrasses provide a number of ecosystem services. Seagrasses are considered ecosystem engineers. This means that the plants alter the ecosystem around them. This adjusting occurs in both physical and chemical forms. Many seagrass species produce an extensive underground network of roots and rhizomes which stabilizes sediment and reduces coastal erosion. This system also assists in oxygenating the sediment, providing a hospitable environment for sediment-dwelling organisms. Seagrasses also enhance water quality by stabilizing heavy metals, pollutants, and excess nutrients. The long blades of seagrasses slow the movement of water, which reduces wave energy and offers further protection against coastal erosion and storm surge. Furthermore, because seagrasses are underwater plants, they produce significant amounts of oxygen which oxygenate the water column. These meadows account for more than 10% of the ocean's total carbon storage. Per hectare, seagrass meadows hold twice as much carbon dioxide as rain forests, and globally they can sequester about 27.4 million tons of CO2 annually.
Seagrass meadows provide food for many marine herbivores. Sea turtles, manatees, parrotfish, surgeonfish, sea urchins and pinfish feed on seagrasses. Many other smaller animals feed on the epiphytes and invertebrates that live on and among seagrass blades. Seagrass meadows also provide physical habitat in areas that would otherwise be bare of any vegetation. Due to this three-dimensional structure in the water column, many species occupy seagrass habitats for shelter and foraging. It is estimated that 17 species of coral reef fish spend their entire juvenile life stage solely on seagrass flats. These habitats also act as nursery grounds for commercially and recreationally valued fishery species, including the gag grouper (Mycteroperca microlepis), red drum, common snook, and many others. Some fish species utilize seagrass meadows at various stages of the life cycle. In a recent publication, Dr. Ross Boucek and colleagues discovered that two highly sought-after flats fish, the common snook and spotted sea trout, rely on seagrass meadows as essential foraging habitat during reproduction. Sexual reproduction is extremely energetically expensive to complete with stored energy; therefore, these fish require seagrass meadows in close proximity to complete reproduction. Furthermore, many commercially important invertebrates also reside in seagrass habitats, including bay scallops (Argopecten irradians), horseshoe crabs, and shrimp. Charismatic fauna can also be seen visiting the seagrass habitats. These species include the West Indian manatee, green sea turtles, and various species of sharks. The high diversity of marine organisms found in seagrass habitats makes them a tourist attraction and a significant source of income for many coastal economies along the Gulf of Mexico and in the Caribbean.
Seagrass microbiome
Seagrass holobiont
The concept of the holobiont, which emphasizes the importance and interactions of a microbial host with associated microorganisms and viruses and describes their functioning as a single biological unit, has been investigated and discussed for many model systems, although there is substantial criticism of a concept that defines diverse host-microbe symbioses as a single biological unit. The holobiont and hologenome concepts have evolved since the original definition, and there is no doubt that symbiotic microorganisms are pivotal for the biology and ecology of the host by providing vitamins, energy and inorganic or organic nutrients, participating in defense mechanisms, or by driving the evolution of the host.
Although most work on host-microbe interactions has been focused on animal systems such as corals, sponges, or humans, there is a substantial body of literature on plant holobionts. Plant-associated microbial communities impact both key components of the fitness of plants, growth and survival, and are shaped by nutrient availability and plant defense mechanisms. Several habitats have been described to harbor plant-associated microbes, including the rhizoplane (surface of root tissue), the rhizosphere (periphery of the roots), the endosphere (inside plant tissue), and the phyllosphere (total above-ground surface area). The microbial community in the P. oceanica rhizosphere shows similar complexity as terrestrial habitats that contain thousands of taxa per gram of soil. In contrast, the chemistry in the rhizosphere of P. oceanica was dominated by the presence of sugars like sucrose and phenolics.
Cell walls
Seagrass cell walls contain the same polysaccharides found in angiosperm land plants, such as cellulose. However, the cell walls of some seagrasses are characterised by sulfated polysaccharides, which are a common attribute of macroalgae from the groups of red, brown and also green algae. It was proposed in 2005 that the ability to synthesise sulfated polysaccharides was regained by marine angiosperms. Another unique feature of seagrass cell walls is the occurrence of unusual pectic polysaccharides called apiogalacturonans.
In addition to polysaccharides, glycoproteins of the hydroxyproline-rich glycoprotein family are important components of cell walls of land plants. The highly glycosylated arabinogalactan proteins are of interest because of their involvement in both wall architecture and cellular regulatory processes. Arabinogalactan proteins are ubiquitous in seed land plants and have also been found in ferns, lycophytes and mosses. They are structurally characterised by large polysaccharide moieties composed of arabinogalactans (normally over 90% of the molecule) which are covalently linked via hydroxyproline to relatively small protein/peptide backbones (normally less than 10% of the molecule). Distinct glycan modifications have been identified in different species and tissues, and it has been suggested that these influence physical properties and function. In 2020, arabinogalactan proteins (AGPs) were isolated and structurally characterised for the first time from a seagrass. Although the common backbone structure of land plant arabinogalactan proteins is conserved, the glycan structures exhibit unique features suggesting a role of seagrass arabinogalactan proteins in osmoregulation.
Further components of secondary walls of plants are cross-linked phenolic polymers called lignin, which are responsible for mechanical strengthening of the wall. In seagrasses, this polymer has also been detected, but often in lower amounts compared to angiosperm land plants. Thus, the cell walls of seagrasses seem to contain combinations of features known from both angiosperm land plants and marine macroalgae together with new structural elements. Dried seagrass leaves might be useful for papermaking or as insulating materials, so knowledge of cell wall composition has some technological relevance.
Threats and conservation
Despite covering only 0.1–0.2% of the ocean’s surface, seagrasses form critically important ecosystems. Much like many other regions of the ocean, seagrasses have faced an accelerating global decline. Since the late 19th century, over 20% of the global seagrass area has been lost, with seagrass beds disappearing at a rate of 1.5% each year. Of the 72 global seagrass species, approximately one quarter (15 species) could be considered Threatened or Near Threatened on the IUCN’s Red List of Threatened Species. Threats include a combination of natural factors, such as storms and disease, and factors anthropogenic in origin, including habitat destruction, pollution, and climate change.
By far the most common threat to seagrass is human activity. Up to 67 species (93%) of seagrasses are affected by human activity along coastal regions. Activities such as coastal land development, motorboating, and fishing practices like trawling either physically destroy seagrass beds or increase turbidity in the water, causing seagrass die-off. Since seagrasses have some of the highest light requirements of angiosperm plant species, they are highly affected by environmental conditions that change water clarity and block light.
Seagrasses are also negatively affected by changing global climatic conditions. More frequent extreme weather events, sea level rise, and higher temperatures as a result of global warming all have the potential to induce widespread seagrass loss. An additional threat to seagrass beds is the introduction of non-native species. Worldwide, at least 28 non-native species have become established in seagrass beds. Of these invasive species, the majority (64%) have been documented to have negative effects on the ecosystem.
Another major cause of seagrass disappearance is coastal eutrophication. Rapidly developing human population density along coastlines has led to high nutrient loads in coastal waters from sewage and other impacts of development. Increased nutrient loads create an accelerating cascade of direct and indirect effects that lead to seagrass decline. While some exposure to high concentrations of nutrients, especially nitrogen and phosphorus, can result in increased seagrass productivity, high nutrient levels can also stimulate the rapid overgrowth of macroalgae and epiphytes in shallow water, and phytoplankton in deeper water. In response to high nutrient levels, macroalgae form dense canopies on the surface of the water, limiting the light able to reach the benthic seagrasses. Algal blooms caused by eutrophication also lead to hypoxic conditions, which seagrasses are also highly susceptible to. Since coastal sediment is generally anoxic, seagrass must supply oxygen to their below-ground roots either through photosynthesis or by the diffusion of oxygen in the water column. When the water surrounding seagrass becomes hypoxic, so too do seagrass tissues. Hypoxic conditions negatively affect seagrass growth and survival with seagrasses exposed to hypoxic conditions shown to have reduced rates of photosynthesis, increased respiration, and smaller growth. Hypoxic conditions can eventually lead to seagrass die-off which creates a positive feedback cycle, where the decomposition of organic matter further decreases the amount of oxygen present in the water column.
Possible seagrass population trajectories have been studied in the Mediterranean sea. These studies suggest that the presence of seagrass depends on physical factors such as temperature, salinity, depth and turbidity, along with natural phenomena like climate change and anthropogenic pressure. While there are exceptions, regression was a general trend in many areas of the Mediterranean Sea. There is an estimated 27.7% reduction along the southern coast of Latium, 18%-38% reduction in the Northern Mediterranean basin, 19%-30% reduction on Ligurian coasts since the 1960s and 23% reduction in France in the past 50 years. In Spain the main reason for regression was human activity such as illegal trawling and aquaculture farming. It was found that areas with medium to high human impact suffered more severe reduction. Overall, it was suggested that 29% of known areal seagrass populations have disappeared since 1879. The reduction in these areas suggests that should warming in the Mediterranean basin continue, it may lead to a functional extinction of Posidonia oceanica in the Mediterranean by 2050. Scientists suggested that the trends they identified appear to be part of a large-scale trend worldwide.
Conservation efforts are imperative to the survival of seagrass species. While there are many challenges to overcome with respect to seagrass conservation, some major ones can be addressed. Societal awareness of what seagrasses are and their importance to human well-being is incredibly important. As the majority of people become more urbanized, they grow increasingly disconnected from the natural world. This allows for misconceptions and a lack of understanding of seagrass ecology and its importance. Additionally, it is a challenge to obtain and maintain information on the status and condition of seagrass populations. With many populations across the globe, it is difficult to map the current populations. Another challenge faced in seagrass conservation is the ability to identify threatening activities on a local scale. Also, with an ever-growing human population, there is a need to balance the needs of people with the needs of the planet. Lastly, it is challenging to generate scientific research to support conservation of seagrass. Limited efforts and resources are dedicated to the study of seagrasses. This is seen in areas such as India and China, where there is little to no plan in place to conserve seagrass populations. However, the conservation and restoration of seagrass may contribute to 16 of the 17 UN Sustainable Development Goals.
In a study of seagrass conservation in China, several suggestions were made by scientists on how to better conserve seagrass. They suggested that seagrass beds should be included in the Chinese conservation agenda as done in other countries. They called for the Chinese government to forbid land reclamation in areas near or in seagrass beds, to reduce the number and size of culture ponds, to control raft aquaculture and improve sediment quality, to establish seagrass reserves, to increase awareness of seagrass beds to fishermen and policy makers and to carry out seagrass restoration. Similar suggestions were made in India where scientists suggested that public engagement was important. Also, scientists, the public, and government officials should work in tandem to integrate traditional ecological knowledge and socio-cultural practices to evolve conservation policies.
World Seagrass Day is an annual event held on March 1 to raise awareness about seagrass and its important functions in the marine ecosystem.
| Biology and health sciences | Alismatales | Plants |
234226 | https://en.wikipedia.org/wiki/Traffic%20congestion | Traffic congestion | Traffic congestion is a condition in transport that is characterized by slower speeds, longer trip times, and increased vehicular queueing. Traffic congestion on urban road networks has increased substantially since the 1950s, resulting in many of the roads becoming obsolete. When traffic demand is great enough that the interaction between vehicles slows the traffic stream, this results in congestion. While congestion is a possibility for any mode of transportation, this article will focus on automobile congestion on public roads. Mathematically, traffic is modeled as a flow through a fixed point on the route, analogously to fluid dynamics.
As demand approaches the capacity of a road (or of the intersections along the road), extreme traffic congestion sets in. When vehicles are fully stopped for periods of time, this is known as a traffic jam or (informally) a traffic snarl-up or a tailback. Drivers can become frustrated and engage in road rage. Drivers and driver-focused road planning departments commonly propose to alleviate congestion by adding another lane to the road. This is ineffective: increasing road capacity induces more demand for driving.
Causes
Traffic congestion occurs when a volume of traffic generates demand for space greater than the available street capacity; this point is commonly termed saturation. Several specific circumstances can cause or aggravate congestion; most of them reduce the capacity of a road at a given point or over a certain length, or increase the number of vehicles required for a given volume of people or goods. About half of U.S. traffic congestion is recurring, and is attributed to sheer volume of traffic; most of the rest is attributed to traffic incidents, road work and weather events. In terms of traffic operation, rainfall reduces traffic capacity and operating speeds, thereby resulting in greater congestion and road network productivity loss.
Individual incidents such as crashes or even a single car braking heavily in a previously smooth flow may cause ripple effects, a cascading failure, which then spread out and create a sustained traffic jam when, otherwise, the normal flow might have continued for some time longer.
Separation of work and residential areas
People often work and live in different parts of the city. Many workplaces are located in a central business district away from residential areas, resulting in workers commuting. According to a 2011 report published by the United States Census Bureau, a total of 132.3 million people in the United States commute between their work and residential areas daily.
Movement to obtain or provide goods and services
People may need to move about within the city to obtain goods and services, for instance to purchase goods or attend classes in a different part of the city. Brussels, a Belgian city with a strong service economy, has some of the worst traffic congestion in the world; drivers there wasted 74 hours in traffic in 2014.
Mathematical theories
Some traffic engineers have attempted to apply the rules of fluid dynamics to traffic flow, likening it to the flow of a fluid in a pipe. Congestion simulations and real-time observations have shown that in heavy but free-flowing traffic, jams can arise spontaneously, triggered by minor events ("butterfly effects"), such as an abrupt steering maneuver by a single motorist. Traffic scientists liken such a situation to the sudden freezing of supercooled fluid.
However, unlike a fluid, traffic flow is often affected by signals or other events at junctions that periodically affect the smooth flow of traffic. Alternative mathematical theories exist, such as Boris Kerner's three-phase traffic theory (see also spatiotemporal reconstruction of traffic congestion).
Because of the poor correlation of theoretical models to actual observed traffic flows, transportation planners and highway engineers attempt to forecast traffic flow using empirical models. Their working traffic models typically use a combination of macro-, micro- and mesoscopic features, and may add matrix entropy effects, by "platooning" groups of vehicles and by randomizing the flow patterns within individual segments of the network. These models are then typically calibrated by measuring actual traffic flows on the links in the network, and the baseline flows are adjusted accordingly.
A team of MIT mathematicians has developed a model that describes the formation of "phantom jams", in which small disturbances (a driver hitting the brake too hard, or getting too close to another car) in heavy traffic can become amplified into a full-blown, self-sustaining traffic jam. Key to the study is the realization that the mathematics of such jams, which the researchers call "jamitons", are strikingly similar to the equations that describe detonation waves produced by explosions, says Aslan Kasimov, lecturer in MIT's Department of Mathematics. That discovery enabled the team to solve traffic-jam equations that were first theorized in the 1950s.
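The mechanism behind such "phantom jams" can be illustrated with a model far simpler than the jamiton equations: the classic Nagel–Schreckenberg cellular automaton, in which a single stochastic slowdown rule is enough to make jams nucleate and persist in dense traffic. The sketch below is a minimal Python implementation of that model, not the MIT team's method; the parameter names v_max and p_slow and the helper nasch_step are illustrative choices.

```python
import random

def nasch_step(road, v_max=5, p_slow=0.3):
    """One update of the Nagel-Schreckenberg cellular automaton.
    road[i] is a car's speed (0..v_max) or None for an empty cell."""
    n = len(road)
    cars = [i for i, v in enumerate(road) if v is not None]
    new_road = [None] * n
    for idx, i in enumerate(cars):
        v = road[i]
        ahead = cars[(idx + 1) % len(cars)]   # next car on the ring road
        gap = (ahead - i - 1) % n             # empty cells in between
        v = min(v + 1, v_max)                 # 1. accelerate toward v_max
        v = min(v, gap)                       # 2. brake to avoid collision
        if v > 0 and random.random() < p_slow:
            v -= 1                            # 3. random slowdown ("driver noise")
        new_road[(i + v) % n] = v             # 4. advance the car
    return new_road

# Dense ring road with no bottleneck: jams still emerge spontaneously.
road = [0 if random.random() < 0.35 else None for _ in range(100)]
for _ in range(50):
    road = nasch_step(road)
print("".join("." if v is None else str(v) for v in road))
```

Printing the road after every step, rather than only at the end, shows clusters of slowed cars drifting backwards against the direction of travel, the signature of a self-sustaining phantom jam.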
Economic theories
Congested roads can be seen as an example of the tragedy of the commons. Because roads in most places are free at the point of usage, there is little financial incentive for drivers not to over-use them, up to the point where traffic collapses into a jam, when demand becomes limited by opportunity cost. Privatization of highways and road pricing have both been proposed as measures that may reduce congestion through economic incentives and disincentives. Congestion can also happen due to non-recurring highway incidents, such as a crash or roadworks, which may reduce the road's capacity below normal levels.
Economist Anthony Downs argues that rush hour traffic congestion is inevitable because of the benefits of having a relatively standard work day. In a capitalist economy, goods can be allocated either by pricing (ability to pay) or by queueing (first-come first-served); congestion is an example of the latter. Instead of the traditional solution of making the "pipe" large enough to accommodate the total demand for peak-hour vehicle travel (a supply-side solution), either by widening roadways or increasing "flow pressure" via automated highway systems, Downs advocates greater use of road pricing to reduce congestion (a demand-side solution, effectively rationing demand), in turn putting the revenues generated therefrom into public transportation projects.
A 2011 study in The American Economic Review indicates that there may be a "fundamental law of road congestion." The researchers, from the University of Toronto and the London School of Economics, analyzed data from the U.S. Highway Performance and Monitoring System for 1983, 1993 and 2003, as well as information on population, employment, geography, transit, and political factors. They determined that the number of vehicle-kilometers traveled (VKT) increases in direct proportion to the available lane-kilometers of roadways. The implication is that building new roads and widening existing ones only results in additional traffic that continues to rise until peak congestion returns to the previous level.
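Stated compactly, the finding is a unit elasticity of travel with respect to capacity. The notation below is an illustrative restatement, not the paper's own equations:

```latex
\[
  \mathrm{VKT} \propto L
  \quad\Longleftrightarrow\quad
  \varepsilon = \frac{\partial \ln \mathrm{VKT}}{\partial \ln L} \approx 1,
\]
% where L is lane-kilometres of roadway: a 10% increase in lane-kilometres
% induces roughly a 10% increase in vehicle-kilometres travelled, so peak
% congestion returns to its previous level in equilibrium.
```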
Classification
Qualitative classification of traffic is often done in the form of a six-letter A-F level of service (LOS) scale defined in the Highway Capacity Manual, a US document used (or used as a basis for national guidelines) worldwide. These levels are used by transportation engineers as a shorthand and to describe traffic levels to the lay public. While this system generally uses delay as the basis for its measurements, the particular measurements and statistical methods vary depending on the facility being described. For instance, while the percent time spent following a slower-moving vehicle figures into the LOS for a rural two-lane road, the LOS at an urban intersection incorporates such measurements as the number of drivers forced to wait through more than one signal cycle.
Traffic congestion occurs in time and space, i.e., it is a spatiotemporal process. Therefore, another classification schema of traffic congestion is associated with some common spatiotemporal features of traffic congestion found in measured traffic data. Common spatiotemporal empirical features of traffic congestion are those features which are qualitatively the same for different highways in different countries, measured over years of traffic observations. Common features of traffic congestion are independent of weather, road conditions and road infrastructure, vehicular technology, driver characteristics, time of day, etc. Examples of common features of traffic congestion are the features [J] and [S] for, respectively, the wide moving jam and synchronized flow traffic phases found in Kerner's three-phase traffic theory. The common features of traffic congestion can be reconstructed in space and time with the use of the ASDA and FOTO models.
Negative impacts
Traffic congestion has a number of negative effects:
Wasting time of motorists and passengers ("opportunity cost"). As a non-productive activity for most people, congestion reduces regional economic health.
Delays, which may result in late arrival for employment, meetings, and education, resulting in lost business, disciplinary action or other personal losses.
Inability to forecast travel time accurately, leading to drivers allocating more time to travel "just in case", and less time on productive activities.
Wasted fuel increasing air pollution and carbon dioxide emissions owing to increased idling, acceleration and braking.
Wear and tear on vehicles as a result of idling in traffic and frequent acceleration and braking, leading to more frequent repairs and replacements.
Stressed and frustrated motorists, encouraging road rage and reducing motorists' health.
Emergencies: blocked traffic may interfere with the passage of emergency vehicles traveling to their destinations where they are urgently needed.
Spillover effect from congested main arteries to secondary roads and side streets as alternative routes are attempted ('rat running'), which may affect neighborhood amenity and real estate prices.
Higher chance of collisions due to tight spacing and constant stopping-and-going.
Road rage
Road rage is aggressive or angry behavior by a driver of an automobile or other motor vehicle. Such behavior might include rude gestures, verbal insults, deliberately driving in an unsafe or threatening manner, or making threats. Road rage can lead to altercations, assaults, and collisions which result in injuries and even deaths. It can be thought of as an extreme case of aggressive driving. The term originated in the United States in 1987–1988 (specifically, from newscasters at KTLA, a local television station), when a rash of freeway shootings occurred on the 405, 110 and 10 freeways in Los Angeles, California. These shooting sprees even spawned a response from the AAA Motor Club to its members on how to respond to drivers with road rage or aggressive maneuvers and gestures.
Economic loss
Positive effects
Congestion has the benefit of encouraging motorists to retime their trips so that expensive road space is in full use for more hours per day. It may also encourage travellers to pick alternate modes with a lower environmental impact, such as public transport or bicycles.
It has been argued that traffic congestion, by reducing road speeds in cities, could reduce the frequency and severity of road crashes. More recent research suggests that a U-shaped curve exists between the number of accidents and the flow of traffic, implying that more accidents happen not only at high congestion levels, but also when there are very few vehicles on the road.
Countermeasures
Improving Road infrastructure
Increasing road capacity is the standard response to congestion, whether by widening an existing road or adding a new road, bridge or tunnel. However, this has been shown to attract more traffic, a phenomenon known as induced demand. The result can be greater congestion on the expanded artery itself or on auxiliary roads. In a similar vein, Braess's paradox shows that adding road capacity might make congestion worse, even if demand does not increase. In his paper, "The Law of Peak Hour Express Way Congestion", published in 1962, Anthony Downs formulated this phenomenon as a "law": "on urban commuter expressways, peak-hour traffic congestion rises to meet maximum capacity."
Junction improvements
Grade separation, using bridges (or, less often, tunnels) freeing movements from having to stop for other crossing movements
Ramp signaling, 'drip-feeding' merging traffic via traffic signals onto a congested motorway-type roadway
Reducing junctions
Local-express lanes, providing through lanes that bypass junction on-ramp and off-ramp zones
Limited-access road, roads that limit the type and amounts of driveways along their lengths
Reversible lanes, where certain sections of highway operate in the opposite direction at different times of the day or on different days of the week, to match asymmetric demand. These pose a potential for collisions if drivers do not notice the change in direction indicators. This may be controlled by variable-message signs or by movable physical separation.
Separate lanes for specific user groups (usually with the goal of higher people throughput with fewer vehicles)
Bus lanes as part of a busway system
Express toll lanes
HOV lanes, for vehicles with at least three (sometimes at least two) riders, intended to encourage carpooling
Slugging, impromptu carpooling at HOV access points, on a hitchhiking or payment basis
Market-based carpooling with pre-negotiated financial incentives for the driver
Urban planning and design
City planning and urban design practices can have a huge impact on levels of future traffic congestion, though they are of limited relevance for short-term change.
Grid plans including fused grid road network geometry, rather than tree-like network topology which branches into cul-de-sacs (which reduce local traffic, but increase total distances driven and discourage walking by reducing connectivity). This avoids concentration of traffic on a small number of arterial roads and allows more trips to be made without a car.
Zoning laws that encourage mixed-use development, which reduces distances between residential, commercial, retail, and recreational destinations and encourage cycling and walking. Cycling modal share is strongly associated with the availability of local cycling infrastructure.
Carfree cities, car-light cities, and eco-cities designed to eliminate the need to travel by car for most inhabitants.
Transit-oriented development are residential and commercial areas designed to maximize access to public transport by providing a transit station or stop (train station, metro station, tram stop, or bus stop).
Supply and demand
Congestion can be reduced either by increasing road capacity (supply), or by reducing traffic (demand). Capacity can be increased in a number of ways, but needs to take account of latent demand, otherwise it may be used more heavily than anticipated. Critics of the approach of adding capacity have compared it to "fighting obesity by letting out your belt" (inducing demand that did not exist before). For example, when new lanes are created, households with a second car that used to be parked most of the time may begin to use this second car for commuting. Reducing road capacity has in turn been attacked as removing free choice as well as increasing travel costs and times, placing an especially high burden on the low income residents who must commute to work.
Increased supply can include:
Adding more capacity at bottlenecks (such as by adding more lanes at the expense of hard shoulders or safety zones, or by removing local obstacles like bridge supports and widening tunnels)
Adding more capacity over the whole of a route (generally by adding more lanes)
Creating new routes
Traffic management improvements (see separate section below)
Reduction of demand can include:
Parking restrictions, making motor vehicle use less attractive by increasing the monetary and non-monetary costs of parking, introducing greater competition for limited city or road space. Most transport planning experts agree that free parking distorts the market in favor of car travel, exacerbating congestion.
Park and ride facilities allowing parking at a distance and allowing continuation by public transport or ride sharing. Park-and-ride car parks are commonly found at metro stations, freeway entrances in suburban areas, and at the edge of smaller cities.
Reduction of road capacity to force traffic onto other travel modes. Methods include traffic calming and the shared space concept.
Road pricing, charging money for access onto a road/specific area at certain times, congestion levels or for certain road users
"Cap and trade", in which only licensed cars are allowed on the roads. A limited quota of car licenses are issued each year and traded in a free market fashion. This guarantees that the number of cars does not exceed road capacity while avoiding the negative effects of shortages normally associated with quotas. However, since demand for cars tends to be inelastic, the result are exorbitant purchase prices for the licenses, pricing out the lower levels of society, as seen Singapore's Certificate of Entitlement scheme.
Congestion pricing, including:
Congestion zone charges, in which entry via car to a certain area, such as the inner part of a city, requires payment. Enforcement may be a physical boundary (e.g., toll stations) or it may be virtual, via spot checks or cameras. Major examples include congestion pricing in New York City; Singapore's electronic road pricing; the London congestion charge; and the Stockholm congestion tax.
Fixed (the same at all times of day), variable (higher at peak times), or dynamic (higher during actual congestion) toll roads, toll bridges, toll tunnels, and toll lanes
Managed lanes
High-occupancy toll lanes
Reversible lanes
High-occupancy vehicle lanes
Bus lanes
Truck lane restrictions and climbing lanes, to allow faster vehicles to move unimpeded
Allowing driving on highway shoulders at peak times
Road space rationing, where regulatory restrictions prevent certain types of vehicles from driving under certain circumstances or in certain areas.
Number plate restrictions based on days of the week, as practiced in several large cities in the world, such as Athens, Mexico City, Manila, and São Paulo. In effect, such cities are banning a different part of the automobile fleet from roads each day of the week. Mainly introduced to combat smog, these measures also reduce congestion. A weakness of this method is that richer drivers can purchase a second or third car to circumvent the ban.
Permits, where only certain types of vehicles (such as residents) are permitted to enter a certain area, and other types (such as through-traffic) are banned. For example, Bertrand Delanoë, the mayor of Paris, has proposed to impose a complete ban on motor vehicles in the city's inner districts, with exemptions only for residents, businesses, and the disabled.
Policy approaches, which usually attempt to provide either strategic alternatives or which encourage greater usage of existing alternatives through promotion, subsidies or restrictions.
Incentives to use public transport, increasing modal shares. This can be achieved through infrastructure investment, subsidies, transport integration, pricing strategies that decrease the marginal cost/fixed cost ratios, improved timetabling, and greater priority for buses to reduce journey time, e.g. bus lanes or bus rapid transit.
Cycling promotion through legislation, cycle facilities, subsidies, and awareness campaigns. The Netherlands has been pursuing cycle friendly policies for decades, and around a quarter of their commuting is done by bicycle.
Promotion of more flexible workplace practices. For example, a flexible-workplaces pilot was undertaken in Brisbane, Australia during 2009 to test the applicability of a voluntary travel behavior change program to achieve transport system outcomes, particularly as they related to managing congestion, whether through mode shift or peak spreading. During the one-month pilot, among almost 900 Brisbane CBD workers across 20 private and public sector organizations, shifts of more than 30% out of the morning and afternoon travel peaks were recorded.
Remote work encouraged through legislation and subsidies.
Online shopping promotion, potentially with automated delivery booths helping to solve the last mile problem and reduce shopping trips made by car.
Traffic management
Use of so-called intelligent transportation systems, which guide traffic:
Traffic reporting, via radio, GPS and mobile apps, to advise road users
Variable message signs installed along the roadway, to advise road users
Navigation systems, possibly linked up to automatic traffic reporting
Traffic counters permanently installed, to provide real-time traffic counts
Automated highway systems, a future idea which could reduce the safe interval between cars (required for braking in emergencies) and increase highway capacity by as much as 100% while increasing travel speeds
Parking guidance and information systems providing dynamic advice to motorists about free parking
Active traffic management systems, which open up the UK motorway hard shoulder as an extra traffic lane, using CCTV and variable message signs (VMS) to control and monitor the traffic's use of the extra lane.
Other associated measures
School opening times arranged to avoid rush hour traffic (in some countries, private car school pickup and drop-off traffic are substantial percentages of peak hour traffic).
Considerate driving behavior promotion and enforcement. Driving practices such as tailgating, frequent lane changes, and impeding the flow of traffic can reduce a road's capacity and exacerbate jams. In some countries signs are placed on highways to raise awareness, while others have introduced legislation against inconsiderate driving.
Visual barriers to prevent drivers from slowing down out of curiosity (often called "rubbernecking" in the United States). Rubbernecking commonly occurs at crashes, slowing traffic even on roadsides physically separated from the crash location, and at construction sites, which is why some countries have introduced rules that motorway construction must occur behind visual barriers.
Speed limit reductions, as practiced on the M25 motorway in London. With lower speeds allowing cars to drive closer together, this increases the capacity of a road. Note that this measure is only effective if the time interval between cars is reduced, not the distance itself; low intervals are generally only safe at low speeds (see the flow identity sketched after this list).
Lane splitting/filtering, in which some jurisdictions allow motorcycles, scooters and bicycles to travel in the space between cars, buses, and trucks.
Reduction of road freight avoiding problems such as double parking with innovative solutions including cargo bicycles and Gothenburg's Stadsleveransens.
Reducing the quantity of cars that are on the road, i.e. through proof-of-parking requirements, circulation plans, corporate car sharing, bans on on-street parking or by increasing the costs of car ownership
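The caveat in the speed-limit item above can be made precise with the standard traffic-stream identity relating flow, density and speed. The headway decomposition below is a textbook simplification, with q, k, v, t_h and ℓ as conventional (illustrative) symbols:

```latex
\[
  q = k\,v = \frac{v}{v\,t_h + \ell} < \frac{1}{t_h},
\]
% q: flow (vehicles/h), k: density (vehicles/km), v: mean speed,
% t_h: time headway between vehicles, \ell: effective vehicle length.
% Throughput is capped by 1/t_h, so lowering the speed limit raises
% capacity only if drivers safely adopt a smaller time headway t_h,
% not merely a smaller spacing s = v t_h + \ell.
```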
By country
Australia
Traffic during peak hours in major Australian cities, such as Sydney, Melbourne, Brisbane and Perth, is usually very congested and can cause considerable delay for motorists. Australians rely mainly on radio and television to obtain current traffic information. GPS, webcams, and online resources are increasingly being used to monitor and relay traffic conditions to motorists. Based on a survey in 2024, Brisbane is the most congested city in Australia and the 10th most congested in the world, with drivers losing an average of 84 hours throughout the year.
Bangladesh
Traffic jams have become intolerable in Dhaka. Major contributing factors include the total absence of a rapid transit system; the lack of an integrated urban planning scheme for over 30 years; poorly maintained road surfaces, with potholes rapidly eroded further by frequent flooding and poor or non-existent drainage; haphazard stopping and parking; poor driving standards; and a total lack of alternative routes, with several narrow and (nominally) one-way roads.
Brazil
According to Time magazine, São Paulo has the world's worst daily traffic jams. Based on reports from the Companhia de Engenharia de Tráfego, the city's traffic management agency, the historical congestion record was set on May 23, 2014, with of cumulative queues around the city during the evening rush hour. The previous record occurred on November 14, 2013, with of cumulative queues.
Despite the implementation since 1997 of road space rationing by the last digit of the plate number during rush hours every weekday, traffic in this 20-million-strong city still experiences severe congestion. According to experts, this is due to the accelerated rate of motorization since 2003 and the limited capacity of public transport. In São Paulo, traffic is growing at a rate of 7.5% per year, with almost 1,000 new cars bought in the city every day. The subway has only of lines, though 35 further kilometers were under construction or planned by 2010. Every day, many citizens spend between three and four hours behind the wheel. To mitigate the worsening congestion, the road space rationing program was expanded on June 30, 2008 to include and restrict trucks and light commercial vehicles.
Canada
According to the Toronto Board of Trade, in 2010 Toronto was ranked as the most congested of 19 surveyed cities, with an average commute time of 80 minutes.
China
The Chinese city of Beijing has operated a license plate rationing scheme since the 2008 Summer Olympics, whereby each car is banned from the urban core one workday per week, depending on the last digit of its license plate. As of 2016, 11 major Chinese cities had implemented similar policies. Towards the end of 2010, Beijing announced a series of drastic measures to tackle the city's chronic traffic congestion, such as limiting the number of new plates issued to passenger cars to 20,000 a month, barring vehicles with non-Beijing plates from entering areas within the Fifth Ring Road during rush hours, and expanding its subway system. The government aims to cap the number of locally registered cars in Beijing at below 6.3 million by the end of 2020. In addition, more than nine major Chinese cities including Shanghai, Guangzhou and Hangzhou have started limiting the number of new plates issued to passenger cars in an attempt to curb the growth of car ownership. In response to the increased demand for public transit caused by these policies, aggressive programs to rapidly expand public transport systems in many Chinese cities are currently underway.
A unique Chinese phenomenon of severe traffic congestion occurs during Chunyun Period or Spring Festival travel season. It is a long-held tradition for most Chinese people to reunite with their families during Chinese New Year. People return to their hometown to have a reunion dinner with their families on Chinese New Year. It has been described as the largest annual human migration in the world. Since the economic boom and rapid urbanization of China since the late 1970s, many people work and study a considerable distance from their hometowns. Traffic flow is typically directional, with large amounts of the population working in more developed coastal provinces needing travel to their hometowns in the less developed interior. The process reverses near the end of Chunyun. With almost 3 billion trips made in 40 days of the 2016 Chunyun Period, the Chinese intercity transportation network is extremely strained during this period.
The August 2010 China National Highway 110 traffic jam in Hebei province caught media attention for its severity, stretching more than from August 14 to 26, including at least 11 days of total gridlock. The event was caused by a combination of road works and thousands of coal trucks from Inner Mongolia's coalfields that travel daily to Beijing. The New York Times has called this event the "Great Chinese Gridlock of 2010." The congestion is regarded as the worst in history by duration, and is one of the longest in length after the long Lyon-Paris traffic jam in France on February 16, 1980.
Recently, the City Brain traffic-management system has become active in Hangzhou, somewhat reducing traffic congestion.
A 2021 study of subway constructions in China found that in the first year of a new subway line, road congestion declined.
Greece
Since the 1970s, traffic on the streets of Athens has increased dramatically, with the existing road network unable to serve the ever-increasing demand. It has also caused an environmental burden, such as photochemical smog. To deal with this, the Daktylios restriction has been enforced.
India
The number of vehicles in India is quickly increasing as a growing middle class can now afford to buy cars. India's road conditions have not kept up with the exponential growth in number of vehicles.
Various causes for this include:
Private encroachments
Non-cooperation among drivers
Unscientific road design
Lack of free ways/exit ways where local roads and main roads intersect
Lack of demarcated footpaths
Lack of bus bays
Lack of cycle tracks
Lack of coordination among various government departments (e.g. digging of roads by telecom/water department and leaving it open)
Indonesia
According to a 2015 study by motor oil company Castrol, Jakarta was found to be the worst city in the world for traffic congestion. Relying on information from TomTom navigation devices in 78 countries, the index found that drivers there stop and start their cars 33,240 times per year on the road. After Jakarta, the worst cities for traffic are Istanbul, Mexico City, Surabaya, and St. Petersburg.
Daily congestion in Jakarta is not a recent problem. The expansion of commercial area without road expansion shows worsening daily congestion even in main roads such as Jalan Jenderal Sudirman, Jalan M.H. Thamrin, and Jalan Gajah Mada in the mid-1970s.
In 2016, 22 people died as a result of traffic congestion in Java. They were among those stuck in a three-day traffic jam at a toll exit in Brebes, Central Java, called Brebes Exit or 'Brexit'. The jam stretched for 21 km, and thousands of cars clogged the highway. Many of the deaths were caused by carbon monoxide poisoning, fatigue or heat.
New Zealand
New Zealand has followed strongly car-oriented transport policies since World War II, especially in Auckland, where one third of the country's population lives. Auckland is New Zealand's most traffic-congested city and has been labeled worse than New York for traffic congestion, with commuters sitting in traffic for 95 hours per year. The country currently has one of the highest car-ownership rates per capita in the world, after the United States. Traffic congestion in New Zealand is increasing, with drivers on New Zealand's motorways reported to be struggling to exceed 20 km/h on an average commute, sometimes crawling along at 8 km/h for more than half an hour.
Philippines
According to a survey by Waze, traffic congestion in Metro Manila is ranked among the "worst" in the world, after Rio de Janeiro, São Paulo, and Jakarta. It is worsened by violations of traffic laws, such as illegal parking, loading and unloading, running red lights, and wrong-way driving. Traffic congestion in Metro Manila is caused by the large number of registered vehicles, the lack of roads, and overpopulation, especially in the cities of Manila and Caloocan, as well as the municipality of Pateros.
Traffic cost the economy ₱137,500,000,000 in losses in 2011, and unbuilt road and railway projects continue to worsen congestion. The Japan International Cooperation Agency (JICA) feared that daily economic losses would reach ₱6,000,000,000 by 2030 if traffic congestion could not be controlled.
Turkey
In recent years, the Istanbul Metropolitan Municipality has made huge investments on intelligent transportation systems and public transportation. Despite that, traffic is a significant problem in Istanbul.
Istanbul has been ranked as having the second most congested and the most sudden-stopping traffic in the world. Travel times in Turkey's largest city are on average 55 percent longer than they should be, even in relatively less busy hours.
United Kingdom
In the United Kingdom the inevitability of congestion in some urban road networks has been officially recognized since the Department for Transport set down policies based on the report Traffic in Towns in 1963:
Even when everything that it is possible to do by way of building new roads and expanding public transport has been done, there would still be, in the absence of deliberate limitation, more cars trying to move into, or within, our cities than could possibly be accommodated.
The Department for Transport sees growing congestion as one of the most serious transport problems facing the UK. On December 1, 2006, Rod Eddington published a UK government-sponsored report into the future of Britain's transport infrastructure. The Eddington Transport Study set out the case for action to improve road and rail networks, as a "crucial enabler of sustained productivity and competitiveness". Eddington estimated that congestion may cost the economy of England £22 bn a year in lost time by 2025. He warned that roads were in serious danger of becoming so congested that the economy would suffer. At the launch of the report, Eddington told journalists and transport industry representatives that introducing road pricing to encourage drivers to drive less was an "economic no-brainer". There was, he said, "no attractive alternative". It would allegedly cut congestion by half by 2025, and bring benefits to the British economy totaling £28 bn a year.
A congestion charge for driving in central London was introduced in 2003. In 2013, ten years later, Transport for London reported that the scheme resulted in a 10% reduction in traffic volumes from baseline conditions, and an overall reduction of 11% in vehicle kilometers in London. Despite these gains, traffic speeds in central London became progressively slower.
United States
The Texas Transportation Institute estimated that, in 2000, the 75 largest metropolitan areas experienced 3.6 billion vehicle-hours of delay, resulting in 5.7 billion U.S. gallons (21.6 billion liters) in wasted fuel and $67.5 billion in lost productivity, or about 0.7% of the nation's GDP. It also estimated that the annual cost of congestion for each driver was approximately $1,000 in very large cities and $200 in small cities. Traffic congestion is increasing in major cities and delays are becoming more frequent in smaller cities and rural areas.
One frequently cited estimate holds that as much as 30% of urban traffic consists of cars cruising in search of parking.
Traffic analysis firm INRIX ranked the 31 most congested US cities in 2019, measured in average hours wasted per vehicle for the year.
The most congested highway in the United States, according to a 2010 study of freight congestion (truck speed and travel time), is Chicago's Interstate 290 at the Circle Interchange. The average truck speed was just .
| Technology | Basics_7 | null |
31161 | https://en.wikipedia.org/wiki/Tsunami | Tsunami | A tsunami is a series of waves in a water body caused by the displacement of a large volume of water, generally in an ocean or a large lake. Earthquakes, volcanic eruptions and underwater explosions (including detonations, landslides, glacier calvings, meteorite impacts and other disturbances) above or below water all have the potential to generate a tsunami. Unlike normal ocean waves, which are generated by wind, or tides, which are in turn generated by the gravitational pull of the Moon and the Sun, a tsunami is generated by the displacement of water from a large event.
Tsunami waves do not resemble normal undersea currents or sea waves because their wavelength is far longer. Rather than appearing as a breaking wave, a tsunami may instead initially resemble a rapidly rising tide. For this reason, it is often referred to as a tidal wave, although this usage is not favoured by the scientific community because it might give the false impression of a causal relationship between tides and tsunamis. Tsunamis generally consist of a series of waves, with periods ranging from minutes to hours, arriving in a so-called "wave train". Wave heights of tens of metres can be generated by large events. Although the impact of tsunamis is limited to coastal areas, their destructive power can be enormous, and they can affect entire ocean basins. The 2004 Indian Ocean tsunami was among the deadliest natural disasters in human history, with at least 230,000 people killed or missing in 14 countries bordering the Indian Ocean.
The Ancient Greek historian Thucydides suggested in his 5th century BC History of the Peloponnesian War that tsunamis were related to submarine earthquakes, but the understanding of tsunamis remained slim until the 20th century, and much remains unknown. Major areas of current research include determining why some large earthquakes do not generate tsunamis while other smaller ones do. This ongoing research is designed to help accurately forecast the passage of tsunamis across oceans as well as how tsunami waves interact with shorelines.
Terminology
Tsunami
The term "tsunami" is a borrowing from the Japanese tsunami , meaning "harbour wave." For the plural, one can either follow ordinary English practice and add an s, or use an invariable plural as in the Japanese. Some English speakers alter the word's initial to an by dropping the "t," since English does not natively permit at the beginning of words, though the original Japanese pronunciation is . The term has become commonly accepted in English, although its literal Japanese meaning is not necessarily descriptive of the waves, which do not occur only in harbours.
Tidal wave
Tsunamis are sometimes referred to as tidal waves. This once-popular term derives from the most common appearance of a tsunami, which is that of an extraordinarily high tidal bore. Tsunamis and tides both produce waves of water that move inland, but in the case of a tsunami, the inland movement of water may be much greater, giving the impression of an incredibly high and forceful tide. In recent years, the term "tidal wave" has fallen out of favour, especially in the scientific community, because the causes of tsunamis have nothing to do with those of tides, which are produced by the gravitational pull of the moon and sun rather than the displacement of water. Although the meanings of "tidal" include "resembling" or "having the form or character of" tides, use of the term tidal wave is discouraged by geologists and oceanographers.
A 1969 episode of the TV crime show Hawaii Five-O entitled "Forty Feet High and It Kills!" used the terms "tsunami" and "tidal wave" interchangeably.
Seismic sea wave
The term seismic sea wave is also used to refer to the phenomenon because the waves most often are generated by seismic activity such as earthquakes. Prior to the rise of the use of the term tsunami in English, scientists generally encouraged the use of the term seismic sea wave rather than tidal wave. However, like tidal wave, seismic sea wave is not a completely accurate term, as forces other than earthquakes—including underwater landslides, volcanic eruptions, underwater explosions, land or ice slumping into the ocean, meteorite impacts, and the weather when the atmospheric pressure changes very rapidly—can generate such waves by displacing water.
Other terms
The use of the term tsunami for waves created by landslides entering bodies of water has become internationally widespread in both scientific and popular literature, although such waves are distinct in origin from large waves generated by earthquakes. This distinction sometimes leads to the use of other terms for landslide-generated waves, including landslide-triggered tsunami, displacement wave, non-seismic wave, impact wave, and, simply, giant wave.
History
While Japan may have the longest recorded history of tsunamis, the sheer destruction caused by the 2004 Indian Ocean earthquake and tsunami marks that event as the most devastating of its kind in modern times, killing around 230,000 people. The Sumatran region is also accustomed to tsunamis, with earthquakes of varying magnitudes regularly occurring off the coast of the island.
Tsunamis are an often underestimated hazard in the Mediterranean Sea and parts of Europe. Of historical and current (with regard to risk assumptions) importance are the 1755 Lisbon earthquake and tsunami (which was caused by the Azores–Gibraltar transform fault) and the 1783 Calabrian earthquakes, each causing several tens of thousands of deaths, as well as the 1908 Messina earthquake and tsunami. The latter tsunami claimed more than 123,000 lives in Sicily and Calabria and is among the deadliest natural disasters in modern Europe. The Storegga Slide in the Norwegian Sea and some examples of tsunamis affecting the British Isles relate predominantly to landslides and meteotsunamis, and less to earthquake-induced waves.
As early as 426 BC the Greek historian Thucydides inquired in his book History of the Peloponnesian War about the causes of tsunami, and was the first to argue that ocean earthquakes must be the cause. The oldest human record of a tsunami dates back to 479 BC, in the Greek colony of Potidaea, thought to be triggered by an earthquake. The tsunami may have saved the colony from an invasion by the Achaemenid Empire.
The cause, in my opinion, of this phenomenon must be sought in the earthquake. At the point where its shock has been the most violent the sea is driven back, and suddenly recoiling with redoubled force, causes the inundation. Without an earthquake I do not see how such an accident could happen.
The Roman historian Ammianus Marcellinus (Res Gestae 26.10.15–19) described the typical sequence of a tsunami, including an incipient earthquake, the sudden retreat of the sea and a following gigantic wave, after the 365 AD tsunami devastated Alexandria.
Causes
The principal generation mechanism of a tsunami is the displacement of a substantial volume of water or perturbation of the sea. This displacement of water is usually caused by earthquakes, but can also be attributed to landslides, volcanic eruptions, glacier calvings or more rarely by meteorites and nuclear tests. However, the possibility of a meteorite causing a tsunami is debated.
Seismicity
Tsunamis can be generated when the sea floor abruptly deforms and vertically displaces the overlying water. Tectonic earthquakes are a particular kind of earthquake that are associated with the Earth's crustal deformation; when these earthquakes occur beneath the sea, the water above the deformed area is displaced from its equilibrium position. More specifically, a tsunami can be generated when thrust faults associated with convergent or destructive plate boundaries move abruptly, resulting in water displacement, owing to the vertical component of movement involved. Movement on normal (extensional) faults can also cause displacement of the seabed, but only the largest of such events (typically related to flexure in the outer trench swell) cause enough displacement to give rise to a significant tsunami, such as the 1977 Sumba and 1933 Sanriku events.
Tsunamis have a small wave height offshore, and a very long wavelength (often hundreds of kilometres long, whereas normal ocean waves have a wavelength of only 30 or 40 metres), which is why they generally pass unnoticed at sea, forming only a slight swell usually about above the normal sea surface. They grow in height when they reach shallower water, in a wave shoaling process described below. A tsunami can occur in any tidal state and even at low tide can still inundate coastal areas.
On April 1, 1946, the 8.6 Aleutian Islands earthquake occurred with a maximum Mercalli intensity of VI (Strong). It generated a tsunami which inundated Hilo on the island of Hawaii with a surge. Between 165 and 173 people were killed. The area where the earthquake occurred is where the Pacific Ocean floor is subducting (or being pushed downwards) under Alaska.
Examples of tsunamis originating at locations away from convergent boundaries include Storegga about 8,000 years ago, Grand Banks in 1929, and Papua New Guinea in 1998 (Tappin, 2001). The Grand Banks and Papua New Guinea tsunamis came from earthquakes which destabilised sediments, causing them to flow into the ocean and generate a tsunami. They dissipated before travelling transoceanic distances.
The cause of the Storegga sediment failure is unknown. Possibilities include an overloading of the sediments, an earthquake or a release of gas hydrates (methane etc.).
The 1960 Valdivia earthquake (Mw 9.5), 1964 Alaska earthquake (Mw 9.2), 2004 Indian Ocean earthquake (Mw 9.2), and 2011 Tōhoku earthquake (Mw 9.0) are recent examples of powerful megathrust earthquakes that generated tsunamis (known as teletsunamis) that can cross entire oceans. Smaller (Mw 4.2) earthquakes in Japan can trigger tsunamis (called local and regional tsunamis) that can devastate stretches of coastline, but can do so in only a few minutes at a time.
Landslides
The Tauredunum event was a large tsunami on Lake Geneva in 563 CE, caused by sedimentary deposits destabilised by a landslide.
In the 1950s, it was discovered that tsunamis larger than had previously been believed possible can be caused by giant submarine landslides. These large volumes of rapidly displaced water transfer energy at a faster rate than the water can absorb. Their existence was confirmed in 1958, when a giant landslide in Lituya Bay, Alaska, caused the highest wave ever recorded, which had a height of . The wave did not travel far as it struck land almost immediately. The wave struck three boats—each with two people aboard—anchored in the bay. One boat rode out the wave, but the wave sank the other two, killing both people aboard one of them.
Another landslide-tsunami event occurred in 1963 when a massive landslide from Monte Toc entered the reservoir behind the Vajont Dam in Italy. The resulting wave surged over the -high dam by and destroyed several towns. Around 2,000 people died. Scientists named these waves megatsunamis.
Some geologists claim that large landslides from volcanic islands, e.g. Cumbre Vieja on La Palma (Cumbre Vieja tsunami hazard) in the Canary Islands, may be able to generate megatsunamis that can cross oceans, but this is disputed by many others.
In general, landslides generate displacements mainly in the shallower parts of the coastline, and there is conjecture about the nature of large landslides that enter the water. This has been shown to subsequently affect water in enclosed bays and lakes, but a landslide large enough to cause a transoceanic tsunami has not occurred within recorded history. Susceptible locations are believed to be the Big Island of Hawaii, Fogo in the Cape Verde Islands, La Reunion in the Indian Ocean, and Cumbre Vieja on the island of La Palma in the Canary Islands, along with other volcanic ocean islands. This is because large masses of relatively unconsolidated volcanic material occur on their flanks and, in some cases, detachment planes are believed to be developing. However, there is growing controversy about how dangerous these slopes actually are.
Volcanic eruptions
Other than by landslides or sector collapse, volcanoes may be able to generate waves by pyroclastic flow submergence, caldera collapse, or underwater explosions. Tsunamis have been triggered by a number of volcanic eruptions, including the 1883 eruption of Krakatoa, and the 2022 Hunga Tonga–Hunga Ha'apai eruption. Over 20% of all fatalities caused by volcanism during the past 250 years are estimated to have been caused by volcanogenic tsunamis.
Debate has persisted over the origins and source mechanisms of these types of tsunamis, such as those generated by Krakatoa in 1883, and they remain less well understood than their seismic relatives. This poses a large problem for awareness and preparedness, as exemplified by the eruption and collapse of Anak Krakatoa in 2018, which killed 426 people and injured thousands when no warning was available.
It is still regarded that lateral landslides and ocean-entering pyroclastic currents are most likely to generate the largest and most hazardous waves from volcanism; however, field investigation of the Tongan event, as well as developments in numerical modelling methods, currently aim to expand the understanding of the other source mechanisms.
Meteorological
Some meteorological conditions, especially rapid changes in barometric pressure, as seen with the passing of a front, can displace bodies of water enough to cause trains of long-wavelength waves. These are comparable to seismic tsunamis, but usually with lower energies. Essentially, they are dynamically equivalent to seismic tsunamis, the only differences being 1) that meteotsunamis lack the transoceanic reach of significant seismic tsunamis, and 2) that the force that displaces the water is sustained over some length of time, such that meteotsunamis cannot be modelled as having been caused instantaneously. In spite of their lower energies, on shorelines where they can be amplified by resonance, they are sometimes powerful enough to cause localised damage and potential for loss of life. They have been documented in many places, including the Great Lakes, the Aegean Sea, the English Channel, and the Balearic Islands, where they are common enough to have a local name, rissaga. In Sicily they are called marubbio and in Nagasaki Bay, they are called abiki. Some examples of destructive meteotsunamis include 31 March 1979 at Nagasaki and 15 June 2006 at Menorca, the latter causing damage in the tens of millions of euros.
Meteotsunamis should not be confused with storm surges, which are local increases in sea level associated with the low barometric pressure of passing tropical cyclones, nor should they be confused with setup, the temporary local raising of sea level caused by strong on-shore winds. Storm surges and setup are also dangerous causes of coastal flooding in severe weather, but their dynamics are completely unrelated to tsunami waves: they are unable to propagate beyond their sources, as tsunami waves do.
Human-made or triggered tsunamis
The accidental Halifax Explosion in 1917 triggered a high tsunami in the harbour at Halifax, Nova Scotia, Canada.
There have been studies of the potential for the use of explosives to induce tsunamis as a tectonic weapon. As early as World War II (1939–1945), consideration of the use of conventional explosives was explored, and New Zealand's military forces initiated Project Seal, which attempted to create small tsunamis with explosives in the area of what is now Shakespear Regional Park at the tip of the Whangaparāoa Peninsula in the Auckland Region of New Zealand; the attempt failed.
There has been considerable speculation about the possibility of using nuclear weapons to cause tsunamis near an enemy coastline. Nuclear testing in the Pacific Proving Ground by the United States generated poor results. In Operation Crossroads in July 1946, two bombs were detonated, one in the air over and one underwater within the shallow waters of the deep lagoon at Bikini Atoll. The bombs detonated about from the nearest island, where the waves were no higher than when they reached the shoreline. Other underwater tests, mainly Operation Hardtack I/Wahoo in deep water and Operation Hardtack I/Umbrella in shallow water, confirmed the results. Analysis of the effects of shallow and deep underwater explosions indicate that the energy of the explosions does not easily generate the kind of deep, all-ocean waveforms typical of tsunamis because most of the energy creates steam, causes vertical fountains above the water, and creates compressional waveforms. Tsunamis are hallmarked by permanent large vertical displacements of very large volumes of water which do not occur in explosions.
Characteristics
Tsunamis are caused by earthquakes, landslides, volcanic explosions, glacier calvings, and bolides. They cause damage by two mechanisms: the smashing force of a wall of water travelling at high speed, and the destructive power of a large volume of water draining off the land and carrying a large amount of debris with it, even with waves that do not appear to be large.
While everyday wind waves have a wavelength (from crest to crest) of about and a height of roughly , a tsunami in the deep ocean has a much larger wavelength of up to . Such a wave travels at well over , but owing to the enormous wavelength the wave oscillation at any given point takes 20 or 30 minutes to complete a cycle and has an amplitude of only about . This makes tsunamis difficult to detect over deep water, where ships are unable to feel their passage.
The velocity of a tsunami can be calculated by taking the square root of the water depth in metres multiplied by the acceleration due to gravity (approximated as 10 m/s2). For example, if the Pacific Ocean is considered to have a depth of 5000 metres, the velocity of a tsunami would be √(5000 × 10) = √50,000 ≈ 224 metres per second, which equates to a speed of about 800 kilometres per hour. This is the formula used for calculating the velocity of shallow-water waves. Even the deep ocean is shallow in this sense, because a tsunami wave is so long (horizontally from crest to crest) by comparison.
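In symbols, this is the standard shallow-water wave speed, valid whenever the wavelength greatly exceeds the depth; the numbers restate the example above, with g approximated as 10 m/s2:

```latex
\[
  v = \sqrt{g\,d}, \qquad \lambda \gg d,
\]
\[
  v = \sqrt{10 \times 5000}\ \mathrm{m/s} \approx 224\ \mathrm{m/s}
    \approx 800\ \mathrm{km/h}.
\]
```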
The reason for the Japanese name "harbour wave" is that sometimes a village's fishermen would sail out, and encounter no unusual waves while out at sea fishing, and come back to land to find their village devastated by a huge wave.
As the tsunami approaches the coast and the waters become shallow, wave shoaling compresses the wave and its speed decreases below . Its wavelength diminishes to less than and its amplitude grows enormously—in accord with Green's law. Since the wave still has the same very long period, the tsunami may take minutes to reach full height. Except for the very largest tsunamis, the approaching wave does not break, but rather appears like a fast-moving tidal bore. Open bays and coastlines adjacent to very deep water may shape the tsunami further into a step-like wave with a steep-breaking front.
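Green's law, cited above, relates this amplitude growth to the shrinking depth. The 5000 m to 10 m example below is illustrative, not from the text, and ignores breaking and friction:

```latex
\[
  \frac{A_2}{A_1} = \left(\frac{d_1}{d_2}\right)^{1/4},
\]
% e.g. a wave of 1 m amplitude in 5000 m of water grows to about
% (5000/10)^{1/4} \approx 4.7 m by the time it reaches 10 m depth.
```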
When the tsunami's wave peak reaches the shore, the resulting temporary rise in sea level is termed run up. Run up is measured in metres above a reference sea level. A large tsunami may feature multiple waves arriving over a period of hours, with significant time between the wave crests. The first wave to reach the shore may not have the highest run-up.
About 80% of tsunamis occur in the Pacific Ocean, but they are possible wherever there are large bodies of water, including lakes. However, tsunami interactions with shorelines and the seafloor topography are extremely complex, which leaves some countries more vulnerable than others. For example, the Pacific coasts of the United States and Mexico lie adjacent to each other, but the United States has recorded ten tsunamis in the region since 1788, while Mexico has recorded twenty-five since 1732. Similarly, Japan has had more than a hundred tsunamis in recorded history, while the neighbouring island of Taiwan has registered only two, in 1781 and 1867.
Drawback
All waves have a positive and negative peak; that is, a ridge and a trough. In the case of a propagating wave like a tsunami, either may be the first to arrive. If the first part to arrive at the shore is the ridge, a massive breaking wave or sudden flooding will be the first effect noticed on land. However, if the first part to arrive is a trough, a drawback will occur as the shoreline recedes dramatically, exposing normally submerged areas. The drawback can exceed hundreds of metres, and people unaware of the danger sometimes remain near the shore to satisfy their curiosity or to collect fish from the exposed seabed.
A typical wave period for a damaging tsunami is about twelve minutes. Thus, the sea recedes in the drawback phase, with areas well below sea level exposed after three minutes. For the next six minutes, the wave trough builds into a ridge which may flood the coast, and destruction ensues. During the next six minutes, the wave changes from a ridge to a trough, and the flood waters recede in a second drawback. Victims and debris may be swept into the ocean. The process repeats with succeeding waves.
Scales of intensity and magnitude
As with earthquakes, several attempts have been made to set up scales of tsunami intensity or magnitude to allow comparison between different events.
Intensity scales
The first scales used routinely to measure the intensity of tsunamis were the Sieberg-Ambraseys scale (1962), used in the Mediterranean Sea, and the Imamura-Iida intensity scale (1963), used in the Pacific Ocean. The latter scale was modified by Soloviev (1972), who calculated the tsunami intensity I according to the formula
I = 1/2 + log2(Hav)
where Hav is the "tsunami height" in metres, averaged along the nearest coastline, with the tsunami height defined as the rise of the water level above the normal tidal level at the time of occurrence of the tsunami. This scale, known as the Soloviev-Imamura tsunami intensity scale, is used in the global tsunami catalogues compiled by the NGDC/NOAA and the Novosibirsk Tsunami Laboratory as the main parameter for the size of the tsunami.
This formula yields:
I = 2 for = 2.8 metres
I = 3 for = 5.5 metres
I = 4 for = 11 metres
I = 5 for = 22.5 metres
etc.
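As a quick numerical check, a few lines of Python reproduce the worked values above from the Soloviev-Imamura formula (the function name is an illustrative choice):

```python
import math

def soloviev_intensity(h_avg_m: float) -> float:
    """Soloviev-Imamura tsunami intensity: I = 1/2 + log2(H_av)."""
    return 0.5 + math.log2(h_avg_m)

# Matches the listed values I = 2, 3, 4, 5 (to rounding).
for h in (2.8, 5.5, 11, 22.5):
    print(f"H_av = {h:5.1f} m  ->  I = {soloviev_intensity(h):.2f}")
```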
In 2013, following the intensively studied tsunamis of 2004 and 2011, a new 12-point scale was proposed, the Integrated Tsunami Intensity Scale (ITIS-2012), intended to match the modified ESI2007 and EMS earthquake intensity scales as closely as possible.
Magnitude scales
The first scale that genuinely calculated a magnitude for a tsunami, rather than an intensity at a particular location, was the ML scale proposed by Murty & Loomis based on the potential energy. Difficulties in calculating the potential energy of the tsunami mean that this scale is rarely used. Abe introduced the tsunami magnitude scale Mt, calculated from
Mt = a log h + b log R + D
where h is the maximum tsunami-wave amplitude (in m) measured by a tide gauge at a distance R from the epicentre, and a, b and D are constants chosen to make the Mt scale match the moment magnitude scale as closely as possible.
Tsunami heights
Several terms are used to describe the different characteristics of tsunami in terms of their height:
Amplitude, Wave Height, or Tsunami Height: Refers to the height of a tsunami relative to the normal sea level at the time of the tsunami, which may be tidal high water or low water. It is different from the crest-to-trough height which is commonly used to measure other types of wave height.
Run-up Height, or Inundation Height: The height reached by a tsunami on the ground above sea level. Maximum run-up height refers to the maximum height reached by water above sea level, which is sometimes reported as the maximum height reached by a tsunami.
Flow Depth: Refers to the height of tsunami above ground, regardless of the height of the location or sea level.
(Maximum) Water Level: Maximum height above sea level as seen from a trace or water mark. It differs from maximum run-up height in that the water marks are not necessarily at the inundation line/limit.
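Because these terms are easy to confuse, the sketch below encodes the relationship implied by the definitions above, under the simplifying assumption that ground elevation and water-surface elevation are measured from the same sea-level datum; the names are illustrative only.

```python
def flow_depth(max_water_level: float, ground_elev: float) -> float:
    """Flow depth = (maximum) water level above sea level minus the
    ground elevation at that point, both from the same datum."""
    return max_water_level - ground_elev

# A water mark 6 m above sea level on ground that is 4 m above sea level:
print(flow_depth(6.0, 4.0))  # 2.0 m of water stood above the ground here
# At the inundation limit the flow depth tapers to ~0, so the ground
# elevation there corresponds to the run-up height.
```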
Warnings and predictions
Drawbacks can serve as a brief warning. People who observe drawback (many survivors report an accompanying sucking sound) can survive only if they immediately run for high ground or seek the upper floors of nearby buildings.
In 2004, ten-year-old Tilly Smith of Surrey, England, was on Maikhao beach in Phuket, Thailand with her parents and sister, and having learned about tsunamis recently in school, told her family that a tsunami might be imminent. Her parents warned others minutes before the wave arrived, saving dozens of lives. She credited her geography teacher, Andrew Kearney.
In the 2004 Indian Ocean tsunami, drawback was not reported on the African coast or any other east-facing coasts that it reached. This was because the initial wave moved downwards on the eastern side of the megathrust and upwards on the western side. The western pulse hit coastal Africa and other western areas.
A tsunami cannot be precisely predicted, even if the magnitude and location of an earthquake is known. Geologists, oceanographers, and seismologists analyse each earthquake and based on many factors may or may not issue a tsunami warning. However, there are some warning signs of an impending tsunami, and automated systems can provide warnings immediately after an earthquake in time to save lives. One of the most successful systems uses bottom pressure sensors, attached to buoys, which constantly monitor the pressure of the overlying water column.
Regions with a high tsunami risk typically use tsunami warning systems to warn the population before the wave reaches land. On the west coast of the United States, which is prone to tsunamis from the Pacific Ocean, warning signs indicate evacuation routes. In Japan, the populace is well-educated about earthquakes and tsunamis, and along Japanese shorelines, tsunami warning signs remind people of the natural hazards along with a network of warning sirens, typically at the top of the cliffs of surrounding hills.
The Pacific Tsunami Warning System is based in Honolulu, Hawaii. It monitors Pacific Ocean seismic activity. A sufficiently large earthquake magnitude, together with other information, triggers a tsunami warning. While the subduction zones around the Pacific are seismically active, not all earthquakes generate a tsunami. Computers assist in analysing the tsunami risk of every earthquake that occurs in the Pacific Ocean and the adjoining land masses.
As a direct result of the Indian Ocean tsunami, a re-appraisal of the tsunami threat for all coastal areas is being undertaken by national governments and the United Nations Disaster Mitigation Committee. A tsunami warning system is being installed in the Indian Ocean.
Computer models can predict tsunami arrival, usually within minutes of the arrival time. Bottom pressure sensors can relay information in real time. Based on these pressure readings and other seismic information and the seafloor's shape (bathymetry) and coastal topography, the models estimate the amplitude and surge height of the approaching tsunami. All Pacific Rim countries collaborate in the Tsunami Warning System and most regularly practise evacuation and other procedures. In Japan, such preparation is mandatory for government, local authorities, emergency services and the population.
Along the United States west coast, in addition to sirens, warnings are sent on television and radio via the National Weather Service, using the Emergency Alert System.
Possible animal reaction
Some zoologists hypothesise that some animal species have an ability to sense subsonic Rayleigh waves from an earthquake or a tsunami. If correct, monitoring their behaviour could provide advance warning of earthquakes and tsunamis. However, the evidence is controversial and is not widely accepted. There are unsubstantiated claims about the 1755 Lisbon earthquake that some animals escaped to higher ground, while many other animals in the same areas drowned. The phenomenon was also noted by media sources in Sri Lanka in the 2004 Indian Ocean earthquake. It is possible that certain animals (e.g., elephants) may have heard the sounds of the tsunami as it approached the coast. The elephants' reaction was to move away from the approaching noise. By contrast, some humans went to the shore to investigate and many drowned as a result.
Mitigation
In some tsunami-prone countries, earthquake engineering measures have been taken to reduce the damage caused onshore.
Japan, where tsunami science and response measures first began following a disaster in 1896, has produced ever-more elaborate countermeasures and response plans. The country has built many tsunami walls of up to high to protect populated coastal areas. Other localities have built floodgates of up to high and channels to redirect the water from an incoming tsunami. However, their effectiveness has been questioned, as tsunamis often overtop the barriers.
The Fukushima Daiichi nuclear disaster was directly triggered by the 2011 Tōhoku earthquake and tsunami, when waves exceeded the height of the plant's sea wall and flooded the emergency generators. Iwate Prefecture, which is an area at high risk from tsunami, had tsunami barrier walls (Taro sea wall) totalling long at coastal towns. The 2011 tsunami toppled more than 50% of the walls and caused catastrophic damage.
The Okushiri, Hokkaidō tsunami, which struck within two to five minutes of the earthquake on July 12, 1993, created waves tall—as high as a 10-storey building. The port town of Aonae was completely surrounded by a tsunami wall, but the waves washed right over the wall and destroyed all the wood-framed structures in the area. The wall may have succeeded in slowing down and moderating the height of the tsunami, but it did not prevent major destruction and loss of life.
| Physical sciences | Natural disasters | null |
31185 | https://en.wikipedia.org/wiki/Tonne | Tonne | The tonne (symbol: t) is a unit of mass equal to 1,000 kilograms. It is a non-SI unit accepted for use with SI. It is also referred to as a metric ton in the United States to distinguish it from the non-metric units of the short ton (United States customary units) and the long ton (British imperial units). It is equivalent to approximately 2,204.6 pounds, 1.102 short tons, and 0.984 long tons. The official SI unit is the megagram (Mg), a less common way to express the same amount.
Symbol and abbreviations
The BIPM symbol for the tonne is t, adopted at the same time as the unit in 1879. Its use is also official for the metric ton in the United States, having been adopted by the United States National Institute of Standards and Technology (NIST). It is a symbol, not an abbreviation, and should not be followed by a period. Use of lower case is significant, and use of other letter combinations can lead to ambiguity. For example, T, MT, and mT are the SI symbols for the tesla, megatesla, and millitesla, respectively, while Mt and mt are SI-compatible symbols for the megatonne (one teragram) and millitonne (one kilogram). If describing TNT equivalent units of energy, one megatonne of TNT is equivalent to approximately 4.184 petajoules.
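Since the symbols above differ only in case yet denote very different quantities, a small conversion helper makes the distinctions concrete; the factors are those quoted in this article, and the constant and function names are illustrative.

```python
TONNE_KG = 1_000.0         # 1 t = 1,000 kg (one megagram)
TONNE_LB = 2_204.6         # approximate pounds per tonne
MEGATONNE_TNT_PJ = 4.184   # 1 Mt of TNT ~ 4.184 petajoules

def tonnes_to_short_tons(t: float) -> float:
    return t * 1.102       # 1 t ~ 1.102 short tons

def tonnes_to_long_tons(t: float) -> float:
    return t * 0.984       # 1 t ~ 0.984 long tons

print(tonnes_to_short_tons(5))   # ~5.51 short tons
print(5 * MEGATONNE_TNT_PJ)      # 5 Mt TNT ~ 20.92 PJ
```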
Origin and spelling
In English, tonne is an established spelling alternative to metric ton. In American English and British English, tonne is usually pronounced the same as ton (), but the final "e" can also be pronounced, i.e. "tunnie" (). In Australian English, the common and recommended pronunciation is . In the United States, metric ton is the name for this unit used and recommended by NIST; an unqualified mention of a ton typically refers to a short ton of 2,000 lb (907.2 kg) and to a lesser extent to a long ton of 2,240 lb (1,016 kg), with the term tonne rarely used in speech or writing. Both terms are acceptable in Canadian English.
Ton and tonne are both derived from a Germanic word in general use in the North Sea area since the Middle Ages ( Old English and Old Frisian tunne, Old High German and Medieval Latin , German and French tonne) to designate a large cask, or tun. A full tun, standing about a metre high, could easily weigh a tonne. | Physical sciences | Mass and weight | Basics and measurement |
31188 | https://en.wikipedia.org/wiki/Triassic%E2%80%93Jurassic%20extinction%20event | Triassic–Jurassic extinction event | The Triassic–Jurassic (Tr-J) extinction event (TJME), often called the end-Triassic extinction, marks the boundary between the Triassic and Jurassic periods. It is one of the five major extinction events, profoundly affecting life on land and in the oceans. In the seas, about 23–34% of marine genera disappeared. On land, all archosauromorph reptiles other than crocodylomorphs (the lineage leading to modern crocodilians), dinosaurs, and pterosaurs (flying reptiles) became extinct; some of the groups that died out were previously abundant, such as aetosaurs, phytosaurs, and rauisuchids. Plants, crocodylomorphs, dinosaurs, pterosaurs and mammals were left largely untouched, allowing the dinosaurs, pterosaurs, and crocodylomorphs to become the dominant land animals for the next 135 million years.
The cause of the Tr-J extinction event may have been extensive volcanic eruptions in the Central Atlantic Magmatic Province (CAMP), which released large amounts of carbon dioxide into the Earth's atmosphere, causing profound global warming along with ocean acidification. Older hypotheses have proposed that gradual climate or sea level change may be the culprit, or perhaps one or more asteroid strikes.
Research history
The earliest research on the TJME was conducted in the mid-20th century, when events in earth history were widely assumed to have been gradual (a paradigm known as uniformitarianism) and comparatively rapid cataclysms as a cause of extinction events were dismissed as catastrophism. Consequently, gradual environmental changes were favoured as the cause of the extinction. In the 1980s, Jack Sepkoski identified the Triassic-Jurassic boundary drop in biodiversity as one of the "Big 5" mass extinction events. After the discovery that the Cretaceous-Palaeogene extinction event had been caused by a bolide impact, a similar impact was suggested in the 1980s and 1990s as the cause of the TJME. The theory that the TJME was caused by massive volcanism in the Central Atlantic Magmatic Province (CAMP) first emerged in the 1990s after similar research examining the Permian-Triassic extinction event found it to have been caused by volcanic activity. Despite some early objections, this paradigm remains the scientific consensus in the present day.
Effects
Marine invertebrates
The Triassic-Jurassic extinction completed the transition from the Palaeozoic evolutionary fauna to the Modern evolutionary fauna, a change that began in the aftermath of the end-Guadalupian extinction and continued following the Permian-Triassic extinction event (PTME). Between 23% and 34.1% of marine genera went extinct. Plankton diversity dropped suddenly but was only relatively mildly impacted at the Triassic-Jurassic boundary, although extinction rates among radiolarians rose significantly. Ammonites were affected substantially by the Triassic-Jurassic extinction and were nearly wiped out. Ceratitidans, the most prominent group of ammonites in the Triassic, became extinct at the end of the Rhaetian after having their diversity reduced significantly in the Norian, while other ammonite groups such as the Ammonitina, Lytoceratina, and Phylloceratina diversified from the Early Jurassic onward. Bivalves suffered heavy losses, although the extinction was highly selective, with some bivalve clades escaping substantial diversity losses. The Lilliput effect affected megalodontid bivalves, whereas file shell bivalves experienced the Brobdingnag effect, the reverse of the Lilliput effect, as a result of the mass extinction event. There is some evidence of a bivalve cosmopolitanism event during the mass extinction. Additionally, following the TJME, mobile bivalve taxa outnumbered stationary bivalve taxa. Gastropod diversity was barely affected at the Triassic-Jurassic boundary, although gastropods gradually suffered numerous losses over the late Norian and Rhaetian, during the leadup to the TJME. Brachiopods declined in diversity at the end of the Triassic before rediversifying in the Sinemurian and Pliensbachian. Bryozoans, particularly taxa that lived in offshore settings, had already been in decline since the Norian and suffered further losses in the TJME. Conulariids seemingly completely died out at the end of the Triassic. Around 96% of coral genera died out, with integrated corals being especially devastated. Corals practically disappeared from the Tethys Ocean at the end of the Triassic except for its northernmost reaches, resulting in an early Hettangian "coral gap". There is good evidence for a collapse in the reef community, which was likely driven by ocean acidification resulting from the carbon dioxide supplied to the atmosphere by the CAMP eruptions.
Most evidence points to a relatively fast recovery from the mass extinction. Benthic ecosystems recovered far more rapidly after the TJME than they did after the PTME. British Early Jurassic benthic marine environments display a relatively rapid recovery that began almost immediately after the end of the mass extinction despite numerous relapses into anoxic conditions during the earliest Jurassic. In the Neuquén Basin, recovery began in the late early Hettangian and lasted until a new biodiversity equilibrium in the late Hettangian. Also despite recurrent anoxic episodes, large bivalves began to reappear shortly after the extinction event. Siliceous sponges dominated the immediate aftermath interval thanks to the enormous influx of silica into the oceans from the weathering of the CAMP's aerially extensive basalts. Some clades recovered more slowly than others, however, as exemplified by corals and their disappearance in the early Hettangian.
Marine vertebrates
Fish did not suffer a mass extinction at the end of the Triassic. The Late Triassic in general did experience a gradual drop in actinopterygian diversity after an evolutionary explosion in the Middle Triassic. Though this may have been due to falling sea levels or the Carnian Pluvial Event, it may instead be a result of sampling bias, considering that Middle Triassic fish have been more extensively studied than Late Triassic fish. Despite the apparent drop in diversity, neopterygians (which include most modern bony fish) suffered less than more "primitive" actinopterygians, indicating a biological turnover in which modern groups of fish started to supplant earlier groups. Pycnodontiform fish were not significantly affected. Conodonts, which were prominent index fossils throughout the Paleozoic and Triassic, finally became extinct at the T-J boundary following declining diversity.
Like fish, marine reptiles experienced a substantial drop in diversity between the Middle Triassic and the Jurassic. However, their extinction rate at the Triassic–Jurassic boundary was not elevated. The highest extinction rates experienced by Mesozoic marine reptiles actually occurred at the end of the Ladinian stage, which corresponds to the end of the Middle Triassic. The only marine reptile families which became extinct at or slightly before the Triassic–Jurassic boundary were the placochelyids (the last family of placodonts), making plesiosaurs the only surviving sauropterygians, and giant ichthyosaurs such as shastasaurids. Nevertheless, some authors have argued that the end of the Triassic acted as a genetic "bottleneck" for ichthyosaurs, which never regained the level of anatomical diversity and disparity which they possessed during the Triassic. The high diversity of rhomaleosaurids immediately after the TJME points to a gradual extinction of marine reptiles rather than an abrupt one.
Terrestrial animals
Terrestrial fauna was affected by the TJME much more severely than marine fauna. One of the earliest pieces of evidence for a Late Triassic extinction was a major turnover in terrestrial tetrapods such as amphibians, reptiles, and synapsids. Edwin H. Colbert drew parallels between the system of extinction and adaptation between the Triassic–Jurassic and Cretaceous–Paleogene boundaries. He recognized how dinosaurs, lepidosaurs (lizards and their relatives), and crocodyliforms (crocodilians and their relatives) filled the niches of more ancient groups of amphibians and reptiles which were extinct by the start of the Jurassic. Olsen (1987) estimated that 42% of all terrestrial tetrapods became extinct at the end of the Triassic, based on his studies of faunal changes in the Newark Supergroup of eastern North America. More modern studies have debated whether the turnover in Triassic tetrapods was abrupt at the end of the Triassic, or instead more gradual.
During the Triassic, amphibians were mainly represented by large, crocodile-like members of the order Temnospondyli. Although the earliest lissamphibians (modern amphibians like frogs and salamanders) did appear during the Triassic, they would become more common in the Jurassic while the temnospondyls diminished in diversity past the Triassic–Jurassic boundary. Although the decline of temnospondyls did send shockwaves through freshwater ecosystems, it was probably not as abrupt as some authors have suggested. Brachyopoids, for example, survived until the Cretaceous according to new discoveries in the 1990s. Several temnospondyl groups did become extinct near the end of the Triassic despite earlier abundance, but it is uncertain how close their extinctions were to the end of the Triassic. The last known metoposaurids ("Apachesaurus") were from the Redonda Formation, which may have been early Rhaetian or late Norian. Gerrothorax, the last known plagiosaurid, has been found in rocks which are probably (but not certainly) Rhaetian, while a capitosaur humerus was found in Rhaetian-age deposits in 2018. Therefore, plagiosaurids and capitosaurs were likely victims of an extinction at the very end of the Triassic, while most other temnospondyls were already extinct.
Terrestrial reptile faunas were dominated by archosauromorphs during the Triassic, particularly phytosaurs and members of Pseudosuchia (the reptile lineage which leads to modern crocodilians). In the Early Jurassic and onwards, dinosaurs and pterosaurs became the most common land reptiles, while small reptiles were mostly represented by lepidosauromorphs (such as lizards and tuatara relatives). Among pseudosuchians, only small crocodylomorphs did not become extinct by the end of the Triassic, with both dominant herbivorous subgroups (such as aetosaurs) and carnivorous ones (rauisuchids) having died out. Phytosaurs, drepanosaurs, trilophosaurids, tanystropheids, and procolophonids, which were other common reptiles in the Late Triassic, had also become extinct by the start of the Jurassic. However, pinpointing the extinction of these different land reptile groups is difficult, as the last stage of the Triassic, the Rhaetian, and the first stage of the Jurassic, the Hettangian, each have few records of large land animals; some paleontologists have considered only phytosaurs and procolophonids to have become extinct at the Triassic–Jurassic boundary, with other groups having become extinct earlier. However, it is likely that many other groups survived up until the boundary according to British fissure deposits from the Rhaetian. Aetosaurs, kuehneosaurids, drepanosaurs, thecodontosaurids, "saltoposuchids" (like Terrestrisuchus), trilophosaurids, and various non-crocodylomorph pseudosuchians are all examples of Rhaetian reptiles which may have become extinct at the Triassic–Jurassic boundary.
In the TJME's aftermath, dinosaurs experienced a major radiation, filling some of the niches vacated by the victims of the extinction. Crocodylomorphs likewise underwent a very rapid and major adaptive radiation. Surviving non-mammalian synapsid clades similarly played a role in the post-TJME adaptive radiation during the Early Jurassic.
Herbivorous insects were minimally affected by the TJME; evidence from the Sichuan Basin shows they were overall able to quickly adapt to the floristic turnover by exploiting newly abundant plants. Odonates suffered highly selective losses, and their morphospace was heavily restructured as a result.
Terrestrial plants
The extinction event marks a floral turnover as well, with estimates of the percentage of Rhaetian pre-extinction plants being lost ranging from 17% to 73%. Though spore turnovers are observed across the Triassic-Jurassic boundary, the abruptness of this transition and the relative abundances of given spore types both before and after the boundary are highly variable from one region to another, pointing to a global ecological restructuring rather than a mass extinction of plants. Overall, plants suffered minor diversity losses on a global scale as a result of the extinction, but species turnover rates were high and substantial changes occurred in terms of relative abundance and growth distribution among taxa. Evidence from Central Europe suggests that rather than a sharp, very rapid decline followed by an adaptive radiation, a more gradual turnover in both fossil plants and spores with several intermediate stages is observed over the course of the extinction event. Extinction of plant species can in part be explained by the suspected increased carbon dioxide in the atmosphere as a result of CAMP volcanic activity, which would have created photoinhibition and decreased transpiration levels among species with low photosynthetic plasticity, such as the broad leaved Ginkgoales which declined to near extinction across the Tr–J boundary.
Ferns and other species with dissected leaves displayed greater adaptability to atmosphere conditions of the extinction event, and in some instances were able to proliferate across the boundary and into the Jurassic. In the Jiyuan Basin of North China, Classopolis content increased drastically in concordance with warming, drying, wildfire activity, enrichments in isotopically light carbon, and an overall reduction in floral diversity. In the Sichuan Basin, relatively cool mixed forests in the late Rhaetian were replaced by hot, arid fernlands during the Triassic–Jurassic transition, which in turn later gave way to a cheirolepid-dominated flora in the Hettangian and Sinemurian. The abundance of ferns in China that were resistant to high levels of aridity increased significantly across the Triassic–Jurassic boundary, though ferns better adapted for moist, humid environments declined, indicating that plants experienced major environmental stress, albeit not an outright mass extinction. In some regions, however, major floral extinctions did occur, with some researchers challenging the hypothesis of there being no significant floral mass extinction on this basis. In the Newark Supergroup of the United States East Coast, about 60% of the diverse monosaccate and bisaccate pollen assemblages disappear at the Tr–J boundary, indicating a major extinction of plant genera. Early Jurassic pollen assemblages are dominated by Corollina, a new genus that took advantage of the empty niches left by the extinction. The site of St. Audrie's Bay displays a shift from diverse gymnosperm-dominated forests to Cheirolepidiaceae-dominated monocultures. The Danish Basin saw 34% of its Rhaetian spore-pollen assemblage, including Cingulizonates rhaeticus, Limbosporites lundbladiae, Polypodiisporites polymicroforatus, and Ricciisporites tuberculatus, disappear, with the post-extinction plant community being dominated by pinacean conifers such as Pinuspollenites minimus and tree ferns such as Deltoidospora, with ginkgos, cycads, cypresses, and corystospermous seed ferns also represented. Along the margins of the European Epicontinental Sea and the European shores of the Tethys, coastal and near-coastal mires fell victim to an abrupt sea level rise. These mires were replaced by a pioneering opportunistic flora after an abrupt sea level fall, although its heyday was short lived and it died out shortly after its rise. The opportunists that established themselves along the Tethyan coastline were primarily spore-producers. In the Eiberg Basin of the Northern Calcareous Alps, there was a very rapid palynomorph turnover. The palynological and palaeobotanical succession in Queensland shows a Classopolis bloom after the TJME. Polyploidy may have been an important factor that mitigated a conifer species' risk of going extinct.
Possible causes
Central Atlantic Magmatic Province
The leading and best evidenced explanation for the TJME is massive volcanic eruptions, specifically from the Central Atlantic Magmatic Province (CAMP), the largest known large igneous province by area, and one of the most voluminous, with its flood basalts extending across parts of southwestern Europe, northwestern Africa, northeastern South America, and southeastern North America. The coincidence and synchrony of CAMP activity and the TJME are indicated by uranium-lead dating, argon-argon dating, and palaeomagnetism. The isotopic composition of fossil soils and marine sediments near the boundary between the Late Triassic and Early Jurassic has been tied to a large negative δ13C excursion, with values as low as −2.8‰. Carbon isotopes of hydrocarbons (n-alkanes) derived from leaf wax and lignin, and total organic carbon from two sections of lake sediments interbedded with the CAMP in eastern North America have shown carbon isotope excursions similar to those found in the mostly marine St. Audrie's Bay section, Somerset, England; the correlation suggests that the TJME began at the same time in marine and terrestrial environments, slightly before the oldest basalts in eastern North America but simultaneous with the eruption of the oldest flows in Morocco, with both a critical CO2 greenhouse and a marine biocalcification crisis. Contemporaneous CAMP eruptions, mass extinction, and the carbon isotopic excursions are shown in the same places, making the case for a volcanic cause of a mass extinction. The observed negative carbon isotope excursion is lower in some sites that correspond to what was then eastern Panthalassa because of the extreme aridity of western Pangaea limiting weathering and erosion there. The negative δ13C excursion associated with CAMP volcanism lasted for approximately 20,000 to 40,000 years, or about one or two of Earth's axial precession cycles, although the carbon cycle was so disrupted that it did not stabilise until the Sinemurian. Mercury anomalies from deposits in various parts of the world have further bolstered the volcanic cause hypothesis, as have anomalies from various platinum-group elements. Nickel enrichments are also observed at the Triassic-Jurassic boundary coevally with light carbon enrichments, providing yet more evidence of massive volcanism.
Some scientists initially rejected the volcanic eruption theory because the Newark Supergroup, a section of rock in eastern North America that records the Triassic–Jurassic boundary, contains no ash-fall horizons and because its oldest basalt flows were estimated to lie around 10 m above the transition zone, which they estimated to have occurred 610 kyr after the TJME. Also among their objections was that the Triassic-Jurassic boundary was poorly defined and the CAMP eruptions poorly constrained temporally. However, updated dating protocol and wider sampling has confirmed that the CAMP eruptions started in Morocco only a few thousand years before the extinction, preceding their onset in Nova Scotia and New Jersey, and that they continued in several more pulses for the next 600,000 years. Volcanic global warming has also been criticised as an explanation because some estimates have found that the amount of carbon dioxide emitted was only around 250 ppm, not enough to generate a mass extinction. In addition, at some sites, changes in carbon isotope ratios have been attributed to diagenesis and not any primary environmental changes.
Global warming
The flood basalts of the CAMP released gigantic quantities of carbon dioxide, a potent greenhouse gas causing intense global warming. Before the TJME, carbon dioxide levels were around 1,000 ppm as measured by the stomatal index of Lepidopteris ottonis, but this quantity jumped to 1,300 ppm at the onset of the extinction event. During the TJME, carbon dioxide concentrations increased fourfold. The record of CAMP degassing shows several distinct pulses of carbon dioxide immediately following each major pulse of magmatism, at least two of which amount to a doubling of atmospheric CO2. Carbon dioxide was emitted quickly and in enormous quantities compared to other periods of Earth's history; the rate of emission produced one of the most meteoric rises in carbon dioxide levels in Earth's entire history. It is estimated that a single volcanic pulse from the large igneous province would have emitted an amount of carbon dioxide roughly equivalent to projected anthropogenic carbon dioxide emissions for the 21st century. In addition, the flood basalts intruded through sediments that were rich in organic matter and combusted it, which led to the degassing of volatiles that further enhanced volcanic warming of the climate. Thermogenic carbon release through such contact metamorphism of carbon-rich deposits has been found to be a sensible hypothesis providing a coherent explanation for the magnitude of the negative carbon isotope excursions at the terminus of the Triassic. Global temperatures rose sharply by 3 to 4 °C. In some regions, the temperature rise was as great as 10 °C. Kaolinite-dominated clay mineral spectra reflect the extremely hot and humid greenhouse conditions engendered by the CAMP. Soil erosion occurred as the hydrological cycle was accelerated by the extreme global heat.
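To give a feel for the magnitudes involved, the sketch below applies the standard modern logarithmic approximation for CO2 radiative forcing (Myhre et al. 1998) to the concentrations quoted above; this formula is not from the TJME literature cited here, and the climate sensitivity figure is an assumed, illustrative value.

```python
import math

def co2_forcing(c_ppm: float, c0_ppm: float) -> float:
    """Radiative forcing in W/m^2 from a CO2 change, using the
    common approximation dF = 5.35 * ln(C/C0) (Myhre et al. 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Pre-TJME ~1,000 ppm rising fourfold, as stated above (two doublings):
f = co2_forcing(4_000, 1_000)
print(f"forcing ~ {f:.1f} W/m^2")   # ~7.4 W/m^2
# With an assumed sensitivity of ~0.5 K per W/m^2, this is broadly
# consistent with the 3-4 degC global rise described in the text.
print(f"warming ~ {0.5 * f:.1f} K")
```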
The catastrophic dissociation of gas hydrates as a positive feedback resulting from warming, which has been suggested as one possible cause of the PTME, the largest mass extinction of all time, may have exacerbated greenhouse conditions, although others suggest that methane hydrate release was temporally mismatched with the TJME and thus not a cause of it.
Global cooling
Besides the carbon dioxide-driven long-term global warming, CAMP volcanism had shorter term cooling effects resulting from the emission of sulphur dioxide aerosols. A 2022 study shows that high latitudes had colder climates with evidence of mild glaciation. The authors propose that cold periods ("ice ages") induced by volcanic ejecta clouding the atmosphere might have favoured endothermic animals, with dinosaurs, pterosaurs, and mammals being more capable at enduring these conditions than large pseudosuchians due to insulation.
Metal poisoning
CAMP volcanism released enormous amounts of toxic mercury. The appearance of high rates of mutagenesis of varying severity in fossil spores during the TJME coincides with mercury anomalies and is thus believed by researchers to have been caused by mercury poisoning. δ202Hg and Δ199Hg evidence suggests that volcanism caused the mercury loading directly at the Triassic-Jurassic boundary, but that there were later bouts of elevated mercury in the environment during the Early Jurassic caused by eccentricity-forced enhancement of hydrological cycling and erosion that resulted in remobilisation of volcanically injected mercury that had been deposited in wetlands.
Wildfires
The intense, rapid warming is believed to have resulted in increased storminess and lightning activity as a consequence of the more humid climate. The uptick in lightning activity is in turn implicated as a cause of an increase in wildfire activity. The combined presence of charcoal fragments and heightened levels of pyrolytic polycyclic aromatic hydrocarbons in Polish sedimentary facies straddling the Triassic-Jurassic boundary indicates wildfires were extremely commonplace during the earliest Jurassic, immediately after the Triassic-Jurassic transition. Elevated wildfire activity is also known from the Junggar Basin. In the Jiyuan Basin, two distinct pulses of drastically elevated wildfire activity are known: the first mainly affected canopies and occurred amidst relatively humid conditions while the second predominantly affected ground cover and was associated with aridity. Frequent wildfires, combined with increased seismic activity from CAMP emplacement, led to apocalyptic soil degradation.
Ocean acidification
In addition to these climatic effects, oceanic uptake of volcanogenic carbon and sulphur dioxide would have led to a significant decrease of seawater pH known as ocean acidification, which is discussed as a relevant driver of marine extinction. Evidence for ocean acidification as an extinction mechanism comes from the preferential extinction of marine organisms with thick aragonitic skeletons and little biotic control of biocalcification (e.g., corals, hypercalcifying sponges), which resulted in a coral reef collapse and an early Hettangian "coral gap". The decline of megalodontoid bivalves is also attributed to increased seawater acidity. Extensive fossil remains of malformed calcareous nannoplankton, a common sign of significant drops in pH, have also been extensively reported from the Triassic-Jurassic boundary. Global interruption of carbonate deposition at the Triassic-Jurassic boundary has been cited as additional evidence for catastrophic ocean acidification. Upwardly developing aragonite fans in the shallow subseafloor may also reflect decreased pH, these structures being speculated to have precipitated concomitantly with acidification. In some studied sections, the TJME biocalcification crisis is masked by emersion of carbonate platforms induced by marine regression.
Anoxia
Anoxia was another mechanism of extinction; the end-Triassic extinction was coeval with an uptick in black shale deposition and a pronounced negative δ238U excursion, indicating a major decrease in marine oxygen availability. Isorenieratane concentration increase reveals that populations of green sulphur bacteria, which photosynthesise using hydrogen sulphide instead of water, grew significantly across the Triassic-Jurassic boundary; these findings indicate that euxinia, a form of anoxia defined by not just the absence of dissolved oxygen but high concentrations of hydrogen sulphide, also developed in the oceans. A meteoric shift towards positive sulphur isotope ratios in reduced sulphur species indicates a complete utilisation of sulphate by sulphate reducing bacteria. Evidence of anoxia has been discovered at the Triassic-Jurassic boundary across the world's oceans; the western Tethys, eastern Tethys, and Panthalassa were all affected by a precipitous drop in seawater oxygen, although at a few sites, the TJME was associated with fully oxygenated waters. Positive δ15N excursions have also been interpreted as evidence of anoxia concomitant with increased denitrification in marine sediments in the TJME's aftermath.
In northeastern Panthalassa, episodes of anoxia and euxinia were already occurring during the Rhaetian before the TJME, making its marine ecosystems unstable even before the main crisis began. This early phase of environmental degradation in eastern Panthalassa may have been caused by an early phase of CAMP activity. Anoxic, reducing conditions were likewise present in western Panthalassa off the coast of what is now Japan for about a million years prior to the TJME. During the TJME, the rapid warming and increase in continental weathering led to the stagnation of ocean circulation and deoxygenation of seawater in many ocean regions, causing catastrophic marine environmental effects in conjunction with ocean acidification, which was enhanced and exacerbated by widespread photic zone euxinia through organic matter respiration and carbon dioxide release. Off the shores of the Wrangellia Terrane, the onset of photic zone euxinia was preceded by an interval of limited nitrogen availability and increased nitrogen fixation in surface waters while euxinia developed in bottom waters. In what is now northwestern Europe, shallow seas became salinity stratified, enabling easy development of anoxia. Reduced salinity, in conjunction with increased influx of terrestrial organic matter, enkindled anoxia in the Eiberg Basin. The persistence of anoxia into the Hettangian age may have helped delay the recovery of marine life in the extinction's aftermath, and recurrent hydrogen sulphide poisoning likely had the same retarding effect on biotic rediversification.
Ozone depletion
Research on the role of ozone shield deterioration during the Permian-Triassic mass extinction has suggested that it may have been a factor in the TJME as well. A spike in the abundance of unseparated tetrads of Kraeuselisporites reissingerii has been interpreted as evidence of increased ultraviolet radiation flux resulting from ozone layer damage caused by volcanic aerosols.
Gradual climate change
The extinctions at the end of the Triassic were initially attributed to gradually changing environments. In his 1958 study recognizing biological turnover between the Triassic and Jurassic, Edwin H. Colbert proposed that this extinction was a result of geological processes decreasing the diversity of land biomes. He considered the Triassic period to be an era of the world experiencing a variety of environments, from towering highlands to arid deserts to tropical marshes. In contrast, the Jurassic period was much more uniform both in climate and elevation due to excursions by shallow seas.
Later studies noted a clear trend towards increased aridification towards the end of the Triassic. Although high-latitude areas like Greenland and Australia actually became wetter, most of the world experienced more drastic changes in climate as indicated by geological evidence. This evidence includes an increase in carbonate and evaporite deposits (which are most abundant in dry climates) and a decrease in coal deposits (which primarily form in humid environments such as coal forests). In addition, the climate may have become much more seasonal, with long droughts interrupted by severe monsoons. The world gradually got warmer over this time as well; from the late Norian to the Rhaetian, mean annual temperatures rose by 7 to 9 °C. The site of Hochalm in Austria preserves evidence of carbon cycle perturbations during the Rhaetian preceding the Triassic-Jurassic boundary, potentially having a role in the ecological crisis.
Sea level fall
Geological formations in Europe and the Middle East seem to indicate a drop in sea levels at the end of the Triassic associated with the TJME. Although falling sea levels have sometimes been considered a culprit for marine extinctions, evidence is inconclusive since many sea level drops in geological history are not correlated with increased extinctions. However, there is still some evidence that marine life was affected by secondary processes related to falling sea levels, such as decreased oxygenation (caused by sluggish circulation), or increased acidification. These processes do not seem to have been worldwide, with the sea level fall observed in European sediments believed to be not global but regional, but they may explain local extinctions in European marine fauna. However, it is not universally accepted that even this local diversity drop was caused by sea level fall. A pronounced sea level change in latest Triassic records from Lake Williston in northeastern British Columbia, which was then the northeastern margin of Panthalassa, resulted in an extinction event of infaunal (sediment-dwelling) bivalves, though not epifaunal ones.
Extraterrestrial impact
Some have hypothesized that an impact from an asteroid or comet caused the Triassic–Jurassic extinction, similar to the extraterrestrial object which was the main factor in the Cretaceous–Paleogene extinction about 66 million years ago, as evidenced by the Chicxulub crater in Mexico. However, so far no impact crater of sufficient size has been dated to precisely coincide with the Triassic–Jurassic boundary.
Nevertheless, the Late Triassic did experience several impacts, including the second-largest confirmed impact in the Mesozoic. The Manicouagan Reservoir in Quebec is one of the most visible large impact craters on Earth, and at in diameter it is tied with the Eocene Popigai impact structure in Siberia as the fourth-largest impact crater on Earth. Olsen et al. (1987) were the first scientists to link the Manicouagan crater to the Triassic–Jurassic extinction, citing its age which at the time was roughly considered to be Late Triassic. More precise radiometric dating by Hodych & Dunning (1992) has shown that the Manicouagan impact occurred about 214 million years ago, about 13 million years before the Triassic–Jurassic boundary. Therefore, it could not have been responsible for an extinction precisely at the Triassic–Jurassic boundary. Nevertheless, the Manicouagan impact did have a widespread effect on the planet; a 214-million-year-old ejecta blanket of shocked quartz has been found in rock layers as far away as England and Japan. There is still a possibility that the Manicouagan impact was responsible for a small extinction midway through the Late Triassic at the Carnian–Norian boundary, although the disputed age of this boundary (and whether an extinction actually occurred in the first place) makes it difficult to correlate the impact with extinction. Onoue et al. (2016) alternatively proposed that the Manicouagan impact was responsible for a marine extinction in the middle of the Norian which affected radiolarians, sponges, conodonts, and Triassic ammonoids. Thus, the Manicouagan impact may have been partially responsible for the gradual decline in the latter two groups which culminated in their extinction at the Triassic–Jurassic boundary. The boundary between the Adamanian and Revueltian land vertebrate faunal zones, which involved extinctions and faunal changes in tetrapods and plants, was possibly also caused by the Manicouagan impact, although discrepancies between magnetochronological and isotopic dating lead to some uncertainty.
Other Triassic craters are closer to the Triassic–Jurassic boundary but also much smaller than the Manicouagan reservoir. The eroded Rochechouart impact structure in France has most recently been dated to million years ago, but at across (possibly up to across originally), it appears to be too small to have affected the ecosystem, although it has been speculated to have played a role in an alleged much smaller extinction event at the Norian-Rhaetian boundary. The wide Saint Martin crater in Manitoba has been proposed as a candidate for a possible TJME-causing impact, but it has since been dated to be Carnian. Other putative or confirmed Triassic craters include the wide Puchezh-Katunki crater in Eastern Russia (though it may be Jurassic in age), the wide Obolon' crater in Ukraine, and the wide Red Wing Creek structure in North Dakota. Spray et al. (1998) noted that the Manicouagan, Rochechouart, and Saint Martin craters all seem to lie at the same latitude, and that the Obolon' and Red Wing craters form parallel arcs with the Rochechouart and Saint Martin craters, respectively. Spray and his colleagues hypothesized that the Triassic experienced a "multiple impact event", a large fragmented asteroid or comet which broke up and impacted the earth in several places at the same time. Such an impact has been observed in the present day, when Comet Shoemaker-Levy 9 broke up in 1992 and hit Jupiter in 1994. However, the "multiple impact event" hypothesis for Triassic impact craters has not been well-supported; Kent (1998) noted that the Manicouagan and Rochechouart craters were formed in eras of different magnetic polarity, and radiometric dating of the individual craters has shown that the impacts occurred millions of years apart.
Shocked quartz has been found in Rhaetian deposits from the Northern Apennines of Italy, providing possible evidence of an end-Triassic extraterrestrial impact. Certain trace metals indicative of a bolide impact have been found in the late Rhaetian, though not at the Triassic-Jurassic boundary itself; the discoverers of these trace metal anomalies purport that such a bolide impact could only have been an indirect cause of the TJME. The discovery of seismites two to four metres thick coeval with the carbon isotope fluctuations associated with the TJME has been interpreted as evidence of a possible bolide impact, although no definitive link between these seismites and any impact event has been found.
On the other hand, the dissimilarity between the isotopic perturbations characterising the TJME and those characterising the end-Cretaceous mass extinction makes an extraterrestrial impact highly unlikely to have been the cause of the TJME, according to many researchers. Various trace metal ratios, including palladium/iridium, platinum/iridium, and platinum/rhodium, in rocks deposited during the TJME have numerical values very different from what would be expected in an extraterrestrial impact scenario, providing further evidence against this hypothesis. The Triassic-Jurassic boundary furthermore lacks a fern spore spike akin to that observed at the terminus of the Cretaceous, inconsistent with an asteroid impact.
Comparisons to present climate change
The extremely rapid, centuries-long timescale of carbon emissions and global warming caused by pulses of CAMP volcanism has drawn comparisons between the Triassic-Jurassic mass extinction and anthropogenic global warming, currently causing the Holocene extinction. The current rate of carbon dioxide emissions is around 50 gigatonnes per year, hundreds of times faster than during the latest Triassic, although the lack of extremely detailed stratigraphic resolution and the pulsed nature of CAMP volcanism mean that individual pulses of greenhouse gas emissions likely occurred on timescales comparable to human release of warming gases since the Industrial Revolution. The degassing rate of the first pulse of CAMP volcanism is estimated to have been around half of the rate of modern anthropogenic emissions. Palaeontologists studying the TJME and its impacts warn that a major reduction in humanity's carbon dioxide emissions to slow down climate change is of critical importance for preventing a catastrophe similar to the TJME from befalling the modern biosphere. If human-induced climate change persists as is, predictions can be made as to how various aspects of the biosphere will respond based on records of the TJME. For example, current conditions such as increased carbon dioxide levels, ocean acidification, and ocean deoxygenation create a climate for marine life similar to that of the Triassic-Jurassic boundary, so it is commonly assumed that, should the trends continue, modern reef-building taxa and skeletal benthic organisms will be preferentially impacted. The end-Triassic reef crisis has been specifically cited as a possible analogue for the fate of present coral reefs should anthropogenic global warming continue.
| Physical sciences | Geological history | null |
31199 | https://en.wikipedia.org/wiki/Trireme | Trireme | A trireme was an ancient vessel and a type of galley that was used by the ancient maritime civilizations of the Mediterranean Sea, especially the Phoenicians, ancient Greeks and Romans.
The trireme derives its name from its three rows of oars, manned with one man per oar. The early trireme was a development of the penteconter, an ancient warship with a single row of 25 oars on each side (i.e., a single-banked boat), and of the bireme, a warship with two banks of oars, of Phoenician origin. The word dieres does not appear until the Roman period. According to Morrison and Williams, "It must be assumed the term pentekontor covered the two-level type". As a ship, it was fast and agile and was the dominant warship in the Mediterranean from the 7th to the 4th centuries BC, after which it was largely superseded by the larger quadriremes and quinqueremes. Triremes played a vital role in the Persian Wars, the creation of the Athenian maritime empire and its downfall during the Peloponnesian War.
Medieval and early modern galleys with three files of oarsmen per side are sometimes referred to as triremes.
History
Origins
Depictions of two-banked ships (biremes), with or without the parexeiresia (the outriggers, see below), are common in 8th century BC and later vases and pottery fragments, and it is at the end of that century that the first references to three-banked ships are found. Fragments from an 8th-century relief at the Assyrian capital of Nineveh depicting the fleets of Tyre and Sidon show ships with rams, and fitted with oars pivoted at two levels. They have been interpreted as two-decked warships, and also as triremes.
Modern scholarship is divided on the provenance of the trireme, Greece or Phoenicia, and the exact time it developed into the foremost ancient fighting ship. According to Thucydides, the trireme was introduced to Greece by the Corinthians in the late 8th century BC, and the Corinthian Ameinocles built four such ships for the Samians. This was interpreted by later writers, Pliny and Diodorus, to mean that triremes were invented in Corinth. Clement of Alexandria in the 2nd century, drawing on earlier works, explicitly attributes the invention of the trireme (trikrotos naus, "three-banked ship") to the Sidonians, so the possibility remains that the earliest three-banked warships originated in Phoenicia.
Early use and development
Herodotus mentions that the Egyptian pharaoh Necho II (610–595 BC) built triremes on the Nile, for service in the Mediterranean, and in the Red Sea, but this reference is disputed by modern historians, and attributed to a confusion, since "triērēs" was by the 5th century used in the generic sense of "warship", regardless of its type. The first definite reference to the use of triremes in naval combat dates to , when, according to Herodotus, the tyrant Polycrates of Samos was able to contribute 40 triremes to a Persian invasion of Egypt (Battle of Pelusium). Thucydides meanwhile clearly states that in the time of the Persian Wars, the majority of the Greek navies consisted of (probably two-tiered) penteconters and ploia makrá ("long ships").
In any case, by the early 5th century, the trireme was becoming the dominant warship type of the eastern Mediterranean, with minor differences between the "Greek" and "Phoenician" types, as literary references and depictions of the ships on coins make clear. The first large-scale naval battle where triremes participated was the Battle of Lade during the Ionian Revolt, where the combined fleets of the Greek Ionian cities were defeated by the Persian fleet, composed of squadrons from their Phoenician, Carian, and Egyptian subjects.
The Persian Wars
Athens was at that time embroiled in a conflict with the neighbouring island of Aegina, which possessed a formidable navy. In order to counter this, and possibly with an eye already on the mounting Persian preparations, in 483/2 BC the Athenian statesman Themistocles used his political skills and influence to persuade the Athenian assembly to start the construction of 200 triremes, using the income of the newly discovered silver mines at Laurion. The first clash with the Persian navy was at the Battle of Artemisium, where both sides suffered great casualties. However, the decisive naval clash occurred at Salamis, where Xerxes' invasion fleet was decisively defeated.
After Salamis and another Greek victory over the Persian fleet at Mycale, the Ionian cities were freed, and the Delian League was formed under the aegis of Athens. Gradually, the predominance of Athens turned the League effectively into an Athenian Empire. The source and foundation of Athens' power was her strong fleet, composed of over 200 triremes. It not only secured control of the Aegean Sea and the loyalty of her allies, but also safeguarded the trade routes and the grain shipments from the Black Sea, which fed the city's burgeoning population. In addition, as it provided permanent employment for the city's poorer citizens, the fleet played an important role in maintaining and promoting the radical Athenian form of democracy. Athenian maritime power is the first example of thalassocracy in world history. Aside from Athens, other major naval powers of the era included Syracuse, Corfu and Corinth.
In the subsequent Peloponnesian War, naval battles fought by triremes were crucial in the power balance between Athens and Sparta. Despite numerous land engagements, Athens was finally defeated through the destruction of her fleet during the Sicilian Expedition, and finally, at the Battle of Aegospotami, at the hands of Sparta and her allies.
Design
Based on all archeological evidence, the design of the trireme most likely pushed the technological limits of the ancient world. Once the proper timbers and materials had been gathered, the builders had to consider the fundamentals of the trireme design. These fundamentals included accommodations, propulsion, weight and waterline, centre of gravity and stability, strength, and feasibility. All of these variables are dependent on one another; however, a certain area may be more important than another depending on the purpose of the ship.
The arrangement and number of oarsmen is the first deciding factor in the size of the ship. For a ship to travel at high speeds would require a high oar-gearing, which is the ratio between the outboard length of an oar and the inboard length; it is this arrangement of the oars which is unique and highly effective for the trireme. The ports would house the oarsmen with a minimal waste of space. There would be three files of oarsmen on each side tightly but workably packed by placing each man outboard of, and in height overlapping, the one below, provided that thalamian tholes were set inboard and their ports enlarged to allow oar movement.
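A quick way to see what "oar-gearing" means: it is simply the ratio of the oar's length outboard of the thole pin to its length inboard. The numbers in the sketch below are hypothetical, chosen only to illustrate the calculation.

```python
def oar_gearing(outboard_m: float, inboard_m: float) -> float:
    """Oar-gearing: ratio of the oar's length outboard of the thole
    (the fulcrum) to its length inboard, where the rower grips it."""
    return outboard_m / inboard_m

# Hypothetical 4.2 m oar pivoted 1.3 m from the handle end:
print(oar_gearing(4.2 - 1.3, 1.3))  # ~2.2: the blade end sweeps ~2.2x
                                    # farther than the handle per stroke
```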
Thalamian, zygian, and thranite are the English terms for the Greek words denoting the oarsmen in, respectively, the lowest, middle, and uppermost files of the triereis.
The tholes were pins that acted as fulcrums for the oars, allowing them to move. The center of gravity of the ship is low because of the overlapping formation of the files, which allows the ports to remain closer to the ship's walls. A lower center of gravity would provide adequate stability.
The trireme was constructed to maximize all traits of the ship to the point where, if any changes were made, the design would be compromised. Speed was maximized to the point where any less weight would have resulted in considerable losses to the ship's integrity. The center of gravity was placed at the lowest possible position, where the thalamian tholes were just above the waterline, which retained the ship's resistance to waves and possible rollover. If the center of gravity were placed any higher, the additional beams needed to restore stability would have resulted in the exclusion of the thalamian tholes due to the reduced hull space. The purpose of the area just below the center of gravity and the waterline was to allow bending of the hull when faced with up to 90 kN of force. The calculations of forces that could have been absorbed by the ship are arguable because there is not enough evidence to confirm the exact process of jointing used in ancient times. In a modern reconstruction of the ship, a polysulphide sealant was used to compare to the caulking that evidence suggests was used; however, this is also contentious because there is simply not enough evidence to authentically reproduce the triereis seams.
Triremes required a great deal of upkeep in order to stay afloat, as references to the replacement of ropes, sails, rudders, oars and masts in the middle of campaigns suggest. They also would become waterlogged if left in the sea for too long. In order to prevent this from happening, ships would have to be pulled from the water during the night. The use of lightwoods meant that the ship could be carried ashore by as few as 140 men. Beaching the ships at night, however, would leave the troops vulnerable to surprise attacks. While well-maintained triremes would last up to 25 years, during the Peloponnesian War, Athens had to build nearly 20 triremes a year to maintain their fleet of 300.
The Athenian trireme had two great cables of about 47 mm in diameter and twice the ship's length called hypozomata (undergirding), and carried two spares. They were possibly rigged fore and aft from end to end along the middle line of the hull just under the main beams and tensioned to 13.5 tonnes force. The hypozomata were considered important and secret: their export from Athens was a capital offense. This cable would act as a stretched tendon straight down the middle of the hull, and would have prevented hogging. Additionally, hull plank butts would remain in compression in all but the most severe sea conditions, reducing working of joints and consequent leakage. The hypozomata would also have significantly braced the structure of the trireme against the stresses of ramming, giving it an important advantage in combat. According to material scientist J.E. Gordon: "The hupozoma was therefore an essential part of the hulls of these ships; they were unable to fight, or even to go to sea at all, without it. Just as it used to be the practice to disarm modern warships by removing the breech-blocks from the guns, so, in classical times, disarmament commissioners used to disarm triremes by removing the hupozomata."
Dimensions
Excavations of the ship sheds (neōsoikoi, νεώσοικοι) at the harbour of Zea in Piraeus, which was the main war harbour of ancient Athens, were first carried out by Dragatsis and Wilhelm Dörpfeld in the 1880s. These have provided us with a general outline of the Athenian trireme. The sheds were ca. 40 m long and just 6 m wide. These dimensions are corroborated by the evidence of Vitruvius, whereby the individual space allotted to each rower was 2 cubits. With the Doric cubit of 0.49 m, this results in an overall ship length of just under 37 m. The height of the sheds' interior was established as 4.026 metres, leading to estimates that the height of the hull above the water surface was ca. 2.15 metres. Its draught was relatively shallow, about 1 metre, which, in addition to the relatively flat keel and low weight, allowed it to be beached easily.
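These figures invite a rough arithmetic check. A back-of-the-envelope sketch in Python, assuming 31 thranite rowing positions per side (from the 62 thranitai listed under Crew below) and an illustrative, unsourced allowance for bow and stern structure:

```python
# Rough check of the Zea shed estimate. Only the Doric cubit and the
# 2-cubit space per rower come from the text; the bow/stern allowance
# is an illustrative assumption.
doric_cubit = 0.49                    # metres
interscalmium = 2 * doric_cubit       # fore-and-aft space per rower
rowing_frame = 31 * interscalmium     # ~30.4 m for 31 positions per file
bow_and_stern = 6.5                   # assumed overhang, metres
print(rowing_frame + bow_and_stern)   # ~36.9 m, i.e. just under 37 m
```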
Construction
Construction of the trireme differed from modern practice. The construction of a trireme was expensive and required around 6,000 man-days of labour to complete. The ancient Mediterranean practice was to build the outer hull first, and the ribs afterwards. To secure and add strength to the hull, cables (hypozōmata) were employed, fitted in the keel and stretched by means of windlasses. Hence the triremes were often called "girded" when in commission.
The materials from which the trireme was constructed were an important aspect of its design. The three principal timbers were fir, pine, and cedar. The choice of timber depended primarily on where the construction took place: in Syria and Phoenicia, for example, triereis were made of cedar because pine was not readily available. Pine is stronger and more resistant to decay, but it is heavy, unlike fir, which was used because it was lightweight. The frame and internal structure would consist of pine and fir as a compromise between durability and weight.
Another very strong timber is oak; this was used primarily for the hulls of triereis, to withstand the force of hauling ashore. Other ships usually had hulls of pine, because they would normally come ashore at a port or ride at anchor. It was necessary to run the triereis onto the shore because there was simply no time to anchor a ship during war, and gaining control of enemy shores was crucial to the advancement of an invading army. (Petersen) The joints of the ship required wood that could still absorb water, rather than timber dried out to the point where no absorption could occur. There would be gaps between the planks of the hull when the ship was new, but, once submerged, the planks would absorb the water and expand, forming a watertight hull.
Problems occurred, for example, when shipbuilders used green wood for the hull: as green timber dries it loses moisture, causing cracks in the wood that could do catastrophic damage to the ship. The sailyards and masts were preferably made from fir, because fir trees grow naturally tall and could provide these parts in a single piece. Durable rope was made from both papyrus and white flax; evidence suggests that the use of such materials originated in Egypt. In addition, from the later third century BC, ropes began to be made from a variety of esparto grass.
The use of light woods meant that the ship could be carried ashore by as few as 140 men, but also that the hull soaked up water, which adversely affected its speed and maneuverability. But it was still faster than other warships.
Once the triremes were seaworthy, it is argued that they were highly decorated with "eyes, nameplates, painted figureheads, and various ornaments". These decorations were used both to show the wealth of the patron and to make the ship frightening to the enemy. The home port of each trireme was signaled by the wooden statue of a deity located above the bronze ram on the front of the ship. In the case of Athens, since most of the fleet's triremes were paid for by wealthy citizens, there was a natural sense of competition among these patrons to create the "most impressive" trireme, both to intimidate the enemy and to attract the best oarsmen. Of all military expenditure, triremes were the most intensive in labour and in investment of men and money.
Propulsion and capabilities
The ship's primary propulsion came from the 170 oars (kōpai), arranged in three rows, with one man per oar. Evidence for this is provided by Thucydides, who records that the Corinthian oarsmen carried "each his oar, cushion (hypersion) and oarloop". The ship also had two masts, a main (histos megas) and a small foremast (histos akateios), with square sails, while steering was provided by two steering oars at the stern (one at the port side, one to starboard).
Classical sources indicate that the trireme was capable of sustained speeds of ca. 6 knots at relatively leisurely oaring. There is also a reference by Xenophon of a single day's voyage from Byzantium to Heraclea Pontica, which translates as an average speed of 7.37 knots. These figures seem to be corroborated by the tests conducted with the reconstructed Olympias: a maximum speed of 8 knots and a steady speed of 4 knots could be maintained, with half the crew resting at a time. Given the imperfect nature of the reconstructed ship, as well as the fact that it was manned by totally untrained modern men and women, it is reasonable to suggest that ancient triremes, expertly built and navigated by trained men, would attain higher speeds.
The distance a trireme could cover in a given day depended much on the weather. On a good day, the oarsmen, rowing for 6–8 hours, could propel the ship between . There were rare instances, however, when experienced crews and new ships were able to cover nearly twice that distance (Thucydides mentions a trireme travelling 300 kilometres in one day). The commanders of the triremes also had to stay aware of the condition of their men. They had to keep their crews comfortably paced, so as not to exhaust them before battle.
Crew
The total complement (plērōma) of the ship was about 200. These were divided into the 170 rowers (eretai), who provided the ship's motive power, the deck crew headed by the trierarch and a marine detachment. The trierarch would be situated in the rear of the ship, and relay orders to the rest of the crew via the rowmaster. For the crew of Athenian triremes, the ships were an extension of their democratic beliefs. Rich and poor rowed alongside each other. Victor Davis Hanson argues that this "served the larger civic interest of acculturating thousands as they worked together in cramped conditions and under dire circumstances."
During the Peloponnesian War, there were a few variations on the typical crew layout of a trireme. One was a drastically reduced number of oarsmen, so that the ship could be used as a troop transport: the thranites would row from the top benches while the space below was filled with hoplites. In another variation, the Athenians used ten or so triremes for transporting horses; such triremes carried 60 oarsmen, with the rest of the ship given over to the horses.
The trireme was designed for day-long journeys, with no capacity to stay at sea overnight or to carry the provisions needed to sustain its crew for that long. Each crewman needed 2 gallons (7.6 l) of fresh drinking water a day to stay hydrated, but it is not known exactly how this was stored and distributed. This meant that all those aboard depended for supplies on the land and peoples of wherever they landed each night, which could entail travelling up to eighty kilometres to procure provisions. In the Peloponnesian War, the beached Athenian fleet was caught unawares more than once while out looking for food (the Battle of Syracuse and the Battle of Aegospotami). Cities visited, which suddenly found themselves needing to provide for large numbers of sailors, usually did not mind the extra business, though those in charge of the fleet had to be careful not to deplete them of resources.
Trierarch
In Athens, the ship's patron was known as the trierarch (triērarchos). He was a wealthy Athenian citizen (usually from the class of the pentakosiomedimnoi), responsible for manning, fitting out and maintaining the ship for his liturgical year at least; the ship itself belonged to Athens. The triērarchia was one of the liturgies of ancient Athens; although it afforded great prestige, it constituted a great financial burden, so that in the 4th century, it was often shared by two citizens, and after 397 BC it was assigned to special boards.
Deck crew
The deck and command crew (hypēresia) was headed by the helmsman, the kybernētēs, who was always an experienced seaman and was often the commander of the vessel. These experienced sailors were to be found on the upper levels of the triremes. Other officers were the bow lookout (prōreus or prōratēs), the boatswain (keleustēs), the quartermaster (pentēkontarchos), the shipwright (naupēgos), the piper (aulētēs) who gave the rowers' rhythm and two superintendents (toicharchoi), in charge of the rowers on each side of the ship. What constituted these sailors' experience was a combination of superior rowing skill (physical stamina and/or consistency in hitting with a full stroke) and previous battle experience. The sailors were likely in their thirties and forties. In addition, there were ten sailors handling the masts and the sails.
Rowers
In the ancient navies, crews were composed not of galley slaves but of free men. In the Athenian case in particular, service in the ships was an integral part of the military service provided by the lower classes, the thētai, although metics and hired foreigners were also accepted. Although it has been argued that slaves formed part of the rowing crew in the Sicilian Expedition, a typical Athenian trireme crew during the Peloponnesian War consisted of 80 citizens, 60 metics and 60 foreign hands. Indeed, in the few emergency cases where slaves were used to crew ships, these were deliberately set free, usually before being employed. For instance, the tyrant Dionysius I of Syracuse once set all the slaves of Syracuse free to man his galleys, thus employing freedmen, but otherwise relied on citizens and foreigners as oarsmen.
In the Athenian navy, the crews enjoyed long practice in peacetime, becoming skilled professionals and ensuring Athens' supremacy in naval warfare. The rowers were divided according to their positions in the ship into thranitai, zygitai, and thalamitai. According to the excavated Naval Inventories, lists of ships' equipment compiled by the Athenian naval boards, there were:
62 thranitai in the top row (thranos means "deck"). They rowed through the parexeiresia, an outrigger which enabled the inclusion of the third row of oars without significant increase to the height and loss of stability of the ship. Greater demands were placed upon their strength and synchronization than on those of the other two rows.
54 zygitai in the middle row, named after the beams (zygoi) on which they sat.
54 thalamitai or thalamioi in the lowest row, (thalamos means "hold"). Their position was certainly the most uncomfortable, being underneath their colleagues and also exposed to the water entering through the oarholes, despite the use of the askōma, a leather sleeve through which the oar emerged.
Most of the rowers (108 of the 170: the zygitai and thalamitai) could not see the water because of the design of the ship and therefore rowed blind, so coordinating the rowing required great skill and practice. It is not known exactly how this was done, but there are literary and visual references to the use of gestures and pipe playing to convey orders to rowers. In the sea trials of the reconstruction Olympias, it was evident that this was a difficult problem to solve, given the amount of noise that a full rowing crew generated. In Aristophanes' play The Frogs, two different rowing chants can be found: "ryppapai" and "o opop", both corresponding quite well to the sound and motion of the oar going through its full cycle.
Marines
A varying number of marines (epibatai), usually 10–20, were carried aboard for boarding actions. At the Battle of Salamis, each Athenian ship was recorded to have 14 hoplites and 4 archers (usually Scythian mercenaries) on board, but Herodotus narrates that the Chiots had 40 hoplites on board at Lade and that the Persian ships carried a similar number. This reflects the different practices between the Athenians and other, less professional navies. Whereas the Athenians relied on speed and maneuverability, where their highly trained crews had the advantage, other states favored boarding, in a situation that closely mirrored the one that developed during the First Punic War. Grappling hooks would be used both as a weapon and for towing damaged ships (ally or enemy) back to shore. When the triremes were alongside each other, marines would either spear the enemy or jump across and cut the enemy down with their swords. As the presence of too many heavily armed hoplites on deck tended to destabilize the ship, the epibatai were normally seated, only rising to carry out any boarding action. The hoplites belonged to the middle social classes, so that they came immediately next to the trierarch in status aboard the ship.
Tactics
In the ancient world, naval combat relied on two methods: boarding and ramming. Artillery in the form of ballistas and catapults was widespread, especially in later centuries, but its inherent technical limitations meant that it could not play a decisive role in combat. The method for boarding was to brush alongside the enemy ship with oars drawn in, in order to break the enemy's oars and render the ship immobile and unable to get away, and then to board it and engage in hand-to-hand combat.
Rams (embola) were fitted to the prows of warships and were used to rupture the hull of the enemy ship. The preferred method of attack was to come in from astern, with the aim not of creating a single hole but of rupturing as great a length of the enemy vessel as possible. The speed necessary for a successful impact depended on the angle of attack: the greater the angle, the lower the speed required. At 60 degrees, 4 knots was enough to penetrate the hull, while at 30 degrees the required speed rose to 8 knots. If the target was for some reason already moving in the direction of the attacker, even less speed was required, especially if the hit came amidships. The Athenians especially became masters in the art of ramming, using light, un-decked (aphraktai) triremes.
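One speculative way to reconcile these figures (a hypothetical model, not something stated in the sources) is that penetration required the speed component normal to the enemy hull to exceed a roughly constant threshold, so that the required speed grows as 1/sin(angle):

```python
# Hypothetical model: the ram penetrates when the speed component normal
# to the enemy hull exceeds a threshold v_perp (value fitted by eye).
import math

def required_speed(angle_deg, v_perp=3.7):   # knots; v_perp is an assumption
    return v_perp / math.sin(math.radians(angle_deg))

print(round(required_speed(60), 1))   # ~4.3 knots (the text quotes 4)
print(round(required_speed(30), 1))   # ~7.4 knots (the text quotes 8)
```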
In either case, the masts and railings of the ship were taken down prior to engagement to reduce the opportunities for opponents' grappling hooks.
On-board forces
Unlike the naval warfare of other eras, boarding an enemy ship was not the primary offensive action of triremes. Triremes' small size allowed for a limited number of marines to be carried aboard. During the 5th and 4th centuries, the trireme's strength was in its maneuverability and speed, not its armor or boarding force. That said, fleets less confident in their ability to ram were prone to load more marines onto their ships.
On the deck of a typical trireme in the Peloponnesian War there were 4 or 5 archers and 10 or so marines. These few troops were peripherally effective in an offensive sense, but critical in providing defense for the oarsmen. Should the crew of another trireme board, the marines were all that stood between the enemy troops and the slaughter of the men below. It has also been recorded that if a battle were to take place in the calmer water of a harbor, oarsmen would join the offensive and throw stones (from a stockpile aboard) to aid the marines in harassing/attacking other ships.
Naval strategy in the Peloponnesian War
Squadrons of triremes employed a variety of tactics. The periplous (Gk., "sailing around") involved outflanking or encircling the enemy so as to attack them in the vulnerable rear; the diekplous (Gk., "Sailing out through") involved a concentrated charge so as to break a hole in the enemy line, allowing galleys to break through and then wheel to attack the enemy line from behind; and the kyklos (Gk., "circle") and the mēnoeidēs kyklos (Gk. "half-circle"; literally, "moon-shaped (i.e. crescent-shaped) circle"), were defensive tactics to be employed against these manoeuvres. In all of these manoeuvres, the ability to accelerate faster, row faster, and turn more sharply than one's enemy was very important.
Athens' strength in the Peloponnesian War came from its navy, whereas Sparta's came from its land-based hoplite army. As the war progressed, however, the Spartans came to realize that if they were to undermine Pericles' strategy of outlasting the Peloponnesians by remaining within the walls of Athens indefinitely (a strategy made possible by Athens' Long Walls and fortified port of Piraeus), they would have to do something about Athens' superior naval force. Once Sparta gained Persia as an ally, it had the funds necessary to construct the new fleets needed to combat the Athenians. Sparta was able to build fleet after fleet, eventually destroying the Athenian fleet at the Battle of Aegospotami. The Spartan general Brasidas summed up the difference in approach to naval warfare between the Spartans and the Athenians: "Athenians relied on speed and maneuverability on the open seas to ram at will clumsier ships; in contrast, a Peloponnesian armada might win only when it fought near land in calm and confined waters, had the greater number of ships in a local theater, and if its better-trained marines on deck and hoplites on shore could turn a sea battle into a contest of infantry." In addition, compared with the high finesse of the Athenian navy (superior oarsmen who could outflank and ram enemy triremes from the side), the Spartans (as well as their allies and other enemies of Athens) would focus mainly on ramming Athenian triremes head-on. It was these tactics, in combination with those outlined by Brasidas, that led to the defeat of the Athenian fleet at the Second Battle of Syracuse during the Sicilian Expedition.
Casualties
Once a naval battle was under way, there were numerous ways for the men involved to meet their end. Drowning was perhaps the most common. Once a trireme had been rammed, the panic that engulfed the men trapped below deck no doubt extended the time it took them to escape. Inclement weather would greatly decrease the crew's odds of survival, leading to situations like that off Cape Athos in 411, when only 12 of 10,000 men were saved. An estimated 40,000 Persians died in the Battle of Salamis. In the Peloponnesian War, after the Battle of Arginusae, six Athenian generals were executed for failing to rescue several hundred of their men clinging to wreckage in the water.
If the men did not drown, they might be taken prisoner by the enemy. In the Peloponnesian War, "Sometimes captured crews were brought ashore and either cut down or maimed – often grotesquely, by cutting off the right hand or thumb to guarantee that they could never row again." An image on an early-5th-century black-figure vase, depicting bound prisoners thrown into the sea and pushed and prodded under water with poles and spears, shows that enemy treatment of captured sailors was often brutal. Being speared amid the wreckage of destroyed ships was likely a common cause of death for sailors in the Peloponnesian War.
Naval battles were far more of a spectacle than the hoplite battles on land; sometimes they were watched by thousands of spectators on shore. With the greater spectacle came greater consequences for the outcome of any given battle. Whereas the average rate of fatalities in a land battle was between 10 and 15%, in a sea battle the forces engaged ran the risk of losing their entire fleet. The number of ships and men in battle was sometimes very high: at the Battle of Arginusae, for example, 263 ships were involved, making for a total of 55,000 men, and at the Battle of Aegospotami more than 300 ships and 60,000 seamen. In the Battle of Aegospotami the city-state of Athens lost what was left of its navy: the once 'invincible' thalassocracy lost 170 ships (costing some 400 talents), and most of the crews were killed, captured or lost.
Changes of engagement and construction
During the Hellenistic period, the light trireme was supplanted by larger warships in dominant navies, especially the pentere/quinquereme. The maximum practical number of oar banks a ship could have was three. So the number in the type name did not refer to the banks of oars any more (as for biremes and triremes), but to the number of rowers per vertical section, with several men on each oar. The reason for this development was the increasing use of armour on the bows of warships against ramming attacks, which again required heavier ships for a successful attack. This increased the number of rowers per ship, and also made it possible to use less well-trained personnel for moving these new ships. This change was accompanied by an increased reliance on tactics like boarding, missile skirmishes and using warships as platforms for artillery.
Triremes continued to be the mainstay of all smaller navies. While the Hellenistic kingdoms did develop the quinquereme and even larger ships, most navies of the Greek homeland and the smaller colonies could only afford triremes. They were used by the Diadochi Empires and sea powers like Syracuse, Carthage and later Rome. The difference to the classical 5th century Athenian ships was that they were armoured against ramming and carried significantly more marines. Lightened versions of the trireme and smaller vessels were often used as auxiliaries, and still performed quite effectively against the heavier ships, thanks to their greater manoeuvrability.
With the rise of Rome, the biggest fleet of quinqueremes temporarily ruled the Mediterranean, but during the civil wars after Caesar's death the fleet ended up on the losing side, and a new style of warfare with light liburnians was developed. By Imperial times Rome controlled the entire Mediterranean, and the need to maintain a powerful navy was minimal, since the only enemies it faced were pirates. As a result, the fleet was relatively small and mostly of political importance, controlling the grain supply and fighting pirates, who usually employed light biremes and liburnians. But instead of the liburnians that had proved successful in the civil wars, the fleet was again centred around light triremes, still carrying many marines. Out of this type of ship, the dromon developed.
Reconstruction
In 1985–1987 a shipbuilder in Piraeus, financed by Frank Welsh (an author, Suffolk banker, and trireme enthusiast), advised by historian J. S. Morrison and naval architect John F. Coates (who with Welsh founded the Trireme Trust that initiated and managed the project), and informed by evidence from underwater archaeology, built an Athenian-style trireme, Olympias.
Crewed by 170 volunteer oarsmen, Olympias in 1988 achieved 9 knots (17 km/h or 10.5 mph). These results, achieved with an inexperienced crew, suggest that the ancient writers were not exaggerating about straight-line performance. In addition, Olympias was able to execute a 180-degree turn in one minute, in an arc no wider than two and a half (2.5) ship-lengths. Additional sea trials took place in 1987, 1990, 1992 and 1994. In 2004 Olympias was used ceremonially to transport the Olympic Flame from the port of Keratsini to the main port of Piraeus as the 2004 Olympic Torch Relay entered its final stages in the run-up to the 2004 Summer Olympics opening ceremony.
The builders of the reconstruction project concluded that it effectively proved what had previously been in doubt, i.e., that Athenian triremes were arranged with the crew positioned in a staggered arrangement on three levels with one person per oar. This architecture would have made optimum use of the available internal dimensions. However, since modern humans are on average approximately 6 cm (2 inches) taller than Ancient Greeks (and the same relative dimensions can be presumed for oarsmen and other athletes), the construction of a craft which followed the precise dimensions of the ancient vessel led to cramped rowing conditions and consequent restrictions on the modern crew's ability to propel the vessel with full efficiency, which perhaps explains why the ancient speed records stand unbroken.
| Technology | Naval warfare | null |
31248 | https://en.wikipedia.org/wiki/Travelling%20salesman%20problem | Travelling salesman problem | In the theory of computational complexity, the travelling salesman problem (TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research.
The travelling purchaser problem, the vehicle routing problem and the ring star problem are three generalizations of TSP.
The decision version of the TSP (where given a length L, the task is to decide whether the graph has a tour whose length is at most L) belongs to the class of NP-complete problems. Thus, it is possible that the worst-case running time for any algorithm for the TSP increases superpolynomially (but no more than exponentially) with the number of cities.
The problem was first formulated in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. Even though the problem is computationally difficult, many heuristics and exact algorithms are known, so that some instances with tens of thousands of cities can be solved completely, and even problems with millions of cities can be approximated within a small fraction of 1%.
The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments. The TSP also appears in astronomy, as astronomers observing many sources want to minimize the time spent moving the telescope between the sources; in such problems, the TSP can be embedded inside an optimal control problem. In many applications, additional constraints such as limited resources or time windows may be imposed.
History
The origins of the travelling salesman problem are unclear. A handbook for travelling salesmen from 1832 mentions the problem and includes example tours through Germany and Switzerland, but contains no mathematical treatment.
The TSP was mathematically formulated in the 19th century by the Irish mathematician William Rowan Hamilton and by the British mathematician Thomas Kirkman. Hamilton's icosian game was a recreational puzzle based on finding a Hamiltonian cycle. The general form of the TSP appears to have been first studied by mathematicians during the 1930s in Vienna and at Harvard, notably by Karl Menger, who defined the problem, considered the obvious brute-force algorithm, and observed the non-optimality of the nearest neighbour heuristic.
It was first considered mathematically in the 1930s by Merrill M. Flood, who was looking to solve a school-bus routing problem. Hassler Whitney at Princeton University generated interest in the problem, which he called the "48 states problem". The earliest publication using the phrase "travelling [or traveling] salesman problem" was the 1949 RAND Corporation report by Julia Robinson, "On the Hamiltonian game (a traveling salesman problem)."
In the 1950s and 1960s, the problem became increasingly popular in scientific circles in Europe and the United States after the RAND Corporation in Santa Monica offered prizes for steps in solving the problem. Notable contributions were made by George Dantzig, Delbert Ray Fulkerson, and Selmer M. Johnson from the RAND Corporation, who expressed the problem as an integer linear program and developed the cutting plane method for its solution. They wrote what is considered the seminal paper on the subject in which, with these new methods, they solved an instance with 49 cities to optimality by constructing a tour and proving that no other tour could be shorter. Dantzig, Fulkerson, and Johnson, however, speculated that, given a near-optimal solution, one may be able to find optimality or prove optimality by adding a small number of extra inequalities (cuts). They used this idea to solve their initial 49-city problem using a string model. They found they only needed 26 cuts to come to a solution for their 49 city problem. While this paper did not give an algorithmic approach to TSP problems, the ideas that lay within it were indispensable to later creating exact solution methods for the TSP, though it would take 15 years to find an algorithmic approach in creating these cuts. As well as cutting plane methods, Dantzig, Fulkerson, and Johnson used branch-and-bound algorithms perhaps for the first time.
In 1959, Jillian Beardwood, J.H. Halton, and John Hammersley published an article entitled "The Shortest Path Through Many Points" in the journal of the Cambridge Philosophical Society. The Beardwood–Halton–Hammersley theorem provides a practical solution to the travelling salesman problem. The authors derived an asymptotic formula to determine the length of the shortest route for a salesman who starts at a home or office and visits a fixed number of locations before returning to the start.
In the following decades, the problem was studied by many researchers from mathematics, computer science, chemistry, physics, and other sciences. In the 1960s, however, a new approach was created that, instead of seeking optimal solutions, would produce a solution whose length is provably bounded by a multiple of the optimal length, and in doing so would create lower bounds for the problem; these lower bounds would then be used with branch-and-bound approaches. One method of doing this was to create a minimum spanning tree of the graph and then double all its edges, which produces the bound that the length of an optimal tour is at most twice the weight of a minimum spanning tree.
In 1976, Christofides and Serdyukov (independently of each other) made a big advance in this direction: the Christofides-Serdyukov algorithm yields a solution that, in the worst case, is at most 1.5 times longer than the optimal solution. As the algorithm was simple and quick, many hoped it would give way to a near-optimal solution method. However, this hope for improvement did not immediately materialize, and Christofides-Serdyukov remained the method with the best worst-case scenario until 2011, when a (very) slightly improved approximation algorithm was developed for the subset of "graphical" TSPs. In 2020 this tiny improvement was extended to the full (metric) TSP.
Richard M. Karp showed in 1972 that the Hamiltonian cycle problem was NP-complete, which implies the NP-hardness of TSP. This supplied a mathematical explanation for the apparent computational difficulty of finding optimal tours.
Great progress was made in the late 1970s and 1980s, when Grötschel, Padberg, Rinaldi and others managed to solve exactly instances with up to 2,392 cities, using cutting planes and branch-and-bound.
In the 1990s, Applegate, Bixby, Chvátal, and Cook developed the program Concorde that has been used in many recent record solutions. Gerhard Reinelt published the TSPLIB in 1991, a collection of benchmark instances of varying difficulty, which has been used by many research groups for comparing results. In 2006, Cook and others computed an optimal tour through an 85,900-city instance given by a microchip layout problem, currently the largest solved TSPLIB instance. For many other instances with millions of cities, solutions can be found that are guaranteed to be within 2–3% of an optimal tour.
Description
As a graph problem
TSP can be modeled as an undirected weighted graph, such that cities are the graph's vertices, paths are the graph's edges, and a path's distance is the edge's weight. It is a minimization problem starting and finishing at a specified vertex after having visited each other vertex exactly once. Often, the model is a complete graph (i.e., each pair of vertices is connected by an edge). If no path exists between two cities, then adding a sufficiently long edge will complete the graph without affecting the optimal tour.
Asymmetric and symmetric
In the symmetric TSP, the distance between two cities is the same in each opposite direction, forming an undirected graph. This symmetry halves the number of possible solutions. In the asymmetric TSP, paths may not exist in both directions or the distances might be different, forming a directed graph. Traffic congestion, one-way streets, and airfares for cities with different departure and arrival fees are real-world considerations that could yield a TSP problem in asymmetric form.
Related problems
An equivalent formulation in terms of graph theory is: Given a complete weighted graph (where the vertices would represent the cities, the edges would represent the roads, and the weights would be the cost or distance of that road), find a Hamiltonian cycle with the least weight. This is more general than the Hamiltonian path problem, which only asks if a Hamiltonian path (or cycle) exists in a non-complete unweighted graph.
The requirement of returning to the starting city does not change the computational complexity of the problem; see Hamiltonian path problem.
Another related problem is the bottleneck travelling salesman problem: Find a Hamiltonian cycle in a weighted graph with the minimal weight of the weightiest edge. A real-world example is avoiding narrow streets with big buses. The problem is of considerable practical importance, apart from evident transportation and logistics areas. A classic example is in printed circuit manufacturing: scheduling of a route of the drill machine to drill holes in a PCB. In robotic machining or drilling applications, the "cities" are parts to machine or holes (of different sizes) to drill, and the "cost of travel" includes time for retooling the robot (single-machine job sequencing problem).
The generalized travelling salesman problem, also known as the "travelling politician problem", deals with "states" that have (one or more) "cities", and the salesman must visit exactly one city from each state. One application is encountered in ordering a solution to the cutting stock problem in order to minimize knife changes. Another is concerned with drilling in semiconductor manufacturing; see e.g., . Noon and Bean demonstrated that the generalized travelling salesman problem can be transformed into a standard TSP with the same number of cities, but a modified distance matrix.
The sequential ordering problem deals with the problem of visiting a set of cities, where precedence relations between the cities exist.
A common interview question at Google is how to route data among data processing nodes; routes vary by time to transfer the data, but nodes also differ by their computing power and storage, compounding the problem of where to send data.
The travelling purchaser problem deals with a purchaser who is charged with purchasing a set of products. He can purchase these products in several cities, but at different prices, and not all cities offer the same products. The objective is to find a route between a subset of the cities that minimizes total cost (travel cost + purchasing cost) and enables the purchase of all required products.
Integer linear programming formulations
The TSP can be formulated as an integer linear program. Several formulations are known. Two notable formulations are the Miller–Tucker–Zemlin (MTZ) formulation and the Dantzig–Fulkerson–Johnson (DFJ) formulation. The DFJ formulation is stronger, though the MTZ formulation is still useful in certain settings.
Common to both these formulations is that one labels the cities with the numbers $1, \ldots, n$ and takes $c_{ij} > 0$ to be the cost (distance) from city $i$ to city $j$. The main variables in the formulations are:

$x_{ij} = 1$ if the tour includes the edge from city $i$ to city $j$, and $x_{ij} = 0$ otherwise.
It is because these are 0/1 variables that the formulations become integer programs; all other constraints are purely linear. In particular, the objective in the program is to minimize the tour length

$\sum_{i=1}^{n} \sum_{j \neq i,\, j=1}^{n} c_{ij} x_{ij}$.
Without further constraints, the $x_{ij}$ will effectively range over all subsets of the set of edges, which is very far from the sets of edges in a tour, and allows for a trivial minimum where all $x_{ij} = 0$. Therefore, both formulations also have the constraints that, at each vertex, there is exactly one incoming edge and one outgoing edge, which may be expressed as the $2n$ linear equations

$\sum_{i=1,\, i \neq j}^{n} x_{ij} = 1$ for $j = 1, \ldots, n$ and $\sum_{j=1,\, j \neq i}^{n} x_{ij} = 1$ for $i = 1, \ldots, n$.
These ensure that the chosen set of edges locally looks like that of a tour, but still allow for solutions violating the global requirement that there is one tour which visits all vertices, as the edges chosen could make up several tours, each visiting only a subset of the vertices; arguably, it is this global requirement that makes TSP a hard problem. The MTZ and DFJ formulations differ in how they express this final requirement as linear constraints.
Miller–Tucker–Zemlin formulation
In addition to the $x_{ij}$ variables as above, there is for each $i = 2, \ldots, n$ a dummy variable $u_i$ that keeps track of the order in which the cities are visited, counting from city $1$; the interpretation is that $u_i < u_j$ implies city $i$ is visited before city $j$. For a given tour (as encoded into values of the $x_{ij}$ variables), one may find satisfying values for the $u_i$ variables by making $u_i$ equal to the number of edges along that tour when going from city $1$ to city $i$.
Because linear programming favours non-strict inequalities ($\ge$) over strict ones ($>$), we would like to impose constraints to the effect that

$u_j \ge u_i + 1$ if $x_{ij} = 1$.

Merely requiring $u_j \ge u_i + 1$ would not achieve that, because this also requires $u_j \ge u_i + 1$ when $x_{ij} = 0$, which is not correct. Instead, MTZ use the linear constraints

$u_i - u_j + 1 \le (n-1)(1 - x_{ij})$ for all distinct $i, j \in \{2, \dotsc, n\}$,

where the constant term $n-1$ provides sufficient slack that $x_{ij} = 0$ does not impose a relation between $u_j$ and $u_i$.
The way that the $u_i$ variables then enforce that a single tour visits all cities is that they increase by at least $1$ for each step along the tour, with a decrease only allowed where the tour passes through city $1$. That constraint would be violated by every tour which does not pass through city $1$, so the only way to satisfy it is that the tour passing through city $1$ also passes through all other cities.
The MTZ formulation of TSP is thus the following integer linear programming problem:

$$\begin{aligned}
\min \sum_{i=1}^{n} \sum_{j \neq i,\, j=1}^{n} c_{ij} x_{ij} \colon \quad
& x_{ij} \in \{0, 1\} && i, j = 1, \ldots, n; \\
& \sum_{i=1,\, i \neq j}^{n} x_{ij} = 1 && j = 1, \ldots, n; \\
& \sum_{j=1,\, j \neq i}^{n} x_{ij} = 1 && i = 1, \ldots, n; \\
& u_i - u_j + 1 \le (n-1)(1 - x_{ij}) && 2 \le i \neq j \le n; \\
& 2 \le u_i \le n && 2 \le i \le n.
\end{aligned}$$
The first set of equalities requires that each city is arrived at from exactly one other city, and the second set of equalities requires that from each city there is a departure to exactly one other city. The last constraint enforces that there is only a single tour covering all cities, and not two or more disjointed tours that only collectively cover all cities.
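As an illustration, the MTZ formulation can be transcribed almost verbatim into an off-the-shelf modelling library. A minimal sketch using the PuLP library for Python (the library choice and the five-city distance matrix are assumptions of this example, not part of the formulation):

```python
# MTZ formulation of a small TSP instance with PuLP (assumed installed).
import itertools
import pulp

dist = [            # illustrative symmetric 5-city distance matrix
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
n = len(dist)
V = range(n)

prob = pulp.LpProblem("tsp_mtz", pulp.LpMinimize)
# x[i][j] = 1 if the tour goes directly from city i to city j
x = pulp.LpVariable.dicts("x", (V, V), cat=pulp.LpBinary)
# u[i] tracks the position of city i in the tour (city 0 is the start)
u = pulp.LpVariable.dicts("u", range(1, n), lowBound=2, upBound=n,
                          cat=pulp.LpInteger)

# objective: total tour length
prob += pulp.lpSum(dist[i][j] * x[i][j] for i in V for j in V if i != j)
# degree constraints: one incoming and one outgoing edge per city
for j in V:
    prob += pulp.lpSum(x[i][j] for i in V if i != j) == 1
    prob += pulp.lpSum(x[j][k] for k in V if k != j) == 1
# MTZ subtour-elimination constraints
for i, j in itertools.permutations(range(1, n), 2):
    prob += u[i] - u[j] + 1 <= (n - 1) * (1 - x[i][j])

prob.solve()
succ = {i: j for i in V for j in V if i != j and pulp.value(x[i][j]) > 0.5}
print(pulp.value(prob.objective), succ)
```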
Dantzig–Fulkerson–Johnson formulation
Label the cities with the numbers $1, \ldots, n$ and define:

$$x_{ij} = \begin{cases} 1 & \text{the path goes from city } i \text{ to city } j, \\ 0 & \text{otherwise.} \end{cases}$$
Take $c_{ij} > 0$ to be the distance from city $i$ to city $j$. Then TSP can be written as the following integer linear programming problem:

$$\begin{aligned}
\min & \sum_{i=1}^{n} \sum_{j \neq i,\, j=1}^{n} c_{ij} x_{ij} \colon \\
& x_{ij} \in \{0, 1\} && i, j = 1, \ldots, n; \\
& \sum_{i=1,\, i \neq j}^{n} x_{ij} = 1 && j = 1, \ldots, n; \\
& \sum_{j=1,\, j \neq i}^{n} x_{ij} = 1 && i = 1, \ldots, n; \\
& \sum_{i \in Q} \sum_{j \neq i,\, j \in Q} x_{ij} \le |Q| - 1 && \forall Q \subsetneq \{1, \ldots, n\},\, |Q| \ge 2.
\end{aligned}$$
The last constraint of the DFJ formulation—called a subtour elimination constraint—ensures that no proper subset Q can form a sub-tour, so the solution returned is a single tour and not the union of smaller tours. Intuitively, for each proper subset Q of the cities, the constraint requires that there be fewer edges than cities in Q: if there were to be as many edges in Q as cities in Q, that would represent a subtour of the cities of Q. Because this leads to an exponential number of possible constraints, in practice it is solved with row generation.
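In code, row generation amounts to solving the program with only the degree constraints, finding a subtour in the resulting integer solution, adding the violated subtour-elimination constraint, and re-solving. A minimal sketch, again assuming the PuLP library and a distance matrix given as a list of lists:

```python
# DFJ formulation with lazy subtour elimination (row generation) in PuLP.
import pulp

def solve_tsp_dfj(dist):
    n = len(dist)
    V = range(n)
    prob = pulp.LpProblem("tsp_dfj", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (V, V), cat=pulp.LpBinary)
    prob += pulp.lpSum(dist[i][j] * x[i][j] for i in V for j in V if i != j)
    for i in V:  # degree constraints only, to begin with
        prob += pulp.lpSum(x[i][j] for j in V if j != i) == 1
        prob += pulp.lpSum(x[j][i] for j in V if j != i) == 1
    while True:
        prob.solve()
        succ = {i: j for i in V for j in V
                if i != j and pulp.value(x[i][j]) > 0.5}
        tour, j = [0], succ[0]          # trace the cycle containing city 0
        while j != 0:
            tour.append(j)
            j = succ[j]
        if len(tour) == n:              # a single tour: done
            return tour
        Q = tour                        # otherwise, eliminate this subtour
        prob += pulp.lpSum(x[i][j] for i in Q for j in Q if i != j) <= len(Q) - 1
```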
Computing a solution
The traditional lines of attack for the NP-hard problems are the following:
Devising exact algorithms, which work reasonably fast only for small problem sizes.
Devising "suboptimal" or heuristic algorithms, i.e., algorithms that deliver approximated solutions in a reasonable time.
Finding special cases for the problem ("subproblems") for which either better or exact heuristics are possible.
Exact algorithms
The most direct solution would be to try all permutations (ordered combinations) and see which one is cheapest (using brute-force search). The running time for this approach lies within a polynomial factor of $O(n!)$, the factorial of the number of cities, so this solution becomes impractical even for only 20 cities.
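A direct transcription of this brute-force search (a sketch only; `dist` is assumed to be a distance matrix given as a list of lists):

```python
# Brute-force TSP: check all (n-1)! tours that start and end at city 0.
from itertools import permutations

def brute_force_tsp(dist):
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour
```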
One of the earliest applications of dynamic programming is the Held–Karp algorithm, which solves the problem in time $O(n^2 2^n)$. This bound has also been reached by exclusion–inclusion, in an attempt preceding the dynamic programming approach.
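A compact sketch of the Held–Karp dynamic program; states are indexed by the set of visited cities and the final city (frozensets are used here for readability, though a bitmask version would be faster):

```python
# Held-Karp: dp[(S, j)] is the length of the shortest path that starts at
# city 0, visits exactly the cities in S, and ends at city j in S.
from itertools import combinations

def held_karp(dist):
    n = len(dist)
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                dp[(S, j)] = min(dp[(S - {j}, k)] + dist[k][j]
                                 for k in S - {j})
    full = frozenset(range(1, n))
    # close the tour by returning to city 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```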
Improving these time bounds seems to be difficult. For example, it has not been determined whether a classical exact algorithm for TSP that runs in time $O(1.9999^n)$ exists. The best quantum exact algorithm for TSP currently known, due to Ambainis et al., runs in time $O(1.728^n)$.
Other approaches include:
Various branch-and-bound algorithms, which can be used to process TSPs containing thousands of cities.
Progressive improvement algorithms, which use techniques reminiscent of linear programming. This works well for up to 200 cities.
Implementations of branch-and-bound and problem-specific cut generation (branch-and-cut); this is the method of choice for solving large instances. This approach holds the current record, solving an instance with 85,900 cities, see .
An exact solution for 15,112 German towns from TSPLIB was found in 2001 using the cutting-plane method proposed by George Dantzig, Ray Fulkerson, and Selmer M. Johnson in 1954, based on linear programming. The computations were performed on a network of 110 processors located at Rice University and Princeton University. The total computation time was equivalent to 22.6 years on a single 500 MHz Alpha processor. In May 2004, the travelling salesman problem of visiting all 24,978 towns in Sweden was solved: a tour of length approximately 72,500 kilometres was found, and it was proven that no shorter tour exists. In March 2005, the travelling salesman problem of visiting all 33,810 points in a circuit board was solved using Concorde TSP Solver: a tour of length 66,048,945 units was found, and it was proven that no shorter tour exists. The computation took approximately 15.7 CPU-years (Cook et al. 2006). In April 2006 an instance with 85,900 points was solved using Concorde TSP Solver, taking over 136 CPU-years; see .
Heuristic and approximation algorithms
Various heuristics and approximation algorithms, which quickly yield good solutions, have been devised. These include the multi-fragment algorithm. Modern methods can find solutions for extremely large problems (millions of cities) within a reasonable time which are, with a high probability, just 2–3% away from the optimal solution.
Several categories of heuristics are recognized.
Constructive heuristics
The nearest neighbour (NN) algorithm (a greedy algorithm) lets the salesman choose the nearest unvisited city as his next move. This algorithm quickly yields an effectively short route. For N cities randomly distributed on a plane, the algorithm on average yields a path 25% longer than the shortest possible path; however, there exist many specially arranged city distributions which make the NN algorithm give the worst route. This is true for both asymmetric and symmetric TSPs. Rosenkrantz et al. showed that the NN algorithm has the approximation factor $\Theta(\log |V|)$ for instances satisfying the triangle inequality. A variation of the NN algorithm, called the nearest fragment (NF) operator, which connects a group (fragment) of nearest unvisited cities, can find shorter routes with successive iterations. The NF operator can also be applied to an initial solution obtained by the NN algorithm for further improvement in an elitist model, where only better solutions are accepted.
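A sketch of the nearest-neighbour construction (`dist` is an assumed distance matrix; the returned tour is closed):

```python
# Nearest neighbour: from the current city, always move to the closest
# not-yet-visited city, then return to the start.
def nearest_neighbour(dist, start=0):
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[tour[-1]][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour + [start]
```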
The bitonic tour of a set of points is the minimum-perimeter monotone polygon that has the points as its vertices; it can be computed efficiently with dynamic programming.
Another constructive heuristic, Match Twice and Stitch (MTS), performs two sequential matchings, where the second matching is executed after deleting all the edges of the first matching, to yield a set of cycles. The cycles are then stitched to produce the final tour.
The Algorithm of Christofides and Serdyukov
The algorithm of Christofides and Serdyukov follows a similar outline but combines the minimum spanning tree with a solution of another problem, minimum-weight perfect matching. This gives a TSP tour which is at most 1.5 times the optimal. It was one of the first approximation algorithms, and was in part responsible for drawing attention to approximation algorithms as a practical approach to intractable problems. As a matter of fact, the term "algorithm" was not commonly extended to approximation algorithms until later; the Christofides algorithm was initially referred to as the Christofides heuristic.
This algorithm looks at things differently, by using a result from graph theory which helps improve on the lower bound of the TSP that originated from doubling the cost of the minimum spanning tree. Given an Eulerian graph, we can find an Eulerian tour in $O(n)$ time, so if we had an Eulerian graph with the cities from a TSP as vertices, we could use such a method for finding an Eulerian tour to find a TSP solution. By the triangle inequality, we know that the TSP tour can be no longer than the Eulerian tour, and we therefore have a lower bound for the TSP. Such a method is described below.
Find a minimum spanning tree for the problem.
Create duplicates for every edge to create an Eulerian graph.
Find an Eulerian tour for this graph.
Convert to TSP: if a city is visited twice, then create a shortcut from the city before this in the tour to the one after this.
To improve the lower bound, a better way of creating an Eulerian graph is needed. By the triangle inequality, the best Eulerian graph must have the same cost as the best travelling salesman tour; hence, finding optimal Eulerian graphs is at least as hard as TSP. One way of doing this is by minimum-weight matching, using algorithms with a complexity of $O(n^3)$.
Making a graph into an Eulerian graph starts with the minimum spanning tree; all the vertices of odd order must then be made even, so a matching for the odd-degree vertices must be added, which increases the order of every odd-degree vertex by 1. This leaves us with a graph where every vertex is of even order, which is thus Eulerian. Adapting the above method gives the algorithm of Christofides and Serdyukov:
Find a minimum spanning tree for the problem.
Create a matching for the problem with the set of cities of odd order.
Find an Eulerian tour for this graph.
Convert to TSP using shortcuts.
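These steps can be sketched with the networkx graph library (an assumption of this example; `nx.min_weight_matching` is relied on to return a perfect matching, which it does on a complete graph over an even number of vertices, and `dist` is a complete metric distance matrix):

```python
# Christofides-Serdyukov sketch: MST + minimum-weight matching on the
# odd-degree vertices + Eulerian circuit + shortcutting.
import networkx as nx

def christofides(dist):
    n = len(dist)
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=dist[i][j])
    mst = nx.minimum_spanning_tree(G)
    odd = [v for v, deg in mst.degree() if deg % 2 == 1]
    matching = nx.min_weight_matching(G.subgraph(odd))
    # MST plus matching: every vertex now has even degree, so the
    # multigraph is Eulerian.
    H = nx.MultiGraph(mst)
    H.add_edges_from(matching)
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(H, source=0):
        if u not in seen:        # shortcut cities already visited
            seen.add(u)
            tour.append(u)
    return tour + [0]
```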
Pairwise exchange
The pairwise exchange or 2-opt technique involves iteratively removing two edges and replacing them with two different edges that reconnect the fragments created by edge removal into a new and shorter tour. Similarly, the 3-opt technique removes 3 edges and reconnects them to form a shorter tour. These are special cases of the k-opt method. The label Lin–Kernighan is an often heard misnomer for 2-opt; Lin–Kernighan is actually the more general k-opt method.
For Euclidean instances, 2-opt heuristics give on average solutions that are about 5% better than those yielded by Christofides' algorithm. If we start with an initial solution made with a greedy algorithm, then the average number of moves greatly decreases again and is $O(n)$; however, for random starts, the average number of moves is $O(n \log n)$. While this is only a small increase in size, the initial number of moves for small problems is 10 times as big for a random start compared to one made from a greedy heuristic. This is because such 2-opt heuristics exploit 'bad' parts of a solution, such as crossings. These types of heuristics are often used within vehicle routing problem heuristics to re-optimize route solutions.
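A minimal sketch of 2-opt local search on a closed tour (a list that starts and ends at the same city, with an assumed distance matrix):

```python
# 2-opt: reverse the segment tour[i..j] whenever replacing edges
# (i-1, i) and (j, j+1) with (i-1, j) and (i, j+1) shortens the tour.
def two_opt(tour, dist):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                delta = (dist[tour[i - 1]][tour[j]]
                         + dist[tour[i]][tour[j + 1]]
                         - dist[tour[i - 1]][tour[i]]
                         - dist[tour[j]][tour[j + 1]])
                if delta < 0:
                    tour[i:j + 1] = tour[i:j + 1][::-1]
                    improved = True
    return tour
```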
k-opt heuristic, or Lin–Kernighan heuristics
The Lin–Kernighan heuristic is a special case of the V-opt or variable-opt technique. It involves the following steps:
Given a tour, delete k mutually disjoint edges.
Reassemble the remaining fragments into a tour, leaving no disjoint subtours (that is, do not connect a fragment's endpoints together). This in effect simplifies the TSP under consideration into a much simpler problem.
Each fragment endpoint can be connected to $2k - 2$ other possibilities: of the $2k$ total fragment endpoints available, the two endpoints of the fragment under consideration are disallowed. Such a constrained $2k$-city TSP can then be solved with brute-force methods to find the least-cost recombination of the original fragments.
The most popular of the k-opt methods are 3-opt, as introduced by Shen Lin of Bell Labs in 1965. A special case of 3-opt is where the edges are not disjoint (two of the edges are adjacent to one another). In practice, it is often possible to achieve substantial improvement over 2-opt without the combinatorial cost of the general 3-opt by restricting the 3-changes to this special subset where two of the removed edges are adjacent. This so-called two-and-a-half-opt typically falls roughly midway between 2-opt and 3-opt, both in terms of the quality of tours achieved and the time required to achieve those tours.
V-opt heuristic
The variable-opt method is related to, and a generalization of, the k-opt method. Whereas the k-opt methods remove a fixed number (k) of edges from the original tour, the variable-opt methods do not fix the size of the edge set to remove. Instead, they grow the set as the search process continues. The best-known method in this family is the Lin–Kernighan method (mentioned above as a misnomer for 2-opt). Shen Lin and Brian Kernighan first published their method in 1972, and it was the most reliable heuristic for solving travelling salesman problems for nearly two decades. More advanced variable-opt methods were developed at Bell Labs in the late 1980s by David Johnson and his research team. These methods (sometimes called Lin–Kernighan–Johnson) build on the Lin–Kernighan method, adding ideas from tabu search and evolutionary computing. The basic Lin–Kernighan technique gives results that are guaranteed to be at least 3-opt. The Lin–Kernighan–Johnson methods compute a Lin–Kernighan tour, and then perturb the tour by what has been described as a mutation that removes at least four edges and reconnects the tour in a different way, then V-opting the new tour. The mutation is often enough to move the tour from the local minimum identified by Lin–Kernighan. V-opt methods are widely considered the most powerful heuristics for the problem, and are able to address special cases, such as the Hamilton Cycle Problem and other non-metric TSPs that other heuristics fail on. For many years, Lin–Kernighan–Johnson had identified optimal solutions for all TSPs where an optimal solution was known and had identified the best-known solutions for all other TSPs on which the method had been tried.
Randomized improvement
Optimized Markov chain algorithms which use local searching heuristic sub-algorithms can find a route extremely close to the optimal route for 700 to 800 cities.
TSP is a touchstone for many general heuristics devised for combinatorial optimization such as genetic algorithms, simulated annealing, tabu search, ant colony optimization, river formation dynamics (see swarm intelligence), and the cross entropy method.
Constricting Insertion Heuristic
This starts with a sub-tour such as the convex hull and then inserts other vertices.
Ant colony optimization
Artificial intelligence researcher Marco Dorigo described in 1993 a method of heuristically generating "good solutions" to the TSP using a simulation of an ant colony called ACS (ant colony system). It models behavior observed in real ants to find short paths between food sources and their nest, an emergent behavior resulting from each ant's preference to follow trail pheromones deposited by other ants.
ACS sends out a large number of virtual ant agents to explore many possible routes on the map. Each ant probabilistically chooses the next city to visit based on a heuristic combining the distance to the city and the amount of virtual pheromone deposited on the edge to the city. The ants explore, depositing pheromone on each edge that they cross, until they have all completed a tour. At this point the ant which completed the shortest tour deposits virtual pheromone along its complete tour route (global trail updating). The amount of pheromone deposited is inversely proportional to the tour length: the shorter the tour, the more it deposits.
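The scheme lends itself to a short simulation. A toy sketch (the parameter values and the exact update rule are illustrative assumptions, not Dorigo's published ACS settings):

```python
# Toy ant colony for TSP: each ant picks the next city with probability
# proportional to pheromone^alpha * (1/distance)^beta; after each round,
# pheromone evaporates and the best tour found so far is reinforced in
# inverse proportion to its length.
import random

def aco_tsp(dist, n_ants=20, n_iters=200, alpha=1.0, beta=3.0, rho=0.1):
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]               # pheromone levels
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i, cand = tour[-1], list(unvisited)
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                nxt = random.choices(cand, weights=w)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[a][b]
                         for a, b in zip(tour, tour[1:] + tour[:1]))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                            # evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
            tau[a][b] += 1.0 / best_len               # global trail update
            tau[b][a] += 1.0 / best_len
    return best_tour, best_len
```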
Special cases
Metric
In the metric TSP, also known as delta-TSP or Δ-TSP, the intercity distances satisfy the triangle inequality.
A very natural restriction of the TSP is to require that the distances between cities form a metric satisfying the triangle inequality; that is, the direct connection from $A$ to $B$ is never farther than the route via an intermediate $C$:

$d_{AB} \le d_{AC} + d_{CB}$.
The edges then build a metric on the set of vertices. When the cities are viewed as points in the plane, many natural distance functions are metrics, and so many natural instances of TSP satisfy this constraint.
The following are some examples of metric TSPs for various metrics.
In the Euclidean TSP (see below), the distance between two cities is the Euclidean distance between the corresponding points.
In the rectilinear TSP, the distance between two cities is the sum of the absolute values of the differences of their x- and y-coordinates. This metric is often called the Manhattan distance or city-block metric.
In the maximum metric, the distance between two points is the maximum of the absolute values of differences of their x- and y-coordinates.
The last two metrics appear, for example, in routing a machine that drills a given set of holes in a printed circuit board. The Manhattan metric corresponds to a machine that adjusts first one coordinate, and then the other, so the time to move to a new point is the sum of both movements. The maximum metric corresponds to a machine that adjusts both coordinates simultaneously, so the time to move to a new point is the slower of the two movements.
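For concreteness, the two machine metrics can be written out directly (a trivial sketch; points are (x, y) tuples):

```python
# Manhattan metric: the axes move one after the other, so the times add.
def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# Maximum (Chebyshev) metric: the axes move simultaneously, so the time
# is set by the slower of the two movements.
def maximum(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```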
In its definition, the TSP does not allow cities to be visited twice, but many applications do not need this constraint. In such cases, a symmetric, non-metric instance can be reduced to a metric one. This replaces the original graph with a complete graph in which the inter-city distance is replaced by the shortest path length between A and B in the original graph.
Euclidean
For points in the Euclidean plane, the optimal solution to the travelling salesman problem forms a simple polygon through all of the points, a polygonalization of the points. Any non-optimal solution with crossings can be made into a shorter solution without crossings by local optimizations. The Euclidean distance obeys the triangle inequality, so the Euclidean TSP forms a special case of metric TSP. However, even when the input points have integer coordinates, their distances generally take the form of square roots, and the length of a tour is a sum of radicals, making it difficult to perform the symbolic computation needed to perform exact comparisons of the lengths of different tours.
Like the general TSP, the exact Euclidean TSP is NP-hard, but the issue with sums of radicals is an obstacle to proving that its decision version is in NP, and therefore NP-complete. A discretized version of the problem with distances rounded to integers is NP-complete. With rational coordinates and the actual Euclidean metric, Euclidean TSP is known to be in the Counting Hierarchy, a subclass of PSPACE. With arbitrary real coordinates, Euclidean TSP cannot be in such classes, since there are uncountably many possible inputs. Despite these complications, Euclidean TSP is much easier than the general metric case for approximation. For example, the minimum spanning tree of the graph associated with an instance of the Euclidean TSP is a Euclidean minimum spanning tree, and so can be computed in expected O(n log n) time for n points (considerably less than the number of edges). This enables the simple 2-approximation algorithm for TSP with triangle inequality above to operate more quickly.
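The MST-based 2-approximation alluded to above can be sketched as follows (the double-tree method: build a minimum spanning tree, then shortcut a depth-first traversal; the function name is illustrative, and a simple O(n²) Prim's algorithm stands in for the faster geometric MST computation):

```python
import math

def mst_two_approx(points):
    """Double-tree 2-approximation for Euclidean/metric TSP: visit the
    vertices of a minimum spanning tree in DFS preorder. By the
    triangle inequality the shortcut tour costs at most twice the MST
    weight, hence at most twice the optimal tour."""
    n = len(points)
    d = lambda i, j: math.dist(points[i], points[j])
    # Prim's algorithm, O(n^2)
    in_tree = [False] * n
    best = [math.inf] * n
    parent = [-1] * n
    best[0] = 0.0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and d(u, v) < best[v]:
                best[v], parent[v] = d(u, v), u
    # DFS preorder of the tree = tour with shortcuts
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour
```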
In general, for any $c > 0$, where $d$ is the number of dimensions in the Euclidean space, there is a polynomial-time algorithm that finds a tour of length at most $(1 + 1/c)$ times the optimal for geometric instances of TSP in
$O\!\left(n (\log n)^{(O(c\sqrt{d}))^{d-1}}\right)$
time; this is called a polynomial-time approximation scheme (PTAS). Sanjeev Arora and Joseph S. B. Mitchell were awarded the Gödel Prize in 2010 for their concurrent discovery of a PTAS for the Euclidean TSP.
In practice, simpler heuristics with weaker guarantees continue to be used.
Asymmetric
In most cases, the distance between two nodes in the TSP network is the same in both directions. The case where the distance from A to B is not equal to the distance from B to A is called asymmetric TSP. A practical application of an asymmetric TSP is route optimization using street-level routing (which is made asymmetric by one-way streets, slip-roads, motorways, etc.).
The stacker crane problem can be viewed as a special case of the asymmetric TSP. In this problem, the input consists of ordered pairs of points in a metric space, which must be visited consecutively in order by the tour. These pairs of points can be viewed as the nodes of an asymmetric TSP, with asymmetric distances reflecting the combined cost of traveling from the first point of a pair to its second and then from the second point of a pair to the first point of the next pair.
Conversion to symmetric
Solving an asymmetric TSP graph can be somewhat complex. The following is a 3×3 matrix containing all possible path weights between the nodes A, B and C. One option is to turn an asymmetric matrix of size N into a symmetric matrix of size 2N.
{| class="wikitable"
|- style="text-align:center;"
|+ Asymmetric path weights
! !! A !! B !! C
|- style="text-align:center;"
! A
| || 1 || 2
|- style="text-align:center;"
! B
| 6 || || 3
|- style="text-align:center;"
! C
| 5 || 4 ||
|}
To double the size, each of the nodes in the graph is duplicated, creating a second ghost node, linked to the original node with a "ghost" edge of very low (possibly negative) weight, here denoted −w. (Alternatively, the ghost edges have weight 0, and weight w is added to all other edges.) The original 3×3 matrix shown above is visible in the bottom left and the transpose of the original in the top-right. Both copies of the matrix have had their diagonals replaced by the low-cost hop paths, represented by −w. In the new graph, no edge directly links original nodes and no edge directly links ghost nodes.
{| class="wikitable"
|- style="text-align:center;"
|+ Symmetric path weights
! !! A !! B !! C !! A′ !! B′ !! C′
|- style="text-align:center;"
! A
| || || || −w || 6 || 5
|- style="text-align:center;"
! B
| || || || 1 || −w || 4
|- style="text-align:center;"
! C
| || || || 2 || 3 || −w
|- style="text-align:center;"
! A′
| −w || 1 || 2 || || ||
|- style="text-align:center;"
! B′
| 6 || −w || 3 || || ||
|- style="text-align:center;"
! C′
| 5 || 4 || −w || || ||
|}
The weight −w of the "ghost" edges linking the ghost nodes to the corresponding original nodes must be low enough to ensure that all ghost edges must belong to any optimal symmetric TSP solution on the new graph (w = 0 is not always low enough). As a consequence, in the optimal symmetric tour, each original node appears next to its ghost node (e.g. a possible path is A → A′ → B → B′ → C → C′ → A), and by merging the original and ghost nodes again we get an (optimal) solution of the original asymmetric problem (in our example, A → B → C → A).
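A sketch of the doubling transformation under the assumptions above; symmetrize is a hypothetical helper, and the default w (one more than the total of all finite weights) is simply one value that is safely "low enough":

```python
import math

def symmetrize(dist, w=None):
    """Convert an N-node asymmetric TSP matrix into a symmetric
    2N-node matrix. Node i's ghost is node N+i; each node is tied to
    its ghost by an edge of weight -w, and entry dist[i][j] becomes
    the weight between ghost(i) and j. math.inf marks the forbidden
    original-original and ghost-ghost edges."""
    n = len(dist)
    if w is None:
        # One safely "low enough" choice: exceed the total of all weights.
        w = 1 + sum(dist[i][j] for i in range(n)
                    for j in range(n) if i != j)
    m = [[math.inf] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        m[i][n + i] = m[n + i][i] = -w          # ghost edge
        for j in range(n):
            if i != j:
                m[n + i][j] = m[j][n + i] = dist[i][j]
    return m
```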
Analyst's problem
There is an analogous problem in geometric measure theory which asks the following: under what conditions may a subset E of Euclidean space be contained in a rectifiable curve (that is, when is there a curve with finite length that visits every point in E)? This problem is known as the analyst's travelling salesman problem.
Path length for random sets of points in a square
Suppose $X_1, \ldots, X_n$ are independent random variables with uniform distribution in the square $[0,1]^2$, and let $L^*_n$ be the shortest path length (i.e. TSP solution) for this set of points, according to the usual Euclidean distance. It is known that, almost surely,
$$\lim_{n\to\infty} \frac{L^*_n}{\sqrt{n}} = \beta,$$
where $\beta$ is a positive constant that is not known explicitly. Since $L^*_n \le 2\sqrt{n} + 2$ (see below), it follows from the bounded convergence theorem that $\beta = \lim_{n\to\infty} \mathbb{E}[L^*_n]/\sqrt{n}$, hence lower and upper bounds on $\beta$ follow from bounds on $\mathbb{E}[L^*_n]$.
The almost-sure limit $\lim_{n\to\infty} L^*_n/\sqrt{n}$ may not exist if the independent locations $X_1, \ldots, X_n$ are replaced with observations from a stationary ergodic process with uniform marginals.
Upper bound
One has $L^*_n \le 2\sqrt{n} + 2$, and therefore $\beta \le 2$, by using a naïve path which visits monotonically the points inside each of $\sqrt{n}$ slices of width $1/\sqrt{n}$ in the square.
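A rough sketch of the strip argument (bookkeeping of the constants is only approximate, using the fact that each Euclidean step is at most the sum of its horizontal and vertical displacements):

```latex
% Sweep \lceil\sqrt{n}\rceil vertical strips of width 1/\sqrt{n}
% boustrophedon, visiting each strip's points monotonically in y.
\[
L^*_n \;\le\; \underbrace{\textstyle\sum \Delta y}_{\le\, \sqrt{n}}
      \;+\; \underbrace{\textstyle\sum \Delta x}_{\le\, n \cdot 1/\sqrt{n}}
      \;+\; 2
      \;=\; 2\sqrt{n} + 2,
\]
% where the final +2 accounts for closing the tour back to its start.
```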
Few proved $L^*_n \le \sqrt{2n} + 1.75$, hence $\beta \le \sqrt{2}$, later improved by Karloff (1987): $\beta \le 0.984\sqrt{2}$.
Fietcher empirically suggested an upper bound of $\beta \le 0.73\ldots$.
Lower bound
By observing that $\mathbb{E}[L^*_n]$ is greater than $n$ times the expected distance between $X_0$ and the closest point $X_i \ne X_0$, one gets (after a short computation)
$$\mathbb{E}[L^*_n] \ge \tfrac{1}{2}\sqrt{n}.$$
A better lower bound is obtained by observing that $\mathbb{E}[L^*_n]$ is greater than $\tfrac{1}{2}n$ times the expected sum of the distances between $X_0$ and the closest and second-closest points $X_i \ne X_0$, which gives
$$\mathbb{E}[L^*_n] \ge \left(\tfrac{1}{4} + \tfrac{3}{8}\right)\sqrt{n} = \tfrac{5}{8}\sqrt{n}.$$
The currently-best lower bound is
Held and Karp gave a polynomial-time algorithm that provides numerical lower bounds for $L^*_n$, and thus for $\beta$, which seem to be good up to more or less 1%. In particular, David S. Johnson obtained a lower bound by computer experiment:
$$L^*_n \gtrsim 0.7080\sqrt{n} + 0.522,$$
where 0.522 comes from the points near the square boundary which have fewer neighbours, and Christine L. Valenzuela and Antonia J. Jones obtained the following other numerical lower bound:
$$L^*_n \gtrsim 0.7078\sqrt{n} + 0.551.$$
Computational complexity
The problem has been shown to be NP-hard (more precisely, it is complete for the complexity class $\mathrm{FP}^{\mathrm{NP}}$; see function problem), and the decision problem version ("given the costs and a number x, decide whether there is a round-trip route cheaper than x") is NP-complete. The bottleneck travelling salesman problem is also NP-hard. The problem remains NP-hard even for the case when the cities are in the plane with Euclidean distances, as well as in a number of other restrictive cases. Removing the condition of visiting each city "only once" does not remove the NP-hardness, since in the planar case there is an optimal tour that visits each city only once (otherwise, by the triangle inequality, a shortcut that skips a repeated visit would not increase the tour length).
Complexity of approximation
In the general case, finding a shortest travelling salesman tour is NPO-complete. If the distance measure is a metric (and thus symmetric), the problem becomes APX-complete, and the algorithm of Christofides and Serdyukov approximates it within 1.5.
If the distances are restricted to 1 and 2 (but still are a metric), then the approximation ratio becomes 8/7. In the asymmetric case with triangle inequality, in 2018, a constant-factor approximation was developed by Svensson, Tarnawski, and Végh. An algorithm by Vera Traub and Jens Vygen achieves a performance ratio of $22 + \varepsilon$. The best known inapproximability bound is 75/74.
The corresponding maximization problem of finding the longest travelling salesman tour is approximable within 63/38. If the distance function is symmetric, then the longest tour can be approximated within 4/3 by a deterministic algorithm and within $(33 + \varepsilon)/25$ by a randomized algorithm.
Human and animal performance
The TSP, in particular the Euclidean variant of the problem, has attracted the attention of researchers in cognitive psychology. It has been observed that humans are able to produce near-optimal solutions quickly, in a close-to-linear fashion, with performance that ranges from 1% less efficient, for graphs with 10–20 nodes, to 11% less efficient for graphs with 120 nodes. The apparent ease with which humans accurately generate near-optimal solutions to the problem has led researchers to hypothesize that humans use one or more heuristics, with the two most popular theories arguably being the convex-hull hypothesis and the crossing-avoidance heuristic. However, additional evidence suggests that human performance is quite varied, and individual differences as well as graph geometry appear to affect performance in the task. Nevertheless, results suggest that computer performance on the TSP may be improved by understanding and emulating the methods used by humans for these problems, and have also led to new insights into the mechanisms of human thought. The first issue of the Journal of Problem Solving was devoted to the topic of human performance on TSP, and a 2011 review listed dozens of papers on the subject.
A 2011 study in animal cognition titled "Let the Pigeon Drive the Bus," named after the children's book Don't Let the Pigeon Drive the Bus!, examined spatial cognition in pigeons by studying their flight patterns between multiple feeders in a laboratory in relation to the travelling salesman problem. In the first experiment, pigeons were placed in the corner of a lab room and allowed to fly to nearby feeders containing peas. The researchers found that pigeons largely used proximity to determine which feeder they would select next. In the second experiment, the feeders were arranged in such a way that flying to the nearest feeder at every opportunity would be largely inefficient if the pigeons needed to visit every feeder. The results of the second experiment indicate that pigeons, while still favoring proximity-based solutions, "can plan several steps ahead along the route when the differences in travel costs between efficient and less efficient routes based on proximity become larger." These results are consistent with other experiments done with non-primates, which have proven that some non-primates were able to plan complex travel routes. This suggests non-primates may possess a relatively sophisticated spatial cognitive ability.
Natural computation
When presented with a spatial configuration of food sources, the amoeboid Physarum polycephalum adapts its morphology to create an efficient path between the food sources, which can also be viewed as an approximate solution to TSP.
Benchmarks
For benchmarking of TSP algorithms, TSPLIB, a library of sample instances of the TSP and related problems, is maintained; see the TSPLIB external reference. Many of them are lists of actual cities and layouts of actual printed circuits.
Popular culture
Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the U.S. government to solve the most elusive problem in computer-science history: P vs. NP.
Solutions to the problem are used by mathematician Robert A. Bosch in a subgenre called TSP art.
| Mathematics | Graph theory | null |
31260 | https://en.wikipedia.org/wiki/Titration | Titration | Titration (also known as titrimetry and volumetric analysis) is a common laboratory method of quantitative chemical analysis to determine the concentration of an identified analyte (a substance to be analyzed). A reagent, termed the titrant or titrator, is prepared as a standard solution of known concentration and volume. The titrant reacts with a solution of analyte (which may also be termed the titrand) to determine the analyte's concentration. The volume of titrant that reacted with the analyte is termed the titration volume.
History and etymology
The word "titration" descends from the French word titrer (1543), meaning the proportion of gold or silver in coins or in works of gold or silver; i.e., a measure of fineness or purity. Tiltre became titre, which thus came to mean the "fineness of alloyed gold", and then the "concentration of a substance in a given sample". In 1828, the French chemist Joseph Louis Gay-Lussac first used titre as a verb (titrer), meaning "to determine the concentration of a substance in a given sample".
Volumetric analysis originated in late 18th-century France. French chemist François-Antoine-Henri Descroizilles (fr) developed the first burette (which was similar to a graduated cylinder) in 1791. Gay-Lussac developed an improved version of the burette that included a side arm, and invented the terms "pipette" and "burette" in an 1824 paper on the standardization of indigo solutions. The first true burette was invented in 1845 by the French chemist Étienne-Ossian Henry (1798–1873). A major improvement of the method and popularization of volumetric analysis was due to Karl Friedrich Mohr, who redesigned the burette into a simple and convenient form, and who wrote the first textbook on the topic, Lehrbuch der chemisch-analytischen Titrirmethode (Textbook of analytical chemistry titration methods), published in 1855.
Procedure
A typical titration begins with a beaker or Erlenmeyer flask containing a very precise amount of the analyte and a small amount of indicator (such as phenolphthalein) placed underneath a calibrated burette or chemistry pipetting syringe containing the titrant. Small volumes of the titrant are then added to the analyte and indicator until the indicator changes color in reaction to the titrant saturation threshold, signalling arrival at the endpoint of the titration: the point at which the amount of titrant added balances the amount of analyte present, according to the reaction between the two. Depending on the endpoint desired, single drops, or less than a single drop, of the titrant can make the difference between a permanent and a temporary change in the indicator.
Preparation techniques
Typical titrations require titrant and analyte to be in a liquid (solution) form. Though solids are usually dissolved into an aqueous solution, other solvents such as glacial acetic acid or ethanol are used for special purposes (as in petrochemistry, which specializes in petroleum). Concentrated analytes are often diluted to improve accuracy.
Many non-acid–base titrations require a constant pH during the reaction. Therefore, a buffer solution may be added to the titration chamber to maintain the pH.
In instances where two reactants in a sample may react with the titrant and only one is the desired analyte, a separate masking solution may be added to the reaction chamber which eliminates the effect of the unwanted ion.
Some reduction-oxidation (redox) reactions may require heating the sample solution and titrating while the solution is still hot to increase the reaction rate. For instance, the oxidation of some oxalate solutions requires heating to to maintain a reasonable rate of reaction.
Titration curves
A titration curve is a graph in which the x-coordinate represents the volume of titrant added since the beginning of the titration, and the y-coordinate represents the concentration of the analyte at the corresponding stage of the titration (in an acid–base titration, the y-coordinate usually represents the pH of the solution).
In an acid–base titration, the titration curve represents the strength of the corresponding acid and base. For a strong acid and a strong base, the curve will be relatively smooth and very steep near the equivalence point. Because of this, a small change in titrant volume near the equivalence point results in a large pH change and many indicators would be appropriate (for instance litmus, phenolphthalein or bromothymol blue).
If one reagent is a weak acid or base and the other is a strong acid or base, the titration curve is irregular and the pH shifts less with small additions of titrant near the equivalence point. For example, the titration curve for the titration between oxalic acid (a weak acid) and sodium hydroxide (a strong base) is pictured. The equivalence point occurs between pH 8 and 10, indicating the solution is basic at the equivalence point and an indicator such as phenolphthalein would be appropriate. Titration curves corresponding to weak bases and strong acids behave similarly, with the solution being acidic at the equivalence point and indicators such as methyl orange and bromothymol blue being most appropriate.
Titrations between a weak acid and a weak base have titration curves which are very irregular. Because of this, no definite indicator may be appropriate and a pH meter is often used to monitor the reaction.
The type of function that can be used to describe the curve is termed a sigmoid function.
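As a concrete illustration of the sigmoid shape, the following sketch computes the pH during a strong acid–strong base titration; the function name and the chosen concentrations are illustrative, and water autoionization is neglected, so values extremely close to the equivalence point are rough:

```python
import math

def strong_acid_base_ph(c_acid, v_acid, c_base, v_base):
    """pH after adding v_base litres of strong base (c_base molar) to
    v_acid litres of strong acid (c_acid molar). Water autoionization
    is neglected."""
    excess_h = c_acid * v_acid - c_base * v_base   # mol H+ left over
    volume = v_acid + v_base
    if excess_h > 0:
        return -math.log10(excess_h / volume)
    if excess_h < 0:
        return 14 + math.log10(-excess_h / volume)  # pH = 14 - pOH at 25 C
    return 7.0                                      # equivalence point

# Titrating 50 mL of 0.1 M HCl with 0.1 M NaOH: the pH leaps from
# about 4 to about 10 across a fraction of a millilitre near equivalence.
for v_base in (0.040, 0.0499, 0.050, 0.0501, 0.060):
    print(v_base, round(strong_acid_base_ph(0.1, 0.050, 0.1, v_base), 2))
```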
Types of titrations
There are many types of titrations with different procedures and goals. The most common types of quantitative titration are acid–base titrations and redox titrations.
Acid–base titration
Acid–base titrations depend on the neutralization between an acid and a base when mixed in solution. In addition to the sample, an appropriate pH indicator is added to the titration chamber, representing the pH range of the equivalence point. The acid–base indicator indicates the endpoint of the titration by changing color. The endpoint and the equivalence point are not exactly the same because the equivalence point is determined by the stoichiometry of the reaction while the endpoint is just the color change from the indicator. Thus, a careful selection of the indicator will reduce the indicator error. For example, if the equivalence point is at a pH of 8.4, then the phenolphthalein indicator would be used instead of Alizarin Yellow because phenolphthalein would reduce the indicator error. Common indicators, their colors, and the pH range in which they change color are given in the table above. When more precise results are required, or when the reagents are a weak acid and a weak base, a pH meter or a conductance meter is used.
For very strong bases, such as organolithium reagent, metal amides, and hydrides, water is generally not a suitable solvent and indicators whose pKa are in the range of aqueous pH changes are of little use. Instead, the titrant and indicator used are much weaker acids, and anhydrous solvents such as THF are used.
The pH during titration can be approximated by three kinds of calculations. Before the titration begins, the concentration of [H+] is calculated for the aqueous solution of the weak acid before any base is added. When the number of moles of base added equals the number of moles of the initial acid (the equivalence point), the solution contains only the conjugate base, and the pH is calculated from the hydrolysis of the conjugate base of the acid titrated. Between the starting and end points, [H+] is obtained from the Henderson–Hasselbalch equation and the titration mixture is treated as a buffer. In the Henderson–Hasselbalch equation, the acid and salt concentrations are taken to be the molarities that would have been present even without dissociation or hydrolysis. In a buffer, [H+] can instead be calculated exactly, but the dissociation of HA, the hydrolysis of A− and the self-ionization of water must all be taken into account. Four independent equations must be used:
$$K_w = [\mathrm{H^+}][\mathrm{OH^-}]$$
$$K_a = \frac{[\mathrm{H^+}][\mathrm{A^-}]}{[\mathrm{HA}]}$$
$$[\mathrm{HA}] + [\mathrm{A^-}] = \frac{n_a + n_s}{V}$$
$$\frac{n_s}{V} + [\mathrm{H^+}] = [\mathrm{A^-}] + [\mathrm{OH^-}]$$
In the equations, $n_a$ and $n_s$ are the moles of acid (HA) and salt (XA, where X is the cation), respectively, used in the buffer, and the volume of solution is $V$. The law of mass action is applied to the ionization of water and the dissociation of acid to derive the first and second equations. The mass balance is used in the third equation, where the sum of [HA] and [A−] must equal the total number of moles of dissolved acid and salt, divided by the volume. Charge balance is used in the fourth equation, where the left hand side represents the total charge of the cations and the right hand side represents the total charge of the anions: $n_s/V$ is the molarity of the cation (e.g. sodium, if the sodium salt of the acid or sodium hydroxide is used in making the buffer).
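These four equations reduce to a single condition on [H+] that can be solved numerically; the sketch below (hypothetical function name, bisection on a log scale) assumes the reconstruction above:

```python
import math

def buffer_h_concentration(ka, n_a, n_s, v, kw=1e-14):
    """Solve the four simultaneous buffer equations for [H+] by
    bisection. n_a, n_s: moles of acid HA and of salt XA; v: volume
    of the solution in litres."""
    c_total = (n_a + n_s) / v            # mass balance: [HA] + [A-]

    def residual(h):
        oh = kw / h                      # water self-ionization
        a = n_s / v + h - oh             # charge balance solved for [A-]
        ha = c_total - a                 # mass balance solved for [HA]
        return h * a - ka * ha           # zero when Ka is satisfied

    lo, hi = 1e-14, 1.0                  # bracket [H+] between 1e-14 and 1 M
    for _ in range(200):                 # bisect on a log scale
        mid = math.sqrt(lo * hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return math.sqrt(lo * hi)

# For acetic acid (Ka ~ 1.8e-5), an equimolar buffer gives pH ~ pKa:
# -math.log10(buffer_h_concentration(1.8e-5, 0.1, 0.1, 1.0))  ->  ~4.74
```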
Redox titration
Redox titrations are based on a reduction-oxidation reaction between an oxidizing agent and a reducing agent. A potentiometer or a redox indicator is usually used to determine the endpoint of the titration, as when one of the constituents is the oxidizing agent potassium dichromate. The color change of the solution from orange to green is not definite, therefore an indicator such as sodium diphenylamine is used. Analysis of wines for sulfur dioxide requires iodine as an oxidizing agent. In this case, starch is used as an indicator; a blue starch-iodine complex is formed in the presence of excess iodine, signalling the endpoint.
Some redox titrations do not require an indicator, due to the intense color of the constituents. For instance, in permanganometry a slight persisting pink color signals the endpoint of the titration because of the color of the excess oxidizing agent potassium permanganate. In iodometry, at sufficiently large concentrations, the disappearance of the deep red-brown triiodide ion can itself be used as an endpoint, though at lower concentrations sensitivity is improved by adding starch indicator, which forms an intensely blue complex with triiodide.
Gas phase titration
Gas phase titrations are titrations done in the gas phase, specifically as methods for determining reactive species by reaction with an excess of some other gas, acting as the titrant. In one common gas phase titration, gaseous ozone is titrated with nitrogen oxide according to the reaction
O3 + NO → O2 + NO2.
After the reaction is complete, the remaining titrant and product are quantified (e.g., by Fourier transform infrared spectroscopy, FT-IR); this is used to determine the amount of analyte in the original sample.
Gas phase titration has several advantages over simple spectrophotometry. First, the measurement does not depend on path length, because the same path length is used for the measurement of both the excess titrant and the product. Second, the measurement does not depend on a linear change in absorbance as a function of analyte concentration as defined by the Beer–Lambert law. Third, it is useful for samples containing species which interfere at wavelengths typically used for the analyte.
Complexometric titration
Complexometric titrations rely on the formation of a complex between the analyte and the titrant. In general, they require specialized complexometric indicators that form weak complexes with the analyte. The most common example is the use of starch indicator to increase the sensitivity of iodometric titration, the dark blue complex of starch with iodine and iodide being more visible than iodine alone. Other complexometric indicators are Eriochrome Black T for the titration of calcium and magnesium ions, and the chelating agent EDTA used to titrate metal ions in solution.
Zeta potential titration
Zeta potential titrations are titrations in which the completion is monitored by the zeta potential, rather than by an indicator, in order to characterize heterogeneous systems, such as colloids. One of the uses is to determine the iso-electric point when surface charge becomes zero, achieved by changing the pH or adding surfactant. Another use is to determine the optimum dose for flocculation or stabilization.
Assay
An assay is a type of biological titration used to determine the concentration of a virus or bacterium. Serial dilutions are performed on a sample in a fixed ratio (such as 1:1, 1:2, 1:4, 1:8, etc.) until the last dilution does not give a positive test for the presence of the virus. The positive or negative value may be determined by inspecting the infected cells visually under a microscope or by an immunoenzymetric method such as enzyme-linked immunosorbent assay (ELISA). This value is known as the titer.
Measuring the endpoint of a titration
Different methods to determine the endpoint include:
Indicator: A substance that changes color in response to a chemical change. An acid–base indicator (e.g., phenolphthalein) changes color depending on the pH. Redox indicators are also used. A drop of indicator solution is added to the titration at the beginning; the endpoint has been reached when the color changes.
Potentiometer: An instrument that measures the electrode potential of the solution. These are used for redox titrations; the potential of the working electrode will suddenly change as the endpoint is reached.
pH meter: A potentiometer with an electrode whose potential depends on the amount of H+ ion present in the solution. (This is an example of an ion-selective electrode.) The pH of the solution is measured throughout the titration, more accurately than with an indicator; at the endpoint there will be a sudden change in the measured pH.
Conductivity: A measurement of ions in a solution. Ion concentration can change significantly in a titration, which changes the conductivity. (For instance, during an acid–base titration, the H+ and OH− ions react to form neutral H2O.) As total conductance depends on all ions present in the solution and not all ions contribute equally (due to mobility and ionic strength), predicting the change in conductivity is more difficult than measuring it.
Color change: In some reactions, the solution changes color without any added indicator. This is often seen in redox titrations when the different oxidation states of the product and reactant produce different colors.
Precipitation: If a reaction produces a solid, a precipitate will form during the titration. A classic example is the reaction between Ag+ and Cl− to form the insoluble salt AgCl. Cloudy precipitates usually make it difficult to determine the endpoint precisely. To compensate, precipitation titrations often have to be done as "back" titrations (see below).
Isothermal titration calorimeter: An instrument that measures the heat produced or consumed by the reaction to determine the endpoint. Used in biochemical titrations, such as the determination of how substrates bind to enzymes.
Thermometric titrimetry: Differentiated from calorimetric titrimetry because the heat of the reaction (as indicated by temperature rise or fall) is not used to determine the amount of analyte in the sample solution. Instead, the endpoint is determined by the rate of temperature change.
Spectroscopy: Used to measure the absorption of light by the solution during titration if the spectrum of the reactant, titrant or product is known. The concentration of the material can be determined by Beer's Law.
Amperometry: Measures the current produced by the titration reaction as a result of the oxidation or reduction of the analyte. The endpoint is detected as a change in the current. This method is most useful when the excess titrant can be reduced, as in the titration of halides with Ag+.
Endpoint and equivalence point
Though the terms equivalence point and endpoint are often used interchangeably, they are different terms. Equivalence point is the theoretical completion of the reaction: the volume of added titrant at which the number of moles of titrant is equal to the number of moles of analyte, or some multiple thereof (as in polyprotic acids). Endpoint is what is actually measured, a physical change in the solution as determined by an indicator or an instrument mentioned above.
There is a slight difference between the endpoint and the equivalence point of the titration. This error is referred to as an indicator error, and it is indeterminate.
Back titration
Back titration is a titration done in reverse; instead of titrating the original sample, a known excess of standard reagent is added to the solution, and the excess is titrated. A back titration is useful if the endpoint of the reverse titration is easier to identify than the endpoint of the normal titration, as with precipitation reactions. Back titrations are also useful if the reaction between the analyte and the titrant is very slow, or when the analyte is in a non-soluble solid.
Graphical methods
The titration process creates solutions with compositions ranging from pure acid to pure base. Identifying the pH associated with any stage in the titration process is relatively simple for monoprotic acids and bases. The presence of more than one acid or base group complicates these computations. Graphical methods, such as the equiligraph, have long been used to account for the interaction of coupled equilibria.
Particular uses
Acid–base titrations
For biodiesel fuel: waste vegetable oil (WVO) must be neutralized before a batch may be processed. A portion of WVO is titrated with a base to determine acidity, so the rest of the batch may be neutralized properly. This removes free fatty acids from the WVO that would normally react to make soap instead of biodiesel fuel.
Kjeldahl method: a measure of nitrogen content in a sample. Organic nitrogen is digested into ammonia with sulfuric acid and potassium sulfate. Finally, ammonia is back titrated with boric acid and then sodium carbonate.
Acid value: the mass in milligrams of potassium hydroxide (KOH) required to titrate fully an acid in one gram of sample. An example is the determination of free fatty acid content.
Saponification value: the mass in milligrams of KOH required to saponify a fatty acid in one gram of sample. Saponification is used to determine average chain length of fatty acids in fat.
Ester value (or ester index): a calculated index. Ester value = Saponification value – Acid value.
Amine value: the mass in milligrams of KOH equal to the amine content in one gram of sample.
Hydroxyl value: the mass in milligrams of KOH corresponding to hydroxyl groups in one gram of sample. The analyte is acetylated using acetic anhydride then titrated with KOH.
Redox titrations
Winkler test for dissolved oxygen: Used to determine oxygen concentration in water. Oxygen in water samples is reduced using manganese(II) sulfate, which reacts with potassium iodide to produce iodine. The iodine is released in proportion to the oxygen in the sample, thus the oxygen concentration is determined with a redox titration of iodine with thiosulfate using a starch indicator.
Vitamin C: Also known as ascorbic acid, vitamin C is a powerful reducing agent. Its concentration can easily be identified when titrated with the blue dye Dichlorophenolindophenol (DCPIP) which becomes colorless when reduced by the vitamin.
Benedict's reagent: Excess glucose in urine may indicate diabetes in a patient. Benedict's method is the conventional method to quantify glucose in urine using a prepared reagent. During this type of titration, glucose reduces cupric ions to cuprous ions which react with potassium thiocyanate to produce a white precipitate, indicating the endpoint.
Bromine number: A measure of unsaturation in an analyte, expressed in milligrams of bromine absorbed by 100 grams of sample.
Iodine number: A measure of unsaturation in an analyte, expressed in grams of iodine absorbed by 100 grams of sample.
Miscellaneous
Karl Fischer titration: A potentiometric method to analyze trace amounts of water in a substance. A sample is dissolved in methanol and titrated with Karl Fischer reagent (consisting of iodine, sulfur dioxide, a base, and a solvent such as an alcohol). The reagent contains iodine, which reacts proportionally with water. Thus, the water content can be determined by monitoring the electric potential of excess iodine.
| Physical sciences | Chemical methods | Chemistry |
31296 | https://en.wikipedia.org/wiki/Tachyon | Tachyon | A tachyon () or tachyonic particle is a hypothetical particle that always travels faster than light. Physicists believe that faster-than-light particles cannot exist because they are inconsistent with the known laws of physics. If such particles did exist they could be used to send signals faster than light and into the past. According to the theory of relativity this would violate causality, leading to logical paradoxes such as the grandfather paradox. Tachyons would exhibit the unusual property of increasing in speed as their energy decreases, and would require infinite energy to slow to the speed of light. No verifiable experimental evidence for the existence of such particles has been found.
In the 1967 paper that coined the term, Gerald Feinberg proposed that tachyonic particles could be made from excitations of a quantum field with imaginary mass. However, it was soon realized that Feinberg's model did not in fact allow for superluminal (faster-than-light) particles or signals and that tachyonic fields merely give rise to instabilities, not causality violations. The term tachyonic field refers to imaginary mass fields rather than to faster-than-light particles.
Etymology
The term tachyon comes from the Greek ταχύς (tachus), meaning "swift". The complementary particle types are called luxons (which always move at the speed of light) and bradyons (which always move slower than light); both of these particle types are known to exist.
History
The first hypothesis regarding faster-than-light particles is sometimes attributed to physicist Arnold Sommerfeld, who, in 1904, named them "meta-particles". The possibility of the existence of faster-than-light particles was also proposed in 1923.
The term tachyon was coined by Gerald Feinberg in a 1967 paper titled "Possibility of faster-than-light particles". He had been inspired by the science-fiction story "Beep" by James Blish. Feinberg studied the kinematics of such particles according to special relativity. In his paper, he also introduced fields with imaginary mass (now also referred to as tachyons) in an attempt to understand the microphysical origin such particles might have.
Oleksa-Myron Bilaniuk, Vijay Deshpande and E. C. George Sudarshan had discussed such particles earlier, in their 1962 paper on the topic and again in 1969.
In September 2011, it was reported that neutrinos in the OPERA experiment had apparently traveled faster than the speed of light; however, later updates from CERN indicate that the faster-than-light readings were due to a faulty element of the experiment's fibre optic timing system.
Special relativity
In special relativity, a faster-than-light particle would have spacelike four-momentum, unlike ordinary particles that have time-like four-momentum. While some theories suggest the mass of tachyons is imaginary, modern formulations often consider their mass to be real, with redefined formulas for momentum and energy. Additionally, since tachyons are confined to the spacelike portion of the energy–momentum graph, they cannot slow down to subluminal (slower-than-light) speeds.
Mass
In a Lorentz invariant theory, the same formulas that apply to ordinary slower-than-light particles (sometimes called bradyons in discussions of tachyons) must also apply to tachyons. In particular, the energy–momentum relation:
$$E^2 = p^2c^2 + m^2c^4$$
(where $p$ is the relativistic momentum of the bradyon and $m$ is its rest mass) should still apply, along with the formula for the total energy of a particle:
$$E = \frac{mc^2}{\sqrt{1 - \frac{v^2}{c^2}}}$$
This equation shows that the total energy of a particle (bradyon or tachyon) contains a contribution from its rest mass (the "rest mass–energy") and a contribution from its motion, the kinetic energy. When $v$ (the particle's velocity) is larger than $c$ (the speed of light), the denominator in the equation for the energy is imaginary, as the value under the square root is negative. Because the total energy of the particle must be real (and not a complex or imaginary number) in order to have any practical meaning as a measurement, the numerator must also be imaginary (i.e. the rest mass $m$ must be imaginary, as a pure imaginary number divided by another pure imaginary number is a real number).
In some modern formulations of the theory, the mass of tachyons is regarded as real.
Speed
One curious effect is that, unlike ordinary particles, the speed of a tachyon increases as its energy decreases. In particular, $E$ approaches zero when $v$ approaches infinity. (For ordinary bradyonic matter, $E$ increases with increasing speed, becoming arbitrarily large as $v$ approaches $c$, the speed of light.) Therefore, just as bradyons are forbidden to break the light-speed barrier, so are tachyons forbidden from slowing down to below $c$, because infinite energy is required to reach the barrier from either above or below.
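This behaviour follows directly from the energy formula above: taking the rest mass imaginary, $m = i\mu$ with $\mu$ real, makes the energy real and positive precisely when $v > c$:

```latex
\[
E \;=\; \frac{m c^2}{\sqrt{1 - v^2/c^2}}
  \;=\; \frac{i\mu c^2}{i\sqrt{v^2/c^2 - 1}}
  \;=\; \frac{\mu c^2}{\sqrt{v^2/c^2 - 1}},
\]
% so E grows without bound as v -> c from above, and E -> 0 as v -> infinity.
```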
As noted by Albert Einstein, Richard C. Tolman, and others, special relativity implies that faster-than-light particles, if they existed, could be used to communicate backwards in time.
Neutrinos
In 1985, Chodos proposed that neutrinos can have a tachyonic nature. The possibility of standard model particles moving at faster-than-light speeds can be modeled using Lorentz invariance violating terms, for example in the Standard-Model Extension. In this framework, neutrinos experience Lorentz-violating oscillations and can travel faster than light at high energies. This proposal was strongly criticized.
Superluminal information
If tachyons can transmit information faster than light, then, according to relativity, they violate causality, leading to logical paradoxes of the "kill your own grandfather" type. This is often illustrated with thought experiments such as the "tachyon telephone paradox" or "logically pernicious self-inhibitor."
The problem can be understood in terms of the relativity of simultaneity in special relativity, which says that different inertial reference frames will disagree on whether two events at different locations happened "at the same time" or not, and they can also disagree on the order of the two events. (Technically, these disagreements occur when the spacetime interval between the events is 'space-like', meaning that neither event lies in the future light cone of the other.)
If one of the two events represents the sending of a signal from one location and the second event represents the reception of the same signal at another location, then, as long as the signal is moving at the speed of light or slower, the mathematics of simultaneity ensures that all reference frames agree that the transmission-event happened before the reception-event. However, in the case of a hypothetical signal moving faster than light, there would always be some frames in which the signal was received before it was sent, so that the signal could be said to have moved backward in time. Because one of the two fundamental postulates of special relativity says that the laws of physics should work the same way in every inertial frame, if it is possible for signals to move backward in time in any one frame, it must be possible in all frames. This means that if observer A sends a signal to observer B which moves faster than light in A's frame but backwards in time in B's frame, and then B sends a reply which moves faster than light in B's frame but backwards in time in A's frame, it could work out that A receives the reply before sending the original signal, challenging causality in every frame and opening the door to severe logical paradoxes. This is known as the tachyonic antitelephone.
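The sign flip at the heart of this argument is a one-line consequence of the Lorentz transformation of time intervals:

```latex
% A signal covers \Delta x in time \Delta t, i.e. travels at
% u = \Delta x/\Delta t. In a frame moving at velocity v relative
% to the first:
\[
\Delta t' \;=\; \gamma\!\left(\Delta t - \frac{v\,\Delta x}{c^2}\right)
          \;=\; \gamma\,\Delta t\left(1 - \frac{u v}{c^2}\right),
\]
% which is negative (reception precedes emission) whenever uv > c^2.
% For u <= c this cannot happen with any admissible |v| < c, but for
% u > c a frame with c^2/u < v < c always exists.
```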
Reinterpretation principle
The reinterpretation principle asserts that a tachyon sent back in time can always be reinterpreted as a tachyon traveling forward in time, because observers cannot distinguish between the emission and absorption of tachyons. The attempt to detect a tachyon from the future (and violate causality) would actually create the same tachyon and send it forward in time (which is causal).
However, this principle is not widely accepted as resolving the paradoxes. Instead, what would be required to avoid paradoxes is that, unlike any known particle, tachyons do not interact in any way and can never be detected or observed, because otherwise a tachyon beam could be modulated and used to create an anti-telephone or a "logically pernicious self-inhibitor". All forms of energy are believed to interact at least gravitationally, and many authors state that superluminal propagation in Lorentz invariant theories always leads to causal paradoxes.
Fundamental models
In modern physics, all fundamental particles are regarded as excitations of quantum fields. There are several distinct ways in which tachyonic particles could be embedded into a field theory.
Fields with imaginary mass
In the paper that coined the term "tachyon", Gerald Feinberg studied Lorentz invariant quantum fields with imaginary mass. Because the group velocity for such a field is superluminal, naively it appears that its excitations propagate faster than light. However, it was quickly understood that the superluminal group velocity does not correspond to the speed of propagation of any localized excitation (like a particle). Instead, the imaginary mass (a negative mass squared) represents an instability to tachyon condensation, and all excitations of the field propagate subluminally and are consistent with causality. Despite having no faster-than-light propagation, such fields are referred to simply as "tachyons" in many sources.
Tachyonic fields play an important role in modern physics. Perhaps the most famous is the Higgs boson of the Standard Model of particle physics, which has an imaginary mass in its uncondensed phase. In general, the phenomenon of spontaneous symmetry breaking, which is closely related to tachyon condensation, plays an important role in many aspects of theoretical physics, including the Ginzburg–Landau and BCS theories of superconductivity. Another example of a tachyonic field is the tachyon of bosonic string theory.
Tachyons are predicted by bosonic string theory and also by the Neveu–Schwarz (NS) and NS–NS sectors, which are respectively the open bosonic sector and the closed bosonic sector, of RNS superstring theory prior to the GSO projection. However, such tachyons are not possible because of the Sen conjecture, also known as tachyon condensation. This resulted in the necessity of the GSO projection.
Lorentz-violating theories
In theories that do not respect Lorentz invariance, the speed of light is not (necessarily) a barrier, and particles can travel faster than the speed of light without infinite energy or causal paradoxes. A class of field theories of that type is the so-called Standard Model extensions. However, the experimental evidence for Lorentz invariance is extremely good, so such theories are very tightly constrained.
Fields with non-canonical kinetic term
By modifying the kinetic energy of the field, it is possible to produce Lorentz invariant field theories with excitations that propagate superluminally. However, such theories, in general, do not have a well-defined Cauchy problem (for reasons related to the issues of causality discussed above), and are probably inconsistent quantum mechanically.
In fiction
Tachyons have appeared in many works of fiction. They have been used as a standby mechanism upon which many science fiction authors rely to establish faster-than-light communication, with or without reference to causality issues. The word tachyon has become widely recognized to such an extent that it can impart a science-fictional connotation even if the subject in question has no particular relation to superluminal travel (a form of technobabble, akin to positronic brain).
| Physical sciences | Subatomic particles: General | Physics |
31302 | https://en.wikipedia.org/wiki/Taiga | Taiga | Taiga or tayga ( ; , ), also known as boreal forest or snow forest, is a biome characterized by coniferous forests consisting mostly of pines, spruces, and larches. The taiga, or boreal forest, is the world's largest land biome. In North America, it covers most of inland Canada, Alaska, and parts of the northern contiguous United States. In Eurasia, it covers most of Sweden, Finland, much of Russia from Karelia in the west to the Pacific Ocean (including much of Siberia), much of Norway and Estonia, some of the Scottish Highlands, some lowland/coastal areas of Iceland, and areas of northern Kazakhstan, northern Mongolia, and northern Japan (on the island of Hokkaidō).
The principal tree species, depending on the length of the growing season and summer temperatures, vary across the world. The taiga of North America is mostly spruce; Scandinavian and Finnish taiga consists of a mix of spruce, pines and birch; Russian taiga has spruces, pines and larches depending on the region; and the Eastern Siberian taiga is a vast larch forest.
Taiga in its current form is a relatively recent phenomenon, having only existed for the last 12,000 years since the beginning of the Holocene epoch, covering land that had been mammoth steppe or under the Scandinavian Ice Sheet in Eurasia and under the Laurentide Ice Sheet in North America during the Late Pleistocene.
Although at high elevations taiga grades into alpine tundra through Krummholz, it is not exclusively an alpine biome, and unlike subalpine forest, much of taiga is lowlands.
The term "taiga" is not used consistently by all cultures. In the English language, "boreal forest" is used in the United States and Canada in referring to more southerly regions, while "taiga" is used to describe the more northern, barren areas approaching the tree line and the tundra. Hoffman (1958) discusses the origin of this differential use in North America and how this differentiation distorts established Russian usage.
Climate change is a threat to taiga, and how the carbon dioxide absorbed or emitted should be treated by carbon accounting is controversial.
Climate and geography
Taiga covers or 11.5% of the Earth's land area, second only to deserts and xeric shrublands. The largest areas are located in Russia and Canada. In Sweden taiga is associated with the Norrland terrain.
Temperature
After the permanent ice caps and tundra, taiga is the terrestrial biome with the lowest annual average temperatures, with mean annual temperature generally varying from . Extreme winter minimums in the northern taiga are typically lower than those of the tundra. There are taiga areas of eastern Siberia and interior Alaska-Yukon where the mean annual temperature reaches down to , and the lowest reliably recorded temperatures in the Northern Hemisphere were recorded in the taiga of northeastern Russia.
Taiga has a subarctic climate with very large temperature range between seasons. would be a typical winter day temperature and an average summer day, but the long, cold winter is the dominant feature. This climate is classified as Dfc, Dwc, Dsc, Dfd and Dwd in the Köppen climate classification scheme, meaning that the short summers (24 h average or more), although generally warm and humid, only last 1–3 months, while winters, with average temperatures below freezing, last 5–7 months.
In Siberian taiga the average temperature of the coldest month is between and . There are also some much smaller areas grading towards the oceanic Cfc climate with milder winters, whilst the extreme south and (in Eurasia) west of the taiga reaches into humid continental climates (Dfb, Dwb) with longer summers.
According to some sources, the boreal forest grades into a temperate mixed forest when mean annual temperature reaches about . Discontinuous permafrost is found in areas with mean annual temperature below freezing, whilst in the Dfd and Dwd climate zones continuous permafrost occurs and restricts growth to very shallow-rooted trees like Siberian larch.
Growing season
The growing season, when the vegetation in the taiga comes alive, is usually slightly longer than the climatic definition of summer as the plants of the boreal biome have a lower temperature threshold to trigger growth than other plants. Some sources claim 130 days growing season as typical for the taiga.
In Canada and Scandinavia, the growing season is often estimated by using the period of the year when the 24-hour average temperature is or more. For the Taiga Plains in Canada, growing season varies from 80 to 150 days, and in the Taiga Shield from 100 to 140 days.
Other sources define growing season by frost-free days. Data for locations in southwest Yukon gives 80–120 frost-free days. The closed canopy boreal forest in Kenozersky National Park near Plesetsk, Arkhangelsk Province, Russia, on average has 108 frost-free days.
The longest growing season is found in the smaller areas with oceanic influences; in coastal areas of Scandinavia and Finland, the growing season of the closed boreal forest can be 145–180 days. The shortest growing season is found at the northern taiga–tundra ecotone, where the northern taiga forest no longer can grow and the tundra dominates the landscape when the growing season is down to 50–70 days, and the 24-hr average of the warmest month of the year usually is or less.
High latitudes mean that the sun does not rise far above the horizon, and less solar energy is received than further south. But the high latitude also ensures very long summer days, as the sun stays above the horizon nearly 20 hours each day, or up to 24 hours, with only around 6 hours of daylight, or none, occurring in the dark winters, depending on latitude. The areas of the taiga inside the Arctic Circle have midnight sun in mid-summer and polar night in mid-winter.
Precipitation
The taiga experiences relatively low precipitation throughout the year (generally annually, in some areas), primarily as rain during the summer months, but also as snow or fog. Snow may remain on the ground for as long as nine months in the northernmost extensions of the taiga biome.
The fog, especially predominant in low-lying areas during and after the thawing of frozen Arctic seas, stops sunshine from getting through to plants even during the long summer days. As evaporation is consequently low for most of the year, annual precipitation exceeds evaporation and is sufficient to sustain dense vegetation growth, including large trees. This explains the striking difference in biomass per square metre between the taiga and the steppe biomes (in warmer climates), where evapotranspiration exceeds precipitation, restricting vegetation to mostly grasses.
In general, taiga grows to the south of the July isotherm, occasionally as far north as the July isotherm, with the southern limit more variable. Depending on rainfall, and taiga may be replaced by forest steppe south of the July isotherm where rainfall is very low, but more typically extends south to the July isotherm, and locally where rainfall is higher, such as in eastern Siberia and adjacent Outer Manchuria, south to the July isotherm.
In these warmer areas the taiga has higher species diversity, with more warmth-loving species such as Korean pine, Jezo spruce, and Manchurian fir, and merges gradually into mixed temperate forest or, more locally (on the Pacific Ocean coasts of North America and Asia), into coniferous temperate rainforests where oak and hornbeam appear and join the conifers, birch and Populus tremula.
Glaciation
The area currently classified as taiga in Europe and North America (except Alaska) was recently glaciated. As the glaciers receded they left depressions in the topography that have since filled with water, creating lakes and bogs (especially muskeg soil) found throughout the taiga.
Soils
Taiga soil tends to be young and poor in nutrients, lacking the deep, organically enriched profile present in temperate deciduous forests. The colder climate hinders development of soil, and the ease with which plants can use its nutrients. The relative lack of deciduous trees, which drop huge volumes of leaves annually, and grazing animals, which contribute significant manure, are also factors. The diversity of soil organisms in the boreal forest is high, comparable to the tropical rainforest.
Fallen leaves and moss can remain on the forest floor for a long time in the cool, moist climate, which limits their organic contribution to the soil. Acids from evergreen needles further leach the soil, creating spodosol, also known as podzol, and the acidic forest floor often has only lichens and some mosses growing on it. In clearings in the forest and in areas with more boreal deciduous trees, there are more herbs and berries growing, and soils are consequently deeper.
Flora
Since North America and Eurasia were originally connected by the Bering land bridge, a number of animal and plant species, more animals than plants, were able to colonize both land masses, and are globally-distributed throughout the taiga biome (see Circumboreal Region). Others differ regionally, typically with each genus having several distinct species, each occupying different regions of the taiga. Taigas also have some small-leaved deciduous trees, like birch, alder, willow, and poplar. These grow mostly in areas further south of the most extreme winter weather.
The Dahurian larch tolerates the coldest winters of the Northern Hemisphere, in eastern Siberia. The very southernmost parts of the taiga may have trees such as oak, maple, elm and lime scattered among the conifers, and there is usually a gradual transition into a temperate, mixed forest, such as the eastern forest-boreal transition of eastern Canada. In the interior of the continents, with the driest climates, the boreal forests might grade into temperate grassland.
There are two major types of taiga. The southern part is the closed canopy forest, consisting of many closely-spaced trees and mossy groundcover. In clearings in the forest, shrubs and wildflowers are common, such as the fireweed and lupine. The other type is the lichen woodland or sparse taiga, with trees that are farther-spaced and lichen groundcover; the latter is common in the northernmost taiga. In the northernmost taiga, the forest cover is not only more sparse, but often stunted in growth form; moreover, ice-pruned, asymmetric black spruce (in North America) are often seen, with diminished foliage on the windward side.
In Canada, Scandinavia and Finland, the boreal forest is usually divided into three subzones: The high boreal (northern boreal/taiga zone), the middle boreal (closed forest), and the southern boreal, a closed-canopy, boreal forest with some scattered temperate, deciduous trees among the conifers. Commonly seen are species such as maple, elm and oak. This southern boreal forest experiences the longest and warmest growing season of the biome. In some regions, including Scandinavia and western Russia, this subzone is commonly used for agricultural purposes.
The boreal forest is home to many types of berries. Some species are confined to the southern and middle closed-boreal forest (such as wild strawberry and partridgeberry); others grow in most areas of the taiga (such as cranberry and cloudberry). Some berries can grow in both the taiga and the lower arctic (southern regions) tundra, such as bilberry, bunchberry and lingonberry.
The forests of the taiga are largely coniferous, dominated by larch, spruce, fir and pine. The woodland mix varies according to geography and climate; for example, the Eastern Canadian forests ecoregion (of the higher elevations of the Laurentian Mountains and the northern Appalachian Mountains) in Canada is dominated by balsam fir Abies balsamea, while further north, the Eastern Canadian Shield taiga (of northern Quebec and Labrador) is mostly black spruce Picea mariana and tamarack larch Larix laricina.
Evergreen species in the taiga (spruce, fir, and pine) have a number of adaptations specifically for survival in harsh taiga winters, although larch, which is extremely cold-tolerant, is deciduous. Taiga trees tend to have shallow roots to take advantage of the thin soils, while many of them seasonally alter their biochemistry to make them more resistant to freezing, called "hardening". The narrow conical shape of northern conifers, and their downward-drooping limbs, also help them shed snow.
Because the sun is low on the horizon for most of the year, it is difficult for plants to generate energy from photosynthesis. Pine, spruce and fir do not lose their leaves seasonally and are able to photosynthesize with their older leaves in late winter and spring, when light is good but temperatures are still too low for new growth to commence. The adaptation of evergreen needles limits the water lost due to transpiration, and their dark green color increases their absorption of sunlight. Although precipitation is not a limiting factor, the ground freezes during the winter months and plant roots are unable to absorb water, so desiccation can be a severe problem in late winter for evergreens.
Although the taiga is dominated by coniferous forests, some broadleaf trees also occur, including birch, aspen, willow, and rowan. Many smaller herbaceous plants, such as ferns and occasionally ramps grow closer to the ground. Periodic stand-replacing wildfires (with return times of between 20 and 200 years) clear out the tree canopies, allowing sunlight to invigorate new growth on the forest floor. For some species, wildfires are a necessary part of the life cycle in the taiga; some, e.g. jack pine have cones which only open to release their seed after a fire, dispersing their seeds onto the newly cleared ground; certain species of fungi (such as morels) are also known to do this. Grasses grow wherever they can find a patch of sun; mosses and lichens thrive on the damp ground and on the sides of tree trunks. In comparison with other biomes, however, the taiga has low botanical diversity.
Coniferous trees are the dominant plants of the taiga biome. Very few species, in four main genera, are found: the evergreen spruce, fir and pine, and the deciduous larch. In North America, one or two species of fir, and one or two species of spruce, are dominant. Across Scandinavia and western Russia, the Scots pine is a common component of the taiga, while taiga of the Russian Far East and Mongolia is dominated by larch. Rich in spruce and Scots pine (in the western Siberian plain), the taiga is dominated by larch in Eastern Siberia, before returning to its original floristic richness on the Pacific shores. Two deciduous trees mingle throughout southern Siberia: birch and Populus tremula.
Fauna
The boreal forest/taiga supports a relatively small variety of highly specialized and adapted animals, due to the harshness of the climate. Canada's boreal forest includes 85 species of mammals, 130 species of fish, and an estimated 32,000 species of insects. Insects play a critical role as pollinators, decomposers, and as a part of the food web. Many nesting birds, rodents, and small carnivorous mammals rely on them for food in the summer months.
The cold winters and short summers make the taiga a challenging biome for reptiles and amphibians, which depend on environmental conditions to regulate their body temperatures. There are only a few species in the boreal forest, including red-sided garter snake, common European adder, blue-spotted salamander, northern two-lined salamander, Siberian salamander, wood frog, northern leopard frog, boreal chorus frog, American toad, and Canadian toad. Most hibernate underground in winter.
Fish of the taiga must be able to withstand cold water conditions and be able to adapt to life under ice-covered water. Species in the taiga include Alaska blackfish, northern pike, walleye, longnose sucker, white sucker, various species of cisco, lake whitefish, round whitefish, pygmy whitefish, Arctic lamprey, various grayling species, brook trout (including sea-run brook trout in the Hudson Bay area), chum salmon, Siberian taimen, lenok and lake chub.
The taiga is home to a number of large herbivorous mammals, such as Alces alces (moose) and a few subspecies of Rangifer tarandus (reindeer in Eurasia; caribou in North America). Some areas of the more southern closed boreal forest have populations of other Cervidae species, such as the maral, elk, Sitka black-tailed deer, and roe deer. While normally a polar species, some southern herds of muskoxen reside in the taiga of Russia's Far East and North America. The Amur-Kamchatka region of far eastern Russia also supports the snow sheep (the Russian relative of the American bighorn sheep), the wild boar, and the long-tailed goral. The largest animal in the taiga is the wood bison of northern Canada/Alaska; additionally, some numbers of the American plains bison, along with Przewalski's horse, have been introduced into the Russian Far East as part of the taiga regeneration project called Pleistocene Park.
Small mammals of the taiga biome include rodent species such as the beaver, squirrel, chipmunk, marmot, lemming, North American porcupine and vole, as well as a small number of lagomorph species, such as the pika, snowshoe hare and mountain hare. These species have adapted to survive the harsh winters in their native ranges. Some larger mammals, such as bears, eat heartily during the summer in order to gain weight, and then go into hibernation during the winter. Other animals have adapted layers of fur or feathers to insulate them from the cold.
Predatory mammals of the taiga must be adapted to travel long distances in search of scattered prey, or be able to supplement their diet with vegetation or other forms of food, as raccoons do. Mammalian predators of the taiga include Canada lynx, Eurasian lynx, stoat, Siberian weasel, least weasel, sable, American marten, North American river otter, European otter, American mink, wolverine, Asian badger, fisher, timber wolf, Mongolian wolf, coyote, red fox, Arctic fox, grizzly bear, American black bear, Asiatic black bear, Ussuri brown bear, polar bear (only small areas of northern taiga), Siberian tiger, and Amur leopard.
More than 300 species of birds have their nesting grounds in the taiga. Siberian thrush, white-throated sparrow, and black-throated green warbler migrate to this habitat to take advantage of the long summer days and abundance of insects found around the numerous bogs and lakes. Of the 300 species of birds that summer in the taiga, only 30 stay for the winter. These are either carrion feeders or large raptors that can take live mammal prey, such as the golden eagle, rough-legged buzzard (also known as the rough-legged hawk), Steller's sea eagle (in coastal northeastern Russia-Japan), great gray owl, snowy owl, barred owl, great horned owl, crow and raven. The only other birds that overwinter successfully are seed-eaters, which include several species of grouse, capercaillie and crossbills.
Fire
Fire has been one of the most important factors shaping the composition and development of boreal forest stands; it is the dominant stand-renewing disturbance through much of the Canadian boreal forest. The fire history that characterizes an ecosystem is its fire regime, which has three elements: (1) fire type and intensity (e.g., crown fires, severe surface fires, and light surface fires), (2) the size of typical fires of significance, and (3) the frequency or return intervals for specific land units. The average time within a fire regime to burn an area equivalent to the total area of an ecosystem is its fire rotation (Heinselman 1973) or fire cycle (Van Wagner 1978). However, as Heinselman (1981) noted, each physiographic site tends to have its own return interval, so that some areas are skipped for long periods, while others might burn two or more times during a nominal fire rotation.
The dominant fire regime in the boreal forest is high-intensity crown fires or severe surface fires of very large size, often more than 10,000 ha (100 km²), and sometimes more than 400,000 ha (4,000 km²). Such fires kill entire stands. Fire rotations in the drier regions of western Canada and Alaska average 50–100 years, shorter than in the moister climates of eastern Canada, where they may average 200 years or more. Fire cycles also tend to be long near the tree line in the subarctic spruce-lichen woodlands. The longest cycles, possibly 300 years, probably occur in the western boreal in floodplain white spruce.
Amiro et al. (2001) calculated the mean fire cycle for the period 1980 to 1999 in the Canadian boreal forest (including taiga) at 126 years. Increased fire activity has been predicted for western Canada, but parts of eastern Canada may experience less fire in future because of greater precipitation in a warmer climate.
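The fire-cycle concept implies a simple relation: the mean annual burned fraction of an ecosystem is the reciprocal of its fire cycle. A minimal sketch in Python, using the 126-year figure from Amiro et al. (2001); the region area used below is a hypothetical value for illustration only:

```python
# Sketch: a fire cycle is the time needed to burn an area equal to the
# whole ecosystem, so the mean annual burned fraction is its reciprocal.
fire_cycle_years = 126          # mean Canadian boreal fire cycle, 1980-1999 (Amiro et al. 2001)
annual_fraction = 1 / fire_cycle_years
print(f"{annual_fraction:.2%} of the area burns in an average year")  # ~0.79%

# For a hypothetical 100,000 km^2 region (illustrative, not from the source):
region_km2 = 100_000
print(f"~{region_km2 * annual_fraction:,.0f} km^2 burned per year")   # ~794 km^2
```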
The mature boreal forest pattern in the south shows balsam fir dominant on well-drained sites in eastern Canada, changing centrally and westward to a prominence of white spruce, with black spruce and tamarack forming the forests on peats, and with jack pine usually present on dry sites except in the extreme east, where it is absent. The effects of fires are inextricably woven into the patterns of vegetation on the landscape, which in the east favour black spruce, paper birch, and jack pine over balsam fir, and in the west give the advantage to aspen, jack pine, black spruce, and birch over white spruce. Many investigators have reported the ubiquity of charcoal under the forest floor and in the upper soil profile. Charcoal in soils provided Bryson et al. (1965) with clues about the forest history of an area 280 km north of the then-current tree line at Ennadai Lake, District of Keewatin, Northwest Territories.
Two lines of evidence support the thesis that fire has always been an integral factor in the boreal forest: (1) direct, eye-witness accounts and forest-fire statistics, and (2) indirect, circumstantial evidence based on the effects of fire, as well as on persisting indicators. The patchwork mosaic of forest stands in the boreal forest, typically with abrupt, irregular boundaries circumscribing homogeneous stands, is indirect but compelling testimony to the role of fire in shaping the forest. Most boreal forest stands are less than 100 years old, and only in the rather few areas that have escaped burning are there stands of white spruce older than 250 years.
The prevalence of fire-adaptive morphologic and reproductive characteristics of many boreal plant species is further evidence pointing to a long and intimate association with fire. Seven of the ten most common trees in the boreal forest – jack pine, lodgepole pine, aspen, balsam poplar (Populus balsamifera), paper birch, tamarack, and black spruce – can be classed as pioneers in their adaptations for rapid invasion of open areas. White spruce shows some pioneering abilities too, but is less able than black spruce and the pines to disperse seed at all seasons. Only balsam fir and alpine fir seem to be poorly adapted to reproduce after fire, as their cones disintegrate at maturity, leaving no seed in the crowns.
The oldest forests in the northwest boreal region, some older than 300 years, are of white spruce occurring as pure stands on moist floodplains. Here, the frequency of fire is much less than on adjacent uplands dominated by pine, black spruce and aspen. In contrast, in the Cordilleran region, fire is most frequent in the valley bottoms, decreasing upward, as shown by a mosaic of young pioneer pine and broadleaf stands below, and older spruce–fir on the slopes above. Without fire, the boreal forest would become more and more homogeneous, with the long-lived white spruce gradually replacing pine, aspen, balsam poplar, and birch, and perhaps even black spruce, except on the peatlands.
Climate change
During the last quarter of the twentieth century, the zone of latitude occupied by the boreal forest experienced some of the greatest temperature increases on Earth. Winter temperatures have increased more than summer temperatures. In summer, the daily low temperature has increased more than the daily high temperature. The number of days with extremely cold temperatures (e.g., ) has decreased irregularly but systematically in nearly all the boreal region, allowing better survival for tree-damaging insects. In Fairbanks, Alaska, the length of the frost-free season has increased from 60–90 days in the early twentieth century to about 120 days a century later.
It has been hypothesized that boreal environments have only a few states which are stable in the long term: treeless tundra/steppe, forest with more than 75% tree cover, and open woodland with either about 20% or about 45% tree cover. Continued climate change could therefore force at least some of the presently existing taiga forests into one of the two woodland states or even into treeless steppe; it could also shift tundra areas into woodland or forest states as they warm and become more suitable for tree growth.
In keeping with this hypothesis, several studies published in the early 2010s found substantial drought-induced tree loss in the western Canadian boreal forests going back to the 1960s: the trend was weak or even non-existent in the eastern forests but particularly pronounced in the western coniferous forests. However, a 2016 study found no overall Canadian boreal forest trend between 1950 and 2012: while it found improved growth in some southern boreal forests and dampened growth in the north (contrary to what the hypothesis would suggest), those patterns were statistically weak.
A 2018 Landsat reanalysis confirmed a drying trend and a loss of forest in western Canadian forests and some greening in the wetter east, but it also concluded that most of the forest loss attributed to climate change in the earlier studies instead constituted a delayed response to anthropogenic disturbance. Subsequent research found that even in forests where biomass trends did not change, there was a substantial shift over the past 65 years towards deciduous broad-leaved trees with higher drought tolerance, and another Landsat analysis of 100,000 undisturbed sites found that areas with low tree cover became greener in response to warming, but that tree mortality (browning) became the dominant response as the proportion of existing tree cover increased.
While the majority of studies on boreal forest transitions have been done in Canada, similar trends have been detected in other countries. Summer warming has been shown to increase water stress and reduce tree growth in dry areas of the southern boreal forest in central Alaska and portions of far eastern Russia. In Siberia, the taiga is converting from predominantly needle-shedding larch trees to evergreen conifers in response to a warming climate. This is likely to further accelerate warming, as the evergreen trees will absorb more of the sun's rays. Given the vast size of the area, such a change has the potential to affect areas well outside of the region. In much of the boreal forest in Alaska, the growth of white spruce trees is stunted by unusually warm summers, while trees on some of the coldest fringes of the forest are experiencing faster growth than previously. Lack of moisture in the warmer summers is also stressing the birch trees of central Alaska.
In addition to these observations, there has also been work on projecting future forest trends. A 2018 study of the seven tree species dominant in the Eastern Canadian forests found that while 2 °C warming alone increases their growth by around 13% on average, water availability is much more important than temperature and further warming of up to 4 °C would result in substantial declines unless matched by increases in precipitation. A 2019 study suggested that the forest plots commonly used to evaluate boreal forest response to climate change tend to have less evolutionary competition between trees than the typical forest, and that with strong competition, there was little net growth in response to warming.
Climatic change only stimulated growth for trees under weak competition in central boreal forests. A 2021 paper confirmed that boreal forests are much more strongly affected by climate change than the other forest types in Canada and projected that most of the eastern Canadian boreal forests would reach a tipping point around 2080 under the RCP 8.5 scenario, which represents the largest potential increase in anthropogenic emissions. Another 2021 study projected that under the "moderate" SSP2-4.5 scenario, boreal forests would experience a 15% worldwide increase in biomass by the end of the century, but this would be more than offset by a 41% biomass decline in the tropics.
In 2022, the results of a 5-year warming experiment in North America showed that the juveniles of the tree species which currently dominate the southern margins of the boreal forests fare the worst in response to either 1.5 °C or 3.1 °C of warming and the associated reductions in precipitation. While temperate species which would benefit from such conditions are also present in the southern boreal forests, they are both rare and slower-growing.
A 2022 assessment of tipping points in the climate system designated two inter-related tipping points associated with climate change: the die-off of taiga at its southern edge and the area's consequent reversion to grassland (similar to the Amazon rainforest dieback), and the opposite process to the north, where rapid warming of the adjacent tundra converts it to taiga. While both of these processes can already be observed today, the assessment judged that they would likely not become unstoppable (and thus meet the definition of a tipping point) until global warming of around 4 °C. The certainty level is still limited, however: 1.5 °C could be sufficient for either tipping point, but the southern die-off may not be inevitable until 5 °C, and the replacement of tundra with taiga may require 7.2 °C.
Once the relevant level of warming is reached, either process would take at least 40–50 years to finish, and is more likely to unfold over a century or more. While the southern die-off would involve the loss of around 52 billion tons of carbon, the net result would be cooling of around 0.18 °C globally and between 0.5 °C and 2 °C regionally. Likewise, boreal forest expansion into the tundra has a net warming effect of around 0.14 °C globally and 0.5 °C to 1 °C regionally, even though the new forest growth captures around 6 billion tons of carbon. In both cases, this is because snow-covered ground has a much greater albedo than forest. According to a later study, the disappearance of boreal forests can also increase warming despite the albedo effect; the cooling from deforestation in these areas found by previous studies results from models failing to properly capture the effects of evapotranspiration.
Primary boreal forests hold 1,042 billion tonnes of carbon – more than is currently found in the atmosphere, and twice as much as all human-caused greenhouse-gas emissions since 1870. In a warmer climate, their ability to store carbon will be reduced.
Other threats
Human activities
Some of the larger cities situated in this biome are Murmansk, Arkhangelsk, Yakutsk, Anchorage, Yellowknife, Tromsø, Luleå, and Oulu.
Large areas of Siberia's taiga have been harvested for lumber since the collapse of the Soviet Union. Previously, the forest was protected by the restrictions of the Soviet Ministry of Forestry, but with the collapse of the Union, the restrictions on trade with Western nations vanished. Trees are easy to harvest and sell well, so loggers have begun harvesting Russian taiga evergreens for sale to nations to which such trade was previously forbidden by Soviet law.
Insects
Recent years have seen outbreaks of forest-destroying insect pests: the spruce-bark beetle (Dendroctonus rufipennis) in Yukon and Alaska; the mountain pine beetle in British Columbia; the aspen-leaf miner; the larch sawfly; the spruce budworm (Choristoneura fumiferana); and the spruce coneworm.
Pollution
The effect of sulphur dioxide on woody boreal forest species was investigated by Addison et al. (1984), who exposed plants growing on native soils and on oil sands tailings to 15.2 μmol/m³ (0.34 ppm) of SO₂ and measured the effect on net CO₂ assimilation rate (NAR). The Canadian maximum acceptable limit for atmospheric SO₂ is 0.34 ppm. Fumigation with SO₂ significantly reduced NAR in all species and produced visible symptoms of injury in 2–20 days. The decrease in NAR of the deciduous species (trembling aspen [Populus tremuloides], willow [Salix], green alder [Alnus viridis], and white birch [Betula papyrifera]) was significantly more rapid than that of the conifers (white spruce, black spruce [Picea mariana], and jack pine [Pinus banksiana]) or an evergreen angiosperm (Labrador tea) growing on a fertilized Brunisol.
These metabolic and visible injury responses seemed to be related to differences in sulphur uptake, owing in part to higher gas exchange rates in deciduous species than in conifers. Conifers growing in oil sands tailings responded to SO₂ with a significantly more rapid decrease in NAR than those growing in the Brunisol, perhaps because of predisposing toxic material in the tailings. However, sulphur uptake and visible symptom development did not differ between conifers growing on the two substrates.
Acidification of precipitation by anthropogenic, acid-forming emissions has been associated with damage to vegetation and reduced forest productivity. However, 2-year-old white spruce subjected to simulated acid rain (at pH 4.6, 3.6, and 2.6) applied weekly for 7 weeks incurred no statistically significant (P ≤ 0.05) reduction in growth during the experiment compared with the background control (pH 5.6) (Abouguendia and Baschak 1987). Symptoms of injury were nevertheless observed in all treatments, and the number of plants and needles affected increased with increasing rain acidity and with time. Scherbatskoy and Klein (1983) found no significant effect on chlorophyll concentration in white spruce at pH 4.3 and 2.8, but Abouguendia and Baschak (1987) found a significant reduction in white spruce at pH 2.6, while the foliar sulphur content was significantly greater at pH 2.6 than in any of the other treatments.
Protection
The taiga stores enormous quantities of carbon, more than the world's temperate and tropical forests combined, much of it in wetlands and peatland. In fact, current estimates place boreal forests as storing twice as much carbon per unit area as tropical forests. Wildfires could consume a significant part of the global carbon budget, so fire management, at about 12 dollars per tonne of carbon not released, is very cheap compared to the social cost of carbon.
Some nations are discussing protecting areas of the taiga by prohibiting logging, mining, oil and gas production, and other forms of development. Responding to a letter signed by 1,500 scientists calling on political leaders to protect at least half of the boreal forest, two Canadian provincial governments, Ontario and Quebec, offered election promises to discuss measures in 2008 that might eventually classify at least half of their northern boreal forest as "protected". Although both provinces admitted it would take decades to plan, working with Aboriginal and local communities and ultimately mapping out precise boundaries of the areas off-limits to development, the measures were touted to create some of the largest protected areas networks in the world once completed. Since then, however, very little action has been taken.
For instance, in February 2010 the Canadian government established limited protection for 13,000 square kilometres of boreal forest by creating a new 10,700-square-kilometre park reserve in the Mealy Mountains area of eastern Canada and a 3,000-square-kilometre waterway provincial park that follows alongside the Eagle River from headwaters to sea.
Natural disturbance
One of the biggest areas of research and a topic still full of unsolved questions is the recurring disturbance of fire and the role it plays in propagating the lichen woodland. The phenomenon of wildfire by lightning strike is the primary determinant of understory vegetation, and because of this, it is considered to be the predominant force behind community and ecosystem properties in the lichen woodland. The significance of fire is clearly evident when one considers that understory vegetation influences tree seedling germination in the short term and decomposition of biomass and nutrient availability in the long term.
Large, damaging fires recur approximately every 70 to 100 years. Understanding the dynamics of this ecosystem is entangled with discovering the successional paths that the vegetation exhibits after a fire. Trees, shrubs, and lichens all recover from fire-induced damage through vegetative reproduction, as well as by invasion of propagules. Seeds that have fallen and become buried provide little help in the re-establishment of a species. The reappearance of lichens is reasoned to occur because of varying conditions and light/nutrient availability in each different microstate. Several studies have led to the theory that post-fire development can follow any of four pathways: self-replacement, species-dominance relay, species replacement, or gap-phase self-replacement.
Self-replacement is simply the re-establishment of the pre-fire dominant species. Species-dominance relay is a sequential attempt of tree species to establish dominance in the canopy. Species replacement occurs when fires happen frequently enough to interrupt species-dominance relay. Gap-phase self-replacement is the least common pathway and so far has only been documented in western Canada; it is the self-replacement of the surviving species into the canopy gaps left where a fire has killed another species. The particular pathway taken after fire disturbance depends on how well the landscape is able to support trees, as well as on fire frequency. Fire frequency has a large role in shaping the original inception of the lower forest line of the lichen woodland taiga.
It has been hypothesized by Serge Payette that the spruce-moss forest ecosystem was changed into the lichen woodland biome by two compounded strong disturbances: large fire and the appearance and attack of the spruce budworm, an insect deadly to spruce populations in the southern regions of the taiga. J.P. Jasinski confirmed this theory five years later, stating: "Their [lichen woodlands] persistence, along with their previous moss forest histories and current occurrence adjacent to closed moss forests, indicate that they are an alternative stable state to the spruce–moss forests".
Taiga ecoregions
| Physical sciences | Forests | null |
31306 | https://en.wikipedia.org/wiki/Tritium | Tritium | Tritium () or hydrogen-3 (symbol T or H) is a rare and radioactive isotope of hydrogen with a half-life of ~12.3 years. The tritium nucleus (t, sometimes called a triton) contains one proton and two neutrons, whereas the nucleus of the common isotope hydrogen-1 (protium) contains one proton and no neutrons, and that of non-radioactive hydrogen-2 (deuterium) contains one proton and one neutron. Tritium is the heaviest particle-bound isotope of hydrogen. It is one of the few nuclides with a distinct name. The use of the name hydrogen-3, though more systematic, is much less common.
Naturally occurring tritium is extremely rare on Earth. The atmosphere has only trace amounts, formed by the interaction of its gases with cosmic rays. It can be produced artificially by irradiation of lithium or lithium-bearing ceramic pebbles in a nuclear reactor and is a low-abundance byproduct in normal operations of nuclear reactors.
Tritium is used as the energy source in radioluminescent lights for watches, night sights for firearms, numerous instruments and tools, and novelty items such as self-illuminating key chains. It is used in a medical and scientific setting as a radioactive tracer. Tritium is also used as a nuclear fusion fuel, along with more abundant deuterium, in tokamak reactors and in hydrogen bombs. Tritium has also been used commercially in betavoltaic devices such as NanoTritium batteries.
History
Tritium was first detected in 1934 by Ernest Rutherford, Mark Oliphant and Paul Harteck after bombarding deuterium with deuterons (deuterium nuclei). Deuterium is another isotope of hydrogen, which occurs naturally with an abundance of 0.015%. Their experiment could not isolate tritium, which was first accomplished in 1939 by Luis Alvarez and Robert Cornog, who also realized tritium's radioactivity. Willard Libby recognized in 1954 that tritium could be used for radiometric dating of water and wine.
Decay
The half-life of tritium is listed by the National Institute of Standards and Technology as 4,500 ± 8 days (about 12.32 years) – an annualized decay rate of approximately 5.5% per year. Tritium decays into helium-3 by beta-minus decay, as shown in this nuclear equation:
³H → ³He + e⁻ + ν̄ₑ
releasing 18.6 keV of energy in the process. The electron's kinetic energy varies, with an average of 5.7 keV, while the remaining energy is carried off by the nearly undetectable electron antineutrino. Beta particles from tritium can penetrate only about of air, and they are incapable of passing through the dead outermost layer of human skin. Because of their low energy compared to other beta particles, the amount of bremsstrahlung generated is also lower. The unusually low energy released in the tritium beta decay makes the decay (along with that of rhenium-187) useful for absolute neutrino mass measurements in the laboratory.
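The ~5.5% annualized rate quoted above follows directly from the half-life. A minimal sketch, assuming a half-life of about 12.32 years (the NIST figure cited above):

```python
import math

half_life_years = 12.32                          # tritium half-life (~12.3 y, as above)
decay_constant = math.log(2) / half_life_years   # lambda, in 1/year

# Fraction of a tritium sample that decays within one year:
annual_loss = 1 - math.exp(-decay_constant)
print(f"lambda = {decay_constant:.4f} /yr, annual loss = {annual_loss:.2%}")
# lambda = 0.0563 /yr, annual loss = 5.47% -- the ~5.5% per year cited above
```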
The low energy of tritium's radiation makes it difficult to detect tritium-labeled compounds except by using liquid scintillation counting.
Production
Lithium
Tritium is most often produced in nuclear reactors by neutron activation of lithium-6. The release and diffusion of tritium and helium produced by the fission of lithium can take place within ceramics known as breeder ceramics. Production of tritium from lithium-6 in such breeder ceramics is possible with neutrons of any energy, though the cross section is higher when the incident neutrons have lower energy, reaching more than 900 barns for thermal neutrons. This is an exothermic reaction, yielding 4.8 MeV. In comparison, the fusion of deuterium with tritium releases about 17.6 MeV. For applications in proposed fusion energy reactors, such as ITER, pebbles consisting of lithium-bearing ceramics, including Li₂TiO₃ and Li₄SiO₄, are being developed for tritium breeding within a helium-cooled pebble bed, also known as a breeder blanket.
⁶Li + n → ⁴He (2.05 MeV) + ³H (2.75 MeV)
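The 2.05/2.75 MeV split follows from momentum conservation in a two-body breakup: each product's kinetic energy is inversely proportional to its mass. A minimal sketch, approximating the masses by their mass numbers; the same formula reproduces the well-known 3.5/14.1 MeV partition of the 17.6 MeV deuterium–tritium reaction mentioned above:

```python
def two_body_split(q_mev, m1, m2):
    """Kinetic energies of two products sharing reaction energy q_mev.

    Starting from (approximately) zero total momentum, momentum conservation
    makes each product's energy inversely proportional to its mass
    (m1, m2 given here as mass numbers, a good approximation)."""
    e1 = q_mev * m2 / (m1 + m2)
    e2 = q_mev * m1 / (m1 + m2)
    return e1, e2

# 6Li + n -> 4He + 3H, Q = 4.8 MeV: (alpha, triton) energies
print(two_body_split(4.8, 4, 3))    # -> (~2.06, ~2.74) MeV
# 2H + 3H -> 4He + n, Q = 17.6 MeV: (alpha, neutron) energies
print(two_body_split(17.6, 4, 1))   # -> (~3.5, ~14.1) MeV
```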
High-energy neutrons can also produce tritium from lithium-7 in an endothermic reaction, consuming 2.466 MeV. This was discovered when the 1954 Castle Bravo nuclear test produced an unexpectedly high yield. Prior to this test, it was incorrectly assumed that ⁷Li would absorb a neutron to become ⁸Li, which would beta-decay to ⁸Be, which in turn would decay to two ⁴He nuclei – a sequence whose total timeframe is much longer than the duration of the explosion.
⁷Li + n → ⁴He + ³H + n
Boron
High-energy neutrons irradiating boron-10 also occasionally produce tritium:
¹⁰B + n → 2 ⁴He + ³H
A more common result of boron-10 neutron capture is ⁷Li and a single alpha particle.
Especially in pressurized water reactors which only partially thermalize neutrons, the interaction between relatively fast neutrons and the boric acid added as a chemical shim produces small but non-negligible quantities of tritium.
Deuterium
Tritium is also produced in heavy water-moderated reactors whenever a deuterium nucleus captures a neutron. This reaction has a small absorption cross section, making heavy water a good neutron moderator, and relatively little tritium is produced. Even so, cleaning tritium from the moderator may be desirable after several years to reduce the risk of its escaping to the environment. Ontario Power Generation's "Tritium Removal Facility" is capable of processing up to of heavy water a year, and it separates out about of tritium, making it available for other uses.
CANDU reactors typically produce of tritium per year, which is recovered at the Darlington Tritium Recovery Facility (DTRF) attached to the 3,512 MW Darlington Nuclear Generating Station in Ontario. The total production at DTRF between 1989 and 2011 was – with an activity of : an average of about per year.
Deuterium's absorption cross section for thermal neutrons is about 0.52 millibarn, whereas that of oxygen-16 (¹⁶O) is about 0.19 millibarn and that of oxygen-17 (¹⁷O) is about 240 millibarns. ¹⁶O is by far the most common isotope of oxygen in both natural oxygen and heavy water, but depending on the method of isotope separation, heavy water may be slightly richer in ¹⁷O and ¹⁸O. Due to both neutron capture and (n,α) reactions (the latter of which produces ¹⁴C, an undesirable long-lived beta emitter, from ¹⁷O), these heavier oxygen isotopes are net "neutron consumers" and are thus undesirable in the moderator of a natural uranium reactor, which needs to keep neutron absorption outside the fuel as low as feasible. Some facilities that remove tritium also remove (or at least reduce the content of) ¹⁷O and ¹⁸O, which can – at least in principle – be used for isotope labeling.
India, which also has a large fleet of pressurized heavy water reactors (initially CANDU technology, since indigenized and further developed as the IPHWR), likewise removes at least some of the tritium produced in the moderator/coolant of its reactors, but owing to the dual-use nature of tritium and the Indian nuclear weapons program, less information about this is publicly available than for Canada.
Fission
Tritium is an uncommon product of the nuclear fission of uranium-235, plutonium-239, and uranium-233, produced at a rate of about one atom per 10,000 fissions, mainly through ternary fission. The release or recovery of tritium needs to be considered in the operation of nuclear reactors, especially in the reprocessing of nuclear fuel and the storage of spent nuclear fuel. The production of tritium here is not a goal but a side-effect, and it is discharged to the atmosphere in small quantities by some nuclear power plants. Voloxidation is an optional additional step in nuclear reprocessing that removes volatile fission products (such as all isotopes of hydrogen) before an aqueous process begins. This would in principle enable economic recovery of the produced tritium; even if the tritium is simply disposed of rather than used, voloxidation can reduce tritium contamination in the process water, and hence the radioactivity released when that water is discharged, since tritiated water cannot be removed from "ordinary" water except by isotope separation.
Given the specific activity of tritium at , one TBq is equivalent to roughly .
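Although the numeric values here are elided, the conversion can be sketched from first principles. A minimal example, assuming a half-life of 12.32 years and a molar mass of about 3.016 g/mol (both assumptions, not taken from this text):

```python
import math

AVOGADRO = 6.022e23
half_life_s = 12.32 * 365.25 * 24 * 3600    # assumed half-life, in seconds
molar_mass = 3.016                          # g/mol for tritium (assumption)

decay_constant = math.log(2) / half_life_s           # 1/s
atoms_per_gram = AVOGADRO / molar_mass
specific_activity = decay_constant * atoms_per_gram  # Bq per gram
print(f"{specific_activity:.2e} Bq/g")               # ~3.6e14 Bq/g, i.e. ~360 TBq/g
print(f"1 TBq ~ {1e12 / specific_activity * 1000:.1f} mg of tritium")  # ~2.8 mg
```

On these assumptions, the result is consistent with the Fukushima figure quoted below (760 TBq ≈ 2.1 g, since 760 / 360 ≈ 2.1).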
Fukushima Daiichi
In June 2016 the Tritiated Water Task Force released a report on the status of tritium in tritiated water at the Fukushima Daiichi nuclear plant, as part of considering options for final disposal of the stored contaminated cooling water. This identified that the March 2016 holding of tritium on-site was 760 TBq (equivalent to 2.1 g of tritium or 14 mL of pure tritiated water) in a total of 860,000 m³ of stored water. The report also identified the declining concentration of tritium in the water extracted from the buildings for storage, a factor-of-ten decrease over the five years considered (2011–2016), from 3.3 MBq/L to 0.3 MBq/L (after correction for the roughly 5% annual decay of tritium).
According to a report by an expert panel considering the best approach to dealing with this issue, "Tritium could be separated theoretically, but there is no practical separation technology on an industrial scale. Accordingly, a controlled environmental release is said to be the best way to treat low-tritium-concentration water." After a public information campaign sponsored by the Japanese government, the gradual release into the sea of the tritiated water began on 24 August 2023 and is the first of four releases through March 2024. The entire process will take "decades" to complete. China reacted with protest.
The IAEA has endorsed the plan. The water released is diluted to reduce the tritium concentration to less than 1500 Bq/L, far below the limit recommended in drinking water by the WHO.
Helium-3
Tritium's decay product helium-3 has a very large cross section (5330 barns) for reacting with thermal neutrons, expelling a proton; hence, it is rapidly converted back to tritium in nuclear reactors.
³He + n → ¹H + ³H
Cosmic rays
Tritium occurs naturally due to cosmic rays interacting with atmospheric gases. In the most important reaction for natural production, a fast neutron (which must have energy greater than 4.0 MeV) interacts with atmospheric nitrogen:
¹⁴N + n → ¹²C + ³H
Worldwide, the production of tritium from natural sources is 148 petabecquerels per year. The global equilibrium inventory of tritium created by natural sources remains approximately constant at 2,590 petabecquerels. This is due to a fixed production rate, and losses proportional to the inventory.
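The steady-state inventory can be checked from the production rate: in equilibrium, decay losses equal production, so the inventory is the production rate divided by the decay constant. A minimal sketch, again assuming the 12.32-year half-life:

```python
import math

production_pbq_per_year = 148        # natural cosmic-ray production (from the text)
half_life_years = 12.32              # assumed tritium half-life
decay_constant = math.log(2) / half_life_years

# At equilibrium, production equals decay: P = lambda * N  =>  N = P / lambda
equilibrium_inventory = production_pbq_per_year / decay_constant
print(f"~{equilibrium_inventory:,.0f} PBq")   # ~2,630 PBq, near the ~2,590 PBq cited
```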
Production history
USA
Tritium for American nuclear weapons was produced in special heavy water reactors at the Savannah River Site until their closures in 1988. With the Strategic Arms Reduction Treaty (START) after the end of the Cold War, the existing supplies were sufficient for the new, smaller number of nuclear weapons for some time.
of tritium was produced in the United States from 1955 to 1996. Since it continually decays into helium-3, the total amount remaining was about at the time of the report, and about as of 2023.
Tritium production was resumed with irradiation of rods containing lithium (replacing the usual control rods containing boron, cadmium, or hafnium), at the reactors of the commercial Watts Bar Nuclear Plant from 2003 to 2005 followed by extraction of tritium from the rods at the Tritium Extraction Facility at the Savannah River Site beginning in November 2006. Tritium leakage from the rods during reactor operations limits the number that can be used in any reactor without exceeding the maximum allowed tritium levels in the coolant.
Properties
Tritium has an atomic mass of . Diatomic tritium ( or ) is a gas at standard temperature and pressure. Combined with oxygen, it forms tritiated water ().
Compared to hydrogen in its natural composition on Earth, tritium has a higher melting point (20.62 K vs. 13.99 K), a higher boiling point (25.04 K vs. 20.27 K), a higher critical temperature (40.59 K vs. 32.94 K) and a higher critical pressure (1.8317 MPa vs. 1.2858 MPa).
Tritium's specific activity is .
Tritium figures prominently in studies of nuclear fusion due to its favorable reaction cross section and the large amount of energy (17.6 MeV) produced through its reaction with deuterium:
³H + ²H → ⁴He + n
All atomic nuclei contain protons as their only charged particles. They therefore repel one another because like charges repel (Coulomb's law). However, if the atoms have a high enough temperature and pressure (for example, in the core of the Sun), then their random motions can overcome such repulsion, and they can come close enough for the strong nuclear force to take effect, fusing them into heavier atoms.
A tritium nucleus (triton), containing one proton and two neutrons, has the same charge as any hydrogen nucleus, and it experiences the same electrostatic repulsion when close to another nucleus. However, the neutrons in the triton increase the attractive strong nuclear force when it is close enough to another nucleus. As a result, tritium can fuse with other light atoms more easily than ordinary hydrogen can.
The same is true, albeit to a lesser extent, of deuterium. This is why brown dwarfs ("failed" stars) cannot fuse normal hydrogen, but they do fuse a small minority of deuterium nuclei.
Like the other isotopes of hydrogen, tritium is difficult to confine. Rubber, plastic, and some kinds of steel are all somewhat permeable. This has raised concerns that if tritium were used in large quantities, in particular for fusion reactors, it might contribute to radioactive contamination, though its short half-life should prevent significant long-term accumulation in the atmosphere.
The high levels of atmospheric nuclear weapons testing that took place prior to the enactment of the Partial Nuclear Test Ban Treaty proved to be unexpectedly useful to oceanographers. The high levels of tritium oxide introduced into upper layers of the oceans have been used in the years since then to measure the rate of mixing of the upper layers of the oceans with their lower levels.
Health risks
Since tritium is a low energy beta (β) emitter, it is not dangerous externally (its β particles cannot penetrate the skin), but it can be a radiation hazard if inhaled, ingested via food or water, or absorbed through the skin.
Organisms can take up tritiated water (HTO) as they would ordinary water (H₂O). Plants convert HTO into organically bound tritium (OBT), which is then consumed by animals. HTO is retained in humans for around 12 days, with a small portion of it remaining in the body longer. Tritium can be passed along the food chain as one organism feeds on another, though the metabolism of OBT is less well understood than that of HTO. Tritium can incorporate into RNA and DNA molecules within organisms, which can lead to somatic and genetic effects; these can emerge in later generations.
HTO has a short biological half-life in the human body of 7 to 14 days, which both reduces the total effects of single-incident ingestion and precludes long-term bioaccumulation of HTO from the environment. The biological half-life of tritiated water in the human body, which is a measure of body-water turnover, varies with the season; studies of occupational radiation workers in a coastal region of Karnataka, India, show that the biological half-life of free-water tritium in winter is twice that in summer. If tritium exposure is suspected or known, drinking uncontaminated water will help replace the tritium in the body, and increasing sweating, urination or breathing can help the body expel water and thereby the tritium contained in it. However, care should be taken that neither dehydration nor a depletion of the body's electrolytes results, as the health consequences of those conditions (particularly in the short term) can be more severe than those of tritium exposure.
Environmental contamination
Tritium has leaked from 48 of 65 nuclear sites in the US. In one case, leaking water contained of tritium per liter, which is 375 times the current EPA limit for drinking water, and 28 times the World Health Organization's recommended limit. This is equivalent to or roughly 0.8 parts per trillion.
The US Nuclear Regulatory Commission states that in normal operation in 2003, 56 pressurized water reactors released of tritium (maximum: ; minimum: ; average: ) and 24 boiling water reactors released (maximum: ; minimum: 0 Ci; average: ), in liquid effluents. of tritium weigh about .
Regulatory limits
The legal limits for tritium in drinking water vary widely from country to country. Some figures are given below:
Tritium drinking water limits by country:

Country | Tritium limit (Bq/L) | Equivalent dose (μSv/year)
Australia | 76,103 | 1,000
Japan | 60,000 | –
Finland | 30,000 | –
World Health Organization | 10,000 | –
Switzerland | 10,000 | –
Russia | 7,700 | –
Canada (Ontario) | 7,000 | –
United States | 740 | –
Norway | 100 | –
The American limit results in a dose of 4.0 millirems (or 40 microsieverts in SI units) per year per EPA regulation 40CFR141, and is based on the outdated dose calculation standards of National Bureau of Standards Handbook 69, circa 1963. Four millirem per year is about 1.3% of the natural background radiation (~3 mSv). For comparison, the banana equivalent dose (BED) is set at 0.1 μSv, so the statutory limit in the US corresponds to 400 BED. Updated dose calculation standards, based on International Commission on Radiological Protection Report 30 and used in NRC Regulation 10CFR20, result in a dose of 0.9 millirem (9 μSv) per year at 740 Bq/L (20 nCi/L).
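The updated ~9 μSv/year figure can be reproduced with a simple intake model. A hedged sketch, assuming an adult drinks 2 L of water per day and using an ICRP-style ingestion dose coefficient for tritiated water of roughly 1.8 × 10⁻¹¹ Sv/Bq (both values are assumptions, not from this text):

```python
# Rough annual dose from drinking water at the US limit of 740 Bq/L.
limit_bq_per_l = 740
litres_per_day = 2.0              # assumed adult water intake
dose_coeff_sv_per_bq = 1.8e-11    # assumed ICRP-style ingestion coefficient for HTO

annual_intake_bq = limit_bq_per_l * litres_per_day * 365
annual_dose_sv = annual_intake_bq * dose_coeff_sv_per_bq
print(f"{annual_dose_sv * 1e6:.1f} uSv/yr")  # ~9.7 uSv/yr, close to the 9 uSv figure above
```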
Use
Radiometric assays in biology and medicine
Tritiation of drug candidates allows detailed analysis of their absorption and metabolism. Tritium has also been used for biological radiometric assays, in a process akin to radiocarbon dating. For example, [3H] retinyl acetate was traced through the bodies of rats.
Self-powered lighting
The beta particles from small amounts of tritium cause chemicals called phosphors to glow. This radioluminescence is used in self-powered lighting devices called betalights, which are used for night illumination of firearm sights, watches, exit signs, map lights, navigational compasses (such as current-use M-1950 U.S. military compasses), knives and a variety of other devices. Commercial demand for tritium is per year and the cost is or more.
Nuclear weapons
Tritium is an important component in nuclear weapons; it is used to enhance the efficiency and yield of fission bombs and the fission stages of hydrogen bombs in a process known as "boosting" as well as in external neutron initiators for such weapons.
Neutron initiator
These are devices incorporated in nuclear weapons which produce a pulse of neutrons when the bomb is detonated to initiate the fission reaction in the fissionable core (pit) of the bomb, after it is compressed to a critical mass by explosives. Actuated by an ultrafast switch like a krytron, a small particle accelerator drives ions of tritium and deuterium to energies above the 15 keV or so needed for deuterium-tritium fusion and directs them into a metal target where the tritium and deuterium are adsorbed as hydrides. High-energy fusion neutrons from the resulting fusion radiate in all directions. Some of these strike plutonium or uranium nuclei in the primary's pit, initiating a nuclear chain reaction. The quantity of neutrons produced is large in absolute numbers, allowing the pit to quickly achieve neutron levels that would otherwise need many more generations of chain reaction, though still small compared to the total number of nuclei in the pit.
Boosting
Before detonation, a few grams of tritium–deuterium gas are injected into the hollow "pit" of fissile material. The early stages of the fission chain reaction supply enough heat and compression to start deuterium–tritium fusion; then both fission and fusion proceed in parallel, the fission assisting the fusion by continuing heating and compression, and the fusion assisting the fission with highly energetic (14.1-MeV) neutrons. As the fission fuel depletes and also explodes outward, it falls below the density needed to stay critical by itself, but the fusion neutrons make the fission process progress faster and continue longer than it would without boosting. Increased yield comes overwhelmingly from the increased fission. The energy from the fusion itself is much smaller because the amount of fusion fuel is much smaller. Effects of boosting include:
increased yield (for the same amount of fission fuel, compared to unboosted)
the possibility of variable yield by varying the amount of fusion fuel
allowing the bomb to require a smaller amount of the very expensive fissile material
eliminating the risk of predetonation by nearby nuclear explosions
less stringent requirements on the implosion setup, allowing a smaller and lighter amount of high explosives to be used
The tritium in a warhead is continually undergoing radioactive decay, becoming unavailable for fusion. Also, its decay product, helium-3, absorbs neutrons. This can offset or reverse the intended effect of the tritium, which was to generate many free neutrons, if too much helium-3 has accumulated. Therefore, boosted bombs need fresh tritium periodically. The estimated quantity needed is per warhead. To maintain constant levels of tritium, about per warhead per year must be supplied to the bomb.
One mole of deuterium–tritium gas contains about of tritium and of deuterium. In comparison, the 20 moles of plutonium in a nuclear bomb consist of about of plutonium-239.
Tritium in hydrogen bomb secondaries
Since tritium undergoes radioactive decay, and is also difficult to confine physically, the much larger secondary charge of heavy hydrogen isotopes needed in a true hydrogen bomb uses solid lithium deuteride as its source of deuterium and tritium, producing the tritium in situ during secondary ignition.
During the detonation of the primary fission bomb stage in a thermonuclear weapon (Teller–Ulam staging), the sparkplug, a cylinder of U/Pu at the center of the fusion stage(s), begins to fission in a chain reaction, from excess neutrons channeled from the primary. The neutrons released from the fission of the sparkplug split lithium-6 into tritium and helium-4, while lithium-7 is split into helium-4, tritium, and one neutron. As these reactions occur, the fusion stage is compressed by photons from the primary and fission of the U or U/U jacket surrounding the fusion stage. Therefore, the fusion stage breeds its own tritium as the device detonates. In the extreme heat and pressure of the explosion, some of the tritium is then forced into fusion with deuterium, and that reaction releases even more neutrons.
Since this fusion process requires an extremely high temperature for ignition, and it produces fewer and less energetic neutrons (only fission, deuterium–tritium fusion, and ⁷Li splitting are net neutron producers), lithium deuteride is not used in boosted bombs, but rather in multi-stage hydrogen bombs.
Controlled nuclear fusion
Tritium is an important fuel for controlled nuclear fusion in both magnetic confinement and inertial confinement fusion reactor designs. The National Ignition Facility (NIF) uses deuterium–tritium fuel, and the experimental fusion reactor ITER will also do so. The deuterium–tritium reaction is favorable since it has the largest fusion cross section (about 5.0 barns) and reaches this maximum at the lowest energy (about 65 keV center-of-mass) of any potential fusion fuel. As tritium is very rare on Earth, concepts for fusion reactors often include the breeding of tritium: during the operation of envisioned breeder fusion reactors, breeding blankets, often containing lithium as part of ceramic pebbles, are subjected to neutron fluxes to generate tritium and so complete the fuel cycle.
The Tritium Systems Test Assembly (TSTA) was a facility at the Los Alamos National Laboratory dedicated to the development and demonstration of technologies required for fusion-relevant deuterium–tritium processing.
Electrical power source
Tritium can be used in a betavoltaic device to create an atomic battery to generate electricity.
Use as an oceanic transient tracer
Aside from chlorofluorocarbons, tritium can act as a transient tracer and can "outline" the biological, chemical, and physical paths throughout the world's oceans because of its evolving distribution. Tritium has thus been used as a tool to examine ocean circulation and ventilation and, for such purposes, is usually measured in tritium units, where 1 TU is defined as 1 tritium atom per 10¹⁸ hydrogen atoms, equal to about 0.118 Bq/liter. As noted earlier, nuclear tests, mainly in the Northern Hemisphere at high latitudes, throughout the late 1950s and early 1960s introduced large amounts of tritium into the atmosphere, especially the stratosphere. Before these nuclear tests, there were only about 3–4 kg of tritium on the Earth's surface; these amounts rose by 2–3 orders of magnitude during the post-test period. Some sources reported natural background levels being exceeded by about 1,000 TU in 1963 and 1964, and the isotope is used in the Northern Hemisphere to estimate the age of groundwater and construct hydrogeologic simulation models. Atmospheric levels at the height of weapons testing are estimated to have approached 1,000 TU, and pre-fallout levels of rainwater to have been between 5 and 10 TU. In 1963, Valentia Island, Ireland, recorded 2,000 TU in precipitation.
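The 0.118 Bq/L equivalence follows from the definition of the tritium unit. A minimal sketch, assuming the 12.32-year half-life:

```python
import math

AVOGADRO = 6.022e23
# Hydrogen atoms in one litre of water (1000 g, ~18.015 g/mol, 2 H per molecule):
h_atoms_per_litre = 1000 / 18.015 * AVOGADRO * 2
tritium_atoms = h_atoms_per_litre / 1e18      # 1 TU = 1 T atom per 1e18 H atoms

half_life_s = 12.32 * 365.25 * 24 * 3600      # assumed half-life, in seconds
decay_constant = math.log(2) / half_life_s
print(f"1 TU ~ {tritium_atoms * decay_constant:.3f} Bq/L")   # ~0.119 Bq/L
```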
North Atlantic Ocean
While in the stratosphere (in the post-test period), the tritium was oxidized into water molecules and was present in much of the rapidly produced rainfall, making tritium a prognostic tool for studying the evolution and structure of the water cycle, as well as the ventilation and formation of water masses in the North Atlantic.
Bomb-tritium data were used from the Transient Tracers in the Ocean (TTO) program in order to quantify the replenishment and overturning rates for deep water located in the North Atlantic.
Bomb-tritium also enters the deep ocean around the Antarctic. Most of the bomb-tritiated water (HTO) in the atmosphere can enter the ocean through the following processes:
precipitation
vapor exchange
river runoff
These processes make HTO a great tracer for time scales of up to a few decades.
Using the data from these processes for 1981, the 1-TU isosurface lies between 500 and 1,000 meters deep in the subtropical regions and then extends to 1,500–2,000 meters south of the Gulf Stream due to recirculation and ventilation in the upper portion of the Atlantic Ocean. To the north, the isosurface deepens and reaches the floor of the abyssal plain which is directly related to the ventilation of the ocean floor over 10–20 year time-scales.
Also evident in the Atlantic Ocean is the tritium profile near Bermuda between the late 1960s and late 1980s. There is a downward propagation of the tritium maximum from the surface (1960s) to 400 meters (1980s), which corresponds to a deepening rate of about 18 meters per year. There are also tritium increases at 1,500 m depth in the late 1970s and 2,500 m in the middle of the 1980s, both of which correspond to cooling events in the deep water and associated deep water ventilation.
From a study in 1991, the tritium profile was used as a tool for studying the mixing and spreading of newly formed North Atlantic Deep Water (NADW), corresponding to tritium increases to 4 TU. This NADW tends to spill over sills that divide the Norwegian Sea from the North Atlantic Ocean and then flows to the west and equatorward in deep boundary currents. This process was explained via the large-scale tritium distribution in the deep North Atlantic between 1981 and 1983. The sub-polar gyre tends to be freshened (ventilated) by the NADW and is directly related to the high tritium values (>1.5 TU). Also evident was the decrease in tritium in the deep western boundary current by a factor of 10 from the Labrador Sea to the Tropics, which is indicative of loss to ocean interior due to turbulent mixing and recirculation.
Pacific and Indian oceans
In a 1998 study, tritium concentrations in surface seawater and atmospheric water vapor (10 meters above the surface) were sampled at the following locations: the Sulu Sea, Fremantle Bay, the Bay of Bengal, Penang Bay, and the Strait of Malacca. Results indicated that the tritium concentration in surface seawater was highest at Fremantle Bay (about 0.40 Bq/liter), which could be attributed to the mixing of freshwater runoff from nearby land, since large amounts are found in coastal waters. Typically, lower concentrations were found between 35 and 45° south, and near the equator. Results also indicated that (in general) tritium had decreased over the years (up to 1997) due to the physical decay of bomb tritium in the Indian Ocean. As for water vapor, the tritium concentration was about one order of magnitude greater than the surface seawater concentrations (ranging from 0.46 to 1.15 Bq/L). Therefore, the water vapor tritium is not controlled by the surface seawater concentration; the high tritium concentrations in the vapor were instead concluded to be a direct consequence of the downward movement of natural tritium from the stratosphere to the troposphere (accordingly, the ocean air showed a dependence on latitude).
In the North Pacific Ocean, the tritium (introduced as bomb tritium in the Northern Hemisphere) spread in three dimensions. There were subsurface maxima in the middle and low latitude regions, which is indicative of lateral mixing (advection) and diffusion processes along lines of constant potential density (isopycnals) in the upper ocean. Some of these maxima even correlate well with salinity extrema. In order to obtain the structure for ocean circulation, the tritium concentrations were mapped on 3 surfaces of constant potential density (23.90, 26.02, and 26.81). Results indicated that the tritium was well-mixed (at 6 to 7 TU) on the 26.81 isopycnal in the subarctic cyclonic gyre and there appeared to be a slow exchange of tritium (relative to shallower isopycnals) between this gyre and the anticyclonic gyre to the south; also, the tritium on the 23.90 and 26.02 surfaces appeared to be exchanged at a slower rate between the central gyre of the North Pacific and the equatorial regions.
The depth penetration of bomb tritium can be separated into three distinct layers:
Layer 1 is the shallowest layer and includes the deepest layer ventilated in winter; it has received tritium via radioactive fallout and lost some via advection and/or vertical diffusion, and contains about 28% of the total amount of tritium.
Layer 2 is below the first layer but above the 26.81 isopycnal and is no longer part of the mixed layer. Its two sources are diffusion downward from the mixed layer and lateral expansion of outcropping strata (poleward); it contains about 58% of the total tritium.
Layer 3 is representative of waters that are deeper than the outcrop isopycnal and can only receive tritium via vertical diffusion; it contains the remaining 14% of the total tritium.
Mississippi River system
Trace amounts of radioactive materials from atomic weapons testing settled throughout the Mississippi River System. Tritium concentrations have been used to understand the residence times of continental hydrologic systems such as lakes, streams, and rivers.
In a 2004 study, several rivers were taken into account during the examination of tritium concentrations (starting in the 1960s) throughout the Mississippi River Basin: the Ohio River (the largest input to the Mississippi River flow), the Missouri River, and the Arkansas River. The highest tritium concentrations were found in 1963 at locations throughout these rivers; the peak correlates with the implementation of the US–Soviet atmospheric test ban treaty in 1962. The overall highest concentrations occurred in the Missouri River (1963) and were greater than 1,200 TU, while the lowest concentrations were found in the Arkansas River (never greater than 850 TU, and less than 10 TU by the mid-1980s).
As for the mass flux of tritium through the main stem of the Mississippi River into the Gulf of Mexico, data indicated that approximately 780 grams of tritium flowed out of the river and into the Gulf between 1961 and 1997, an average of 21.7 grams per year (about 7.7 PBq/yr). Current fluxes through the Mississippi River are 1 to 2 grams per year, as opposed to pre-bomb fluxes of roughly 0.4 grams per year.
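The two flux figures above are mutually consistent: grams of tritium convert to becquerels through tritium's specific activity, which follows from its half-life and molar mass. A rough check, assuming standard physical constants not quoted in the text:

```python
import math

AVOGADRO = 6.022e23                         # atoms per mole
MOLAR_MASS_TRITIUM = 3.016                  # g/mol
HALF_LIFE_SECONDS = 12.32 * 365.25 * 86400  # 12.32-year half-life

# Specific activity (decays per second per gram) = lambda * N_A / M
decay_constant = math.log(2) / HALF_LIFE_SECONDS
specific_activity = decay_constant * AVOGADRO / MOLAR_MASS_TRITIUM  # ~3.6e14 Bq/g

flux_pbq_per_year = 21.7 * specific_activity / 1e15
print(f"{flux_pbq_per_year:.1f} PBq/yr")  # ~7.7 PBq/yr, matching the text
```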
| Physical sciences | s-Block | Chemistry |
31307 | https://en.wikipedia.org/wiki/Tau%20Ceti | Tau Ceti | Tau Ceti, Latinized from τ Ceti, is a single star in the constellation Cetus that is spectrally similar to the Sun, although it has only about 78% of the Sun's mass. At a distance of just under from the Solar System, it is a relatively nearby star and the closest solitary G-class star. The star appears stable, with little stellar variation, and is metal-deficient (low in elements other than hydrogen and helium) relative to the Sun.
It can be seen with the unaided eye with an apparent magnitude of 3.5. As seen from Tau Ceti, the Sun would be in the northern hemisphere constellation Boötes with an apparent magnitude of about 2.6.
Observations have detected more than ten times as much dust surrounding Tau Ceti as is present in the Solar System. Since December 2012, there has been evidence of at least four planets—all likely super-Earths—orbiting Tau Ceti, two of which are potentially in the habitable zone. There is evidence of up to four additional unconfirmed planets, one of which would be a Jovian planet between 3 and 20 AU from the star. Because of its debris disk, any planet orbiting Tau Ceti would face far more impact events than present-day Earth. These planetary candidates have recently been contested, and recent findings about the stellar inclination cast doubt on the terrestrial nature of these worlds. Despite this hurdle to habitability, the star's solar-analog (Sun-like) characteristics have led to widespread interest. Given its stability, similarity and relative proximity to the Sun, Tau Ceti is consistently listed as a target for the search for extraterrestrial intelligence (SETI).
Name
The name "Tau Ceti" is the Bayer designation for this star, established in 1603 as part of German celestial cartographer Johann Bayer's Uranometria star catalogue: it is "number T" in Bayer's sequence of constellation Cetus. In the catalogue of stars in the Calendarium of Al Achsasi al Mouakket, written at Cairo about 1650, this star was designated Thālith al Naʽāmāt (ثالث النعامات - thālith al-naʽāmāt), which was translated into Latin as Tertia Struthionum, meaning the third of the ostriches. This star, along with η Cet (Deneb Algenubi), θ Cet (Thanih Al Naamat), ζ Cet (Baten Kaitos), and υ Cet, were Al Naʽāmāt (النعامات), the Hen Ostriches.
In Chinese astronomy, the "Square Celestial Granary" () refers to an asterism consisting of τ Ceti, ι Ceti, η Ceti, ζ Ceti, θ Ceti and 57 Ceti. Consequently, the Chinese name for τ Ceti itself is "the Fifth Star of Square Celestial Granary" ().
Motion
The proper motion of a star is its rate of movement across the celestial sphere, determined by comparing its position relative to more distant background objects. Tau Ceti is considered to be a high-proper-motion star, although it only has an annual traverse of just under 2 arc seconds. Thus it will require about 2000 years before the location of this star shifts by more than a degree. A high proper motion is an indicator of closeness to the Sun. Nearby stars can traverse an angle of arc across the sky more rapidly than the distant background stars and are good candidates for parallax studies. In the case of Tau Ceti, the parallax measurements indicate a distance of . This makes it one of the closest star systems to the Sun and the next-closest spectral class-G star after Alpha Centauri A.
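The 2000-year figure follows directly from the quoted proper motion. A one-line check, taking "just under 2 arc seconds" as roughly 1.9 arcsec/yr (an illustrative reading, not a catalogued value):

```python
# Years for the star's position to shift by one degree (3600 arcseconds).
proper_motion = 1.9  # arcsec/yr; illustrative reading of "just under 2"
print(f"{3600 / proper_motion:.0f} years")  # ~1900, i.e. about 2000 years
```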
The radial velocity of a star is the component of its motion that is toward or away from the Sun. Unlike proper motion, a star's radial velocity cannot be directly observed, but can be determined by measuring its spectrum. Due to the Doppler shift, the absorption lines in the spectrum of a star will be shifted slightly toward the red (or longer wavelengths) if the star is moving away from the observer, or toward blue (or shorter wavelengths) when it moves toward the observer. In the case of Tau Ceti, the radial velocity is about −17 km/s, with the negative value indicating that it is moving toward the Sun. The star will make its closest approach to the Sun in about 43,000 years, when it comes to within .
The distance to Tau Ceti, along with its proper motion and radial velocity, together give the motion of the star through space. The space velocity relative to the Sun is . This result can then be used to compute an orbital path of Tau Ceti through the Milky Way. It has a mean galactocentric distance of and an orbital eccentricity of 0.22.
Physical properties
The Tau Ceti system is believed to have only one stellar component. A dim optical companion has been observed with magnitude 13.1. As of 2000, it was distant from the primary. It may be gravitationally bound, but it is considered more likely to be a line-of-sight coincidence.
Most of what is known about the physical properties of Tau Ceti and its system has been determined through spectroscopic measurements. By comparing the spectrum to computed models of stellar evolution, the age, mass, radius and luminosity of Tau Ceti can be estimated. However, using an astronomical interferometer, measurements of the radius of the star can be made directly to an accuracy of 0.5%. Through such means, the radius of Tau Ceti has been measured to be of the solar radius. This is about the size that is expected for a star with somewhat lower mass than the Sun.
Rotation
The rotation period for Tau Ceti was measured by periodic variations in the classic H and K absorption lines of singly ionized calcium (Ca II). These lines are closely associated with surface magnetic activity, so the period of variation measures the time required for the activity sites to complete a full rotation about the star. By this means the rotation period for Tau Ceti is estimated to be . Due to the Doppler effect, the rotation rate of a star affects the width of the absorption lines in the spectrum (light from the side of the star moving away from the observer is shifted to longer wavelengths; light from the side moving towards the observer is shifted to shorter wavelengths). By analyzing the width of these lines, the rotational velocity of a star can be estimated. The projected rotation velocity for Tau Ceti is $v_\mathrm{eq}\sin i$, where $v_\mathrm{eq}$ is the velocity at the equator and $i$ is the inclination angle of the rotation axis to the line of sight. For a typical G8 star, the rotation velocity is about . The relatively low rotational velocity measurements may indicate that Tau Ceti is being viewed from nearly the direction of its pole.
More recently, a 2023 study estimated a rotation period of and a $v_\mathrm{eq}\sin i$ of , corresponding to a nearly pole-on inclination of .
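Because only the projected velocity is observable, the equatorial velocity scales as $1/\sin i$. A brief sketch with placeholder numbers (the measured values are not reproduced above), illustrating why a near-pole-on inclination implies a much larger true rotation speed:

```python
import math

v_sin_i = 1.0        # km/s; placeholder projected rotation velocity
inclination = 7.0    # degrees; a near-pole-on value of the kind cited above

v_eq = v_sin_i / math.sin(math.radians(inclination))
print(f"v_eq = {v_eq:.1f} km/s")  # ~8x the projected value at i = 7 degrees
```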
Metallicity
The chemical composition of a star provides important clues to its evolutionary history, including the age at which it formed. The interstellar medium of dust and gas from which stars form is primarily composed of hydrogen and helium with trace amounts of heavier elements. As nearby stars continually evolve and die, they seed the interstellar medium with an increasing portion of heavier elements. Thus younger stars tend to have a higher portion of heavy elements in their atmospheres than do older stars. These heavy elements are termed "metals" by astronomers, and the portion of heavy elements is the metallicity. The metallicity of a star is given in terms of the ratio of iron (Fe), an easily observed heavy element, to hydrogen, expressed as the logarithm of the star's relative iron abundance compared to the Sun's, written [Fe/H]. In the case of Tau Ceti, the measured atmospheric metallicity corresponds to about a third of the solar iron abundance; past measurements have varied from −0.13 to −0.60 dex.
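Since [Fe/H] is a base-10 logarithm of the iron abundance relative to the Sun, dex values convert to linear fractions as 10**[Fe/H]. A small sketch using the range quoted above:

```python
def feh_to_fraction(feh_dex: float) -> float:
    """Linear iron abundance relative to the Sun from [Fe/H] in dex."""
    return 10.0 ** feh_dex

# About a third of the solar abundance corresponds to roughly -0.5 dex:
print(f"{feh_to_fraction(-0.5):.2f}")       # ~0.32 of solar
for feh in (-0.13, -0.60):                  # the historical range above
    print(f"[Fe/H] = {feh}: {feh_to_fraction(feh):.2f} of solar")
```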
This lower abundance of iron indicates that Tau Ceti is almost certainly older than the Sun. Its age had previously been estimated to be , but is now thought to be around . This compares with for the Sun. However, age estimates for Tau Ceti can range from 4.4 to , depending on the model adopted.
Besides rotation, another factor that can widen the absorption features in the spectrum of a star is pressure broadening. The presence of nearby particles affects the radiation emitted by an individual particle, so the line width depends on the surface pressure of the star, which in turn is determined by the temperature and surface gravity. This technique was used to determine the surface gravity of Tau Ceti. Its $\log g$, the logarithm of the star's surface gravity, is about 4.4, very close to the value for the Sun.
Luminosity and variability
The luminosity of Tau Ceti is equal to only 55% of the Sun's luminosity. A terrestrial planet would need to orbit this star at a distance of about to match the solar insolation level of Earth. This is approximately the same as the average distance between Venus and the Sun.
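The Earth-equivalent insolation distance follows from the inverse-square law: the flux matches Earth's where $d = \sqrt{L/L_\odot}$ AU. A quick check with the 55% luminosity quoted above:

```python
import math

luminosity = 0.55             # Tau Ceti's luminosity in solar units (text above)
d_au = math.sqrt(luminosity)  # distance receiving Earth-like flux
print(f"{d_au:.2f} AU")       # ~0.74 AU, comparable to Venus's ~0.72 AU orbit
```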
The chromosphere of Tau Ceti—the portion of a star's atmosphere just above the light-emitting photosphere—currently displays little or no magnetic activity, indicating a stable star. One 9-year study of temperature, granulation, and the chromosphere showed no systematic variations; Ca II emission in the H and K lines shows a possible 11-year cycle, but this is weak relative to the Sun's. Alternatively, it has been suggested that the star could be in a low-activity state analogous to a Maunder Minimum—a historical period, associated with the Little Ice Age in Europe, when sunspots became exceedingly rare on the Sun's surface. Spectral line profiles of Tau Ceti are extremely narrow, indicating low turbulence and a low observed rotation rate. The star's asteroseismological oscillations have an amplitude about half that of the Sun's and a lower mode lifetime.
Planetary system
Principal factors driving research interest in Tau Ceti are its proximity, its Sun-like characteristics, and the implications for possible life on its planets. For categorization purposes, Hall and Lockwood report that "the terms 'solarlike star', 'solar analog', and 'solar twin' [are] progressively restrictive descriptions". Tau Ceti fits the second category, given its similar mass and low variability but relative lack of metals. The similarities have inspired popular-culture references for decades, as well as scientific examination. In 1988, radial-velocity observations ruled out any periodic variations attributable to massive planets around Tau Ceti inside Jupiter-like distances. Ever more precise measurements continued to rule out such planets, at least until December 2012. The velocity precision reached was about 11 m/s measured over a 5-year time span. This result excludes hot Jupiters and probably excludes any planets with minimum mass greater than or equal to Jupiter's mass and with orbital periods less than 15 years. In addition, a survey of nearby stars by the Hubble Space Telescope's Wide Field and Planetary Camera was completed in 1999, including a search for faint companions to Tau Ceti; none were discovered to the limits of the telescope's resolving power.
However, these searches only excluded larger brown dwarf bodies and closer orbiting giant planets, so smaller, Earth-like planets in orbit around the star, like those discovered in 2012, were not precluded. If hot Jupiters were to exist in close orbit, they would likely disrupt the star's habitable zone; their exclusion was thus considered positive for the possibility of Earth-like planets. General research has shown a positive correlation between the presence of planets and a relatively high-metallicity parent star, suggesting that stars with lower metallicity such as Tau Ceti have a lower chance of having planets.
Discovery
On December 19, 2012, evidence was presented that suggested a system of five planets orbiting Tau Ceti. The planets' estimated minimum masses were between 2 and 6 Earth masses, with orbital periods ranging from 14 to 640 days. One of them, Tau Ceti e, appears to orbit about half as far from Tau Ceti as Earth does from the Sun. With Tau Ceti's luminosity of 52% that of the Sun and a distance from the star of 0.552 AU, the planet would receive 1.71 times as much stellar radiation as Earth does, slightly less than Venus with 1.91 times Earth's. Nevertheless, some research places it within the star's habitable zone. The Planetary Habitability Laboratory has estimated that Tau Ceti f, which receives 28.5% as much starlight as Earth, would be within the star's habitable zone, albeit narrowly.
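The 1.71× figure can be reproduced from the inverse-square law with the numbers given above:

```python
# Relative stellar flux on Tau Ceti e: (L / L_sun) / d_AU**2.
luminosity = 0.52  # solar units, the value used in the study above
distance = 0.552   # AU

print(f"{luminosity / distance**2:.2f}")  # ~1.71 times Earth's insolation
```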
New results were published in August 2017. They confirmed Tau Ceti e and f as candidates but failed to consistently detect planets b (which may be a false negative), c (whose weakly defined apparent signal was correlated to stellar rotation), and d (which did not show up in all data sets). Instead, they found two new planetary candidates, g and h, with orbits of 20 and 49 days. The signals detected from the candidate planets have radial velocities as low as 30 cm/s, and the experimental method used in their detection, as it was applied to HARPS, could in theory have detected down to around 20 cm/s. The updated 4-planet model is dynamically packed and potentially stable for billions of years.
However, with further refinements, even more candidate planets have been detected. In 2019, a paper published in Astronomy & Astrophysics suggested that Tau Ceti could have a Jupiter or super-Jupiter based on a tangential astrometric velocity of around 11.3 m/s. The exact size and position of this conjectured object have not been determined, though it would be at most 5 Jupiter masses if it orbits between 3 and 20 AU. A 2020 Astronomical Journal study by astronomers Jamie Dietrich and Daniel Apai analyzed the orbital stability of the known planets and, drawing on statistical patterns identified from hundreds of other planetary systems, explored the orbits in which additional, yet-undetected planets are most likely to be present. This analysis predicted three planet candidates at orbits coinciding with those of candidates b, c, and d. The close match between the independently predicted periods and the periods of the three planet candidates previously identified in radial-velocity data supports the genuine planetary nature of candidates b, c, and d. Furthermore, the study also predicts at least one yet-undetected planet between planets e and f, i.e., within the habitable zone. This predicted exoplanet is identified as PxP-4.
Since Tau Ceti is likely oriented nearly pole-on to Earth (as indicated by its rotation), if its planets share this alignment and have nearly face-on orbits, their true masses would be less like Earth's and closer to those of Neptune, Saturn, or Jupiter. For example, were Tau Ceti f's orbit inclined 70 degrees from face-on, its mass would be Earth masses, making it a middle-to-low-end super-Earth. However, these scenarios are not necessarily true; since Tau Ceti's debris disk has an inclination of , the planets' orbits could be similarly inclined. If the inclinations of the debris disk and of f's orbit were assumed to be equal, f would be between and Earth masses, making it slightly more likely to be a mini-Neptune. Moreover, the lower the inclination of the planetary orbits, the less stable they tend to be over a given time period, because the planets would have greater masses and therefore stronger gravitational pulls that disturb the orbital stability of neighbouring planets. For example, if, as estimated in the Korolik et al. (2023) study, Tau Ceti has a pole-on inclination of around 7 degrees, and the postulated planets do as well, then those planets' orbits would verge on instability within just a 10-million-year timeframe, making it extremely unlikely they would have survived for the billions of years that make up the lifetime of the star system.
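The mass inflation described above follows from the radial-velocity relation $m = m_\mathrm{min}/\sin i$. A sketch using the 3.93 Earth-mass minimum quoted below for candidates e and f; the 35-degree disk-like tilt is purely illustrative, since the measured disk inclination is not reproduced in this text:

```python
import math

m_min = 3.93  # Earth masses; minimum (m sin i) mass quoted for Tau Ceti e/f

for i_deg in (90, 35, 7):  # edge-on, an illustrative disk-like tilt, pole-on
    true_mass = m_min / math.sin(math.radians(i_deg))
    print(f"i = {i_deg:2d} deg -> {true_mass:5.1f} Earth masses")
# At i = 7 deg the candidates exceed 30 Earth masses, i.e. Neptune-class.
```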
Tau Ceti e
Tau Ceti e is a candidate planet orbiting Tau Ceti, first proposed in 2012 from statistical analyses of the star's radial-velocity variations obtained using HIRES, AAPS, and HARPS. Its possible properties were refined in 2017: if confirmed, it would orbit at a distance of 0.552 AU (between the orbits of Venus and Mercury in the Solar System) with an orbital period of 168 days, and would have a minimum mass of 3.93 Earth masses. If Tau Ceti e possessed an Earth-like atmosphere, the surface temperature would be around . Based upon the incident flux upon the planet, a study by Güdel et al. (2014) speculated that the planet may lie outside the habitable zone and be closer to a Venus-like world.
Tau Ceti f
Tau Ceti f is a candidate planet orbiting Tau Ceti that was proposed in 2012 by statistical analyses of the star's variations in radial velocity, and also recovered by further analysis in 2017. It is of interest because its orbit places it in Tau Ceti's extended habitable zone. However, a 2015 study implies that it would have been in the temperate zone for less than one billion years, so there may not be a detectable biosignature.
Few properties of the planet are known other than its orbit and mass. It orbits Tau Ceti at a distance of 1.35 AU (near Mars's orbit in the Solar System) with an orbital period of 642 days and has a minimum mass of 3.93 Earth masses.
However, a reanalysis of the data in 2021 provided an in-depth study of the HARPS spectrograph systematics, showing that the 600-day signal was likely a spurious combination of instrumental systematics with a potential 1000-day yet unknown signal.
Debris disk
In 2004, a team of UK astronomers led by Jane Greaves discovered that Tau Ceti has more than ten times the amount of cometary and asteroidal material orbiting it than does the Sun. This was determined by measuring the disk of cold dust orbiting the star produced by collisions between such small bodies. This result puts a damper on the possibility of complex life in the system, because any planets would suffer from large impact events roughly ten times more frequently than present day Earth. Greaves noted at the time of her research that "it is likely that [any planets] will experience constant bombardment from asteroids of the kind believed to have wiped out the dinosaurs". Such bombardments would inhibit the development of biodiversity between impacts. However, it is possible that a large Jupiter-sized gas giant (such as the proposed planet "i") could deflect comets and asteroids.
The debris disk was discovered by measuring the amount of radiation emitted by the system in the far infrared portion of the spectrum. The disk forms a symmetric feature that is centered on the star, and its outer radius averages . The lack of infrared radiation from the warmer parts of the disk near Tau Ceti implies an inner cut-off at a radius of . By comparison, the Solar System's Kuiper belt extends from 30 to . To be maintained over a long period of time, this ring of dust must be constantly replenished through collisions by larger bodies. The bulk of the disk appears to be orbiting Tau Ceti at a distance of 35–, well outside the orbit of the habitable zone. At this distance, the dust belt may be analogous to the Kuiper belt that lies outside the orbit of Neptune in the Solar System.
Tau Ceti shows that stars need not lose large disks as they age, and such a thick belt may not be uncommon among Sun-like stars. Tau Ceti's belt is only 1/20 as dense as the belt around its young neighbor, Epsilon Eridani. The relative lack of debris around the Sun may be the unusual case: one research-team member suggests the Sun may have passed close to another star early in its history and had most of its comets and asteroids stripped away. Stars with large debris disks have changed the way astronomers think about planet formation because debris disk stars, where dust is continually generated by collisions, appear to form planets readily.
Habitability
Tau Ceti's habitable zone—the locations where liquid water could be present on an Earth-sized planet—spans a radius of 0.55–1.16 AU, where 1 AU is the average distance from the Earth to the Sun. Primitive life on Tau Ceti's planets may reveal itself through an analysis of atmospheric composition via spectroscopy, if the composition is unlikely to be abiotic, just as oxygen on Earth is indicative of life.
The most optimistic search project to date was Project Ozma, which was intended to "search for extraterrestrial intelligence" (SETI) by examining selected stars for indications of artificial radio signals. It was run by the astronomer Frank Drake, who selected Tau Ceti and Epsilon Eridani as the initial targets. Both are located near the Solar System and are physically similar to the Sun. No artificial signals were found despite 200 hours of observations. Subsequent radio searches of this star system have turned up negative.
This lack of results has not dampened interest in observing the Tau Ceti system for biosignatures. In 2002, astronomers Margaret Turnbull and Jill Tarter developed the Catalog of Nearby Habitable Systems (HabCat) under the auspices of Project Phoenix, another SETI endeavour. The list contained more than theoretically habitable systems, approximately 10% of the original sample. The next year, Turnbull further refined the list to the 30 most promising systems out of those within 100 light-years of the Sun, including Tau Ceti; this will form part of the basis of radio searches with the Allen Telescope Array. She chose Tau Ceti for a final shortlist of just five stars suitable for searches by the (now cancelled) Terrestrial Planet Finder telescope system, commenting that "these are places I'd want to live if God were to put our planet around another star".
| Physical sciences | Notable stars | Astronomy |
31349 | https://en.wikipedia.org/wiki/Tyrosine | Tyrosine | -Tyrosine or tyrosine (symbol Tyr or Y) or 4-hydroxyphenylalanine is one of the 20 standard amino acids that are used by cells to synthesize proteins. It is a conditionally essential amino acid with a polar side group. The word "tyrosine" is from the Greek tyrós, meaning cheese, as it was first discovered in 1846 by German chemist Justus von Liebig in the protein casein from cheese. It is called tyrosyl when referred to as a functional group or side chain. While tyrosine is generally classified as a hydrophobic amino acid, it is more hydrophilic than phenylalanine. It is encoded by the codons UAC and UAU in messenger RNA.
The one-letter symbol Y was assigned to tyrosine for being alphabetically nearest of those letters available. Note that T was assigned to the structurally simpler threonine, U was avoided for its similarity with V for valine, W was assigned to tryptophan, while X was reserved for undetermined or atypical amino acids. The mnemonic tYrosine was also proposed.
Functions
Aside from being a proteinogenic amino acid, tyrosine has a special role by virtue of the phenol functionality. Its hydroxy group is able to form the ester linkage, with phosphate in particular. Phosphate groups are transferred to tyrosine residues by way of protein kinases. This is one of the post-translational modifications. Phosphorylated tyrosine occurs in proteins that are part of signal transduction processes.
Similar functionality is also present in serine and threonine, whose side chains bear a hydroxy group but are alcohols rather than phenols. Phosphorylation of the hydroxy moieties of these three amino acids (tyrosine included) creates a negative charge at the modified residue that is greater than the negative charge of aspartic and glutamic acids, otherwise the only negatively charged residues. Through phosphotyrosine, phosphoserine and phosphothreonine, phosphorylated proteins acquire these properties, which are useful for more reliable protein–protein interactions.
Binding sites for a signalling phosphoprotein may be diverse in their chemical structure.
Phosphorylation of the hydroxyl group can change the activity of the target protein, or may form part of a signaling cascade via SH2 domain binding.
A tyrosine residue also plays an important role in photosynthesis. In chloroplasts (photosystem II), it acts as an electron donor in the reduction of oxidized chlorophyll. In this process, it loses the hydrogen atom of its phenolic OH group, forming a tyrosyl radical. This radical is subsequently reduced in photosystem II by the cluster of four manganese ions at its core.
Dietary requirements and sources
The Dietary Reference Intake for tyrosine is usually estimated together with that of phenylalanine. The figure varies with the estimation method; however, the ideal proportion of these two amino acids is considered to be 60:40 (phenylalanine:tyrosine), since this matches their composition in the human body.
Tyrosine, which can also be synthesized in the body from phenylalanine, is found in many high-protein food products such as meat, fish, cheese, cottage cheese, milk, yogurt, peanuts, almonds, pumpkin seeds, sesame seeds, soy protein and lima beans. For example, the white of an egg has about 250 mg per egg, while beef, lamb, pork, tuna, salmon, chicken, and turkey contain about 500–1000 mg per portion.
Biosynthesis
In plants and most microorganisms, tyrosine is produced via prephenate, an intermediate on the shikimate pathway. Prephenate is oxidatively decarboxylated with retention of the hydroxyl group to give p-hydroxyphenylpyruvate, which is transaminated using glutamate as the nitrogen source to give tyrosine and α-ketoglutarate.
Mammals synthesize tyrosine from the essential amino acid phenylalanine (Phe), which is derived from food. The conversion of Phe to Tyr is catalyzed by the enzyme phenylalanine hydroxylase, a monooxygenase. This enzyme catalyzes the addition of a hydroxyl group to the para position of the six-carbon aromatic ring of phenylalanine, converting it into tyrosine.
Metabolism
Phosphorylation and sulfation
Some of the tyrosine residues can be tagged (at the hydroxyl group) with a phosphate group (phosphorylated) by protein kinases. In its phosphorylated form, tyrosine is called phosphotyrosine. Tyrosine phosphorylation is considered to be one of the key steps in signal transduction and regulation of enzymatic activity. Phosphotyrosine can be detected through specific antibodies. Tyrosine residues may also be modified by the addition of a sulfate group, a process known as tyrosine sulfation. Tyrosine sulfation is catalyzed by tyrosylprotein sulfotransferase (TPST). Like the phosphotyrosine antibodies mentioned above, antibodies have recently been described that specifically detect sulfotyrosine.
Precursor to neurotransmitters and hormones
In dopaminergic cells in the brain, tyrosine is converted to L-DOPA by the enzyme tyrosine hydroxylase (TH). TH is the rate-limiting enzyme involved in the synthesis of the neurotransmitter dopamine. Dopamine can then be converted into other catecholamines, such as norepinephrine (noradrenaline) and epinephrine (adrenaline).
The thyroid hormones triiodothyronine (T3) and thyroxine (T4) in the colloid of the thyroid are also derived from tyrosine.
Precursor to other compounds
The latex of Papaver somniferum, the opium poppy, has been shown to convert tyrosine into the alkaloid morphine, and the biosynthetic pathway from tyrosine to morphine has been established by using carbon-14 radio-labelled tyrosine to trace the in-vivo synthetic route. Tyrosine ammonia lyase (TAL) is an enzyme in the natural phenols biosynthesis pathway. It transforms L-tyrosine into p-coumaric acid. Tyrosine is also the precursor to the pigment melanin. Tyrosine (or its precursor phenylalanine) is needed to synthesize the benzoquinone structure which forms part of coenzyme Q10.
Degradation
The decomposition of L-tyrosine (syn. para-hydroxyphenylalanine) begins with an α-ketoglutarate-dependent transamination by tyrosine transaminase to para-hydroxyphenylpyruvate. The positional descriptor para, abbreviated p, means that the hydroxyl group and the side chain on the phenyl ring are across from each other (see the illustration below).
The next oxidation step, catalyzed by p-hydroxyphenylpyruvate dioxygenase, splits off CO2 and yields homogentisate (2,5-dihydroxyphenyl-1-acetate). In order to split the aromatic ring of homogentisate, a further dioxygenase, homogentisate 1,2-dioxygenase, is required. Thereby, through the incorporation of a further O2 molecule, maleylacetoacetate is created.
Fumarylacetoacetate is created by maleylacetoacetate cis-trans-isomerase through rotation of the carboxyl group created from the hydroxyl group via oxidation. This cis-trans-isomerase contains glutathione as a coenzyme. Fumarylacetoacetate is finally split by the enzyme fumarylacetoacetate hydrolase through the addition of a water molecule.
Thereby fumarate (also a metabolite of the citric acid cycle) and acetoacetate (3-ketobutyrate) are liberated. Acetoacetate is a ketone body, which is activated with succinyl-CoA and can thereafter be converted into acetyl-CoA, which in turn can be oxidized by the citric acid cycle or used for fatty acid synthesis.
Phloretic acid is also a urinary metabolite of tyrosine in rats.
Ortho- and meta-tyrosine
Three structural isomers of L-tyrosine are known. In addition to the common amino acid L-tyrosine, which is the para isomer (para-tyr, p-tyr or 4-hydroxyphenylalanine), there are two additional regioisomers, namely meta-tyrosine (also known as L-m-tyrosine or m-tyr) and ortho-tyrosine (o-tyr or 2-hydroxyphenylalanine), that occur in nature. The m-tyr and o-tyr isomers, which are rare, arise through non-enzymatic free-radical hydroxylation of phenylalanine under conditions of oxidative stress.
Medical use
Tyrosine is a precursor to neurotransmitters and increases plasma neurotransmitter levels (particularly dopamine and norepinephrine), but has little if any effect on mood in normal subjects.
A 2015 systematic review found that "tyrosine loading acutely counteracts decrements in working memory and information processing that are induced by demanding situational conditions such as extreme weather or cognitive load" and therefore "tyrosine may benefit healthy individuals exposed to demanding situational conditions".
Industrial synthesis
L-tyrosine is used in pharmaceuticals, dietary supplements, and food additives. Two methods were formerly used to manufacture L-tyrosine. The first involves the extraction of the desired amino acid from protein hydrolysates using a chemical approach. The second utilizes enzymatic synthesis from phenolics, pyruvate, and ammonia through the use of tyrosine phenol-lyase. Advances in genetic engineering and the advent of industrial fermentation have shifted the synthesis of L-tyrosine to the use of engineered strains of E. coli.
| Biology and health sciences | Amino acids | Biology |
31373 | https://en.wikipedia.org/wiki/Tor%20%28rock%20formation%29 | Tor (rock formation) | A tor, which is also known by geomorphologists as either a castle koppie or kopje, is a large, free-standing rock outcrop that rises abruptly from the surrounding smooth and gentle slopes of a rounded hill summit or ridge crest. In the South West of England, the term is commonly also used for the hills themselves – particularly the high points of Dartmoor in Devon and Bodmin Moor in Cornwall.
Etymology
Although English topographical names often have a Celtic etymology, the Oxford English Dictionary lists no cognates to the Old English word in either the Breton or Cornish languages (the Scottish Gaelic is thought to derive from the Old English word). It is therefore accepted that the English word Tor derives from the Old Welsh word meaning a cluster or heap.
Formation
Tors are landforms created by the erosion and weathering of rock; most commonly granites, but also schists, dacites, dolerites, ignimbrites, coarse sandstones and others. Tors are mostly less than high. Many hypotheses have been proposed to explain their origin and this remains a topic of discussion among geologists and geomorphologists, and physical geographers. It is considered likely that tors were created by geomorphic processes that differed widely in type and duration according to regional and local differences in climate and rock types.
For example, the Dartmoor granite was emplaced around 280 million years ago. When the cover rocks eroded away, it was exposed to chemical and physical weathering processes. Where joints are closely spaced, the large crystals in the granite readily disintegrate to form a sandy regolith known locally as growan. This is readily stripped off by solifluction or surface wash when not protected by vegetation, notably during prolonged cold phases of the Quaternary ice ages – periglaciation.
Where joints happen to be unusually widely spaced, core blocks can survive and remain above the weathering surface, developing into tors. These can be monolithic, as at Haytor and Blackingstone Rock, but are more usually subdivided into stacks, often arranged in avenues. Each stack may include several tiers or pillows, which may become separated: rocking pillows are called logan stones. These stacks are vulnerable to frost action and often collapse leaving trails of blocks down the slopes called clitter or clatter. Weathering has also given rise to circular "rock basins" formed by the accumulation of water and repeated freezing and thawing. An example is found at Kes Tor on Dartmoor.
Dating of 28 tors on Dartmoor showed that most are surprisingly young, with less than 100,000 years of surface exposure and none over 200,000 years old. They probably emerged at the start of the last major ice age (the Devensian). By contrast, in the Scottish Cairngorms, the other classic concentration of granite tors in Britain, the oldest tors dated have between 200,000 and 675,000 years of exposure, and even glacially modified ones date to 100,000–150,000 years.
| Physical sciences | Other erosional landforms | Earth science |
31383 | https://en.wikipedia.org/wiki/Osmotic%20pressure | Osmotic pressure | Osmotic pressure is the minimum pressure which needs to be applied to a solution to prevent the inward flow of its pure solvent across a semipermeable membrane.
It is also defined as the measure of the tendency of a solution to take in its pure solvent by osmosis. Potential osmotic pressure is the maximum osmotic pressure that could develop in a solution if it were separated from its pure solvent by a semipermeable membrane.
Osmosis occurs when two solutions containing different concentrations of solute are separated by a selectively permeable membrane. Solvent molecules pass preferentially through the membrane from the low-concentration solution to the solution with higher solute concentration. The transfer of solvent molecules will continue until equilibrium is attained.
Theory and measurement
Jacobus van 't Hoff found a quantitative relationship between osmotic pressure and solute concentration, expressed in the following equation:

$$\Pi = icRT$$

where $\Pi$ is osmotic pressure, $i$ is the dimensionless van 't Hoff index, $c$ is the molar concentration of solute, $R$ is the ideal gas constant, and $T$ is the absolute temperature (usually in kelvins). This formula applies when the solute concentration is sufficiently low that the solution can be treated as an ideal solution. The proportionality to concentration means that osmotic pressure is a colligative property. Note the similarity of this formula to the ideal gas law in the form $p = \frac{n}{V}RT$, where $n$ is the total number of moles of gas molecules in the volume $V$, and $n/V$ is the molar concentration of gas molecules. Harmon Northrop Morse and Frazer showed that the equation applied to more concentrated solutions if the unit of concentration was molal rather than molar; when the molality is used, this equation has been called the Morse equation.
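As a worked example under the ideal-solution assumption, treating seawater as roughly 0.60 mol/L of fully dissociated NaCl (a simplification; real seawater contains other salts) reproduces the right order of magnitude for the ~27 atm figure cited later in this article:

```python
# van 't Hoff estimate: Pi = i * c * R * T
i = 2          # van 't Hoff index for fully dissociated NaCl
c = 0.60       # mol/L, an assumed NaCl-equivalent seawater concentration
R = 0.082057   # L*atm/(mol*K)
T = 298.15     # K (25 degrees Celsius)

print(f"{i * c * R * T:.1f} atm")  # ~29 atm, near the ~27 atm for ocean water
```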
For more concentrated solutions the van 't Hoff equation can be extended as a power series in solute concentration, $c$. To a first approximation,

$$\Pi = \Pi_0 + Ac^2$$

where $\Pi_0$ is the ideal pressure and $A$ is an empirical parameter. The value of the parameter $A$ (and of parameters from higher-order approximations) can be used to calculate Pitzer parameters. Empirical parameters are used to quantify the behavior of solutions of ionic and non-ionic solutes which are not ideal solutions in the thermodynamic sense.
The Pfeffer cell was developed for the measurement of osmotic pressure.
Applications
Osmotic pressure measurement may be used for the determination of molecular weights.
Osmotic pressure is an important factor affecting biological cells. Osmoregulation is the homeostasis mechanism of an organism to reach balance in osmotic pressure.
Hypertonicity is the presence of a solution that causes cells to shrink.
Hypotonicity is the presence of a solution that causes cells to swell.
Isotonicity is the presence of a solution that produces no change in cell volume.
When a biological cell is in a hypotonic environment, the cell interior accumulates water, water flows across the cell membrane into the cell, causing it to expand. In plant cells, the cell wall restricts the expansion, resulting in pressure on the cell wall from within called turgor pressure. Turgor pressure allows herbaceous plants to stand upright. It is also the determining factor for how plants regulate the aperture of their stomata. In animal cells excessive osmotic pressure can result in cytolysis due to the absence of a cell wall.
Osmotic pressure is the basis of filtering ("reverse osmosis"), a process commonly used in water purification. The water to be purified is placed in a chamber and put under an amount of pressure greater than the osmotic pressure exerted by the water and the solutes dissolved in it. Part of the chamber opens to a differentially permeable membrane that lets water molecules through, but not the solute particles. The osmotic pressure of ocean water is approximately 27 atm. Reverse osmosis desalinates fresh water from ocean salt water and is applied globally on a very large scale.
Derivation of the van 't Hoff formula
Consider the system at the point when it has reached equilibrium. The condition for this is that the chemical potential of the solvent (since only it is free to flow toward equilibrium) on both sides of the membrane is equal. The compartment containing the pure solvent has a chemical potential of $\mu_v^0(p)$, where $p$ is the pressure. On the other side, in the compartment containing the solute, the chemical potential of the solvent depends on the mole fraction of the solvent, $0 < x_v < 1$. Besides, this compartment can assume a different pressure, $p'$. We can therefore write the chemical potential of the solvent as $\mu_v(x_v, p')$. If we write $p' = p + \Pi$, the balance of the chemical potential is therefore:

$$\mu_v(x_v, p + \Pi) = \mu_v^0(p)$$
Here, the difference in pressure of the two compartments is defined as the osmotic pressure exerted by the solutes. Holding the pressure, the addition of solute decreases the chemical potential (an entropic effect). Thus, the pressure of the solution has to be increased in an effort to compensate the loss of the chemical potential.
In order to find $\Pi$, the osmotic pressure, we consider equilibrium between a solution containing solute and pure water:

$$\mu_v(x_v, p + \Pi) = \mu_v^0(p).$$

We can write the left-hand side as:

$$\mu_v(x_v, p + \Pi) = \mu_v^0(p + \Pi) + RT\ln(\gamma_v x_v),$$

where $\gamma_v$ is the activity coefficient of the solvent. The product $\gamma_v x_v$ is also known as the activity of the solvent, which for water is the water activity $a_w$. The addition to the pressure is expressed through the expression for the energy of expansion:

$$\mu_v^0(p + \Pi) = \mu_v^0(p) + \int_p^{p + \Pi} V_m(p')\,dp',$$

where $V_m$ is the molar volume (m³/mol). Inserting the expressions presented above into the chemical potential equation for the entire system and rearranging gives:

$$-RT\ln(\gamma_v x_v) = \int_p^{p + \Pi} V_m(p')\,dp'.$$

If the liquid is incompressible the molar volume is constant, $V_m(p') \equiv V_m$, and the integral becomes $\Pi V_m$. Thus, we get

$$\Pi = -\frac{RT}{V_m}\ln(\gamma_v x_v).$$

The activity coefficient is a function of concentration and temperature, but in the case of dilute mixtures it is often very close to 1.0, so

$$\Pi = -\frac{RT}{V_m}\ln(x_v).$$

The mole fraction of solute, $x_s$, is $1 - x_v$, so $\ln(x_v)$ can be replaced with $\ln(1 - x_s)$, which, when $x_s$ is small, can be approximated by $-x_s$:

$$\Pi = \frac{RT}{V_m}\,x_s.$$

The mole fraction $x_s$ is $n_s/(n_s + n_v)$; when $x_s$ is small, it may be approximated by $x_s \approx n_s/n_v$. Also, the molar volume $V_m$ may be written as volume per mole, $V_m = V/n_v$. Combining these gives the following:

$$\Pi = \frac{n_s}{V}RT = cRT.$$
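A quick numerical check of the dilute-limit approximation $-\ln(1 - x_s) \approx x_s$ used in the last steps:

```python
import math

for x_s in (0.001, 0.01, 0.05, 0.1):
    exact = -math.log(1.0 - x_s)
    error = (exact - x_s) / exact
    print(f"x_s = {x_s:5.3f}: -ln(1-x_s) = {exact:.5f}, rel. error {error:.1%}")
# The linearisation holds to within ~5% for x_s up to about 0.1.
```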
For aqueous solutions of salts, ionisation must be taken into account. For example, 1 mole of NaCl ionises to 2 moles of ions.
| Physical sciences | Thermodynamics | Chemistry |
31392 | https://en.wikipedia.org/wiki/Tasmanian%20devil | Tasmanian devil | The Tasmanian devil (Sarcophilus harrisii) (palawa kani: purinina) is a carnivorous marsupial of the family Dasyuridae. It was formerly present across mainland Australia, but became extinct there around 3,500 years ago; it is now confined to the island of Tasmania. The size of a small dog, the Tasmanian devil became the largest carnivorous marsupial in the world following the extinction of the thylacine in 1936. It is related to quolls, and distantly related to the thylacine. It is characterised by its stocky and muscular build, black fur, pungent odour, extremely loud and disturbing screech, keen sense of smell, and ferocity when feeding. The Tasmanian devil's large head and neck allow it to generate among the strongest bites per unit body mass of any extant predatory land mammal. It hunts prey and scavenges on carrion.
Although devils are usually solitary, they sometimes eat and defecate together in a communal location. Unlike most other dasyurids, the devil thermoregulates effectively, and is active during the middle of the day without overheating. Despite its rotund appearance, it is capable of surprising speed and endurance, and can climb trees and swim across rivers. Devils are not monogamous. Males fight one another for females, and guard their partners to prevent female infidelity. Females can ovulate three times in as many weeks during the mating season, and 80% of two-year-old females are seen to be pregnant during the annual mating season.
Females average four breeding seasons in their life, and give birth to 20 to 30 live young after three weeks' gestation. The newborn are pink, lack fur, have indistinct facial features, and weigh around at birth. As there are only four nipples in the pouch, competition is fierce, and few newborns survive. The young grow rapidly, and are ejected from the pouch after around 100 days, weighing roughly . The young become independent after around nine months.
In 1941, devils became officially protected. Since the late 1990s, the devil facial tumour disease (DFTD) has drastically reduced the population and now threatens the survival of the species, which in 2008 was declared to be endangered. Starting in 2013, Tasmanian devils are again being sent to zoos around the world as part of the Australian government's Save the Tasmanian Devil Program. The devil is an iconic symbol of Tasmania and many organisations, groups and products associated with the state use the animal in their logos. It is seen as an important attractor of tourists to Tasmania and has come to worldwide attention through the Looney Tunes character of the same name.
Taxonomy
Believing it to be a type of opossum, naturalist George Harris wrote the first published description of the Tasmanian devil in 1807, naming it Didelphis ursina, due to its bearlike characteristics such as its round ears. He had earlier made a presentation on the topic at the Zoological Society of London. However, that particular binomial name had been given to the common wombat (later reclassified as Vombatus ursinus) by George Shaw in 1800, and was hence unavailable. In 1838, a specimen was named Dasyurus laniarius by Richard Owen, but by 1877 he had relegated it to Sarcophilus. The modern Tasmanian devil was named Sarcophilus harrisii ("Harris's flesh-lover") by French naturalist Pierre Boitard in 1841.
A later revision of the devil's taxonomy, published in 1987, attempted to change the species name to Sarcophilus laniarius based on mainland fossil records of only a few animals. However, this was not accepted by the taxonomic community at large; the name S. harrisii has been retained and S. laniarius relegated to a fossil species. "Beelzebub's pup" was an early vernacular name given to it by the explorers of Tasmania, in reference to a religious figure who is a prince of hell and an assistant of Satan; the explorers first encountered the animal by hearing its far-reaching vocalisations at night. Related names that were used in the 19th century were Sarcophilus satanicus ("Satanic flesh-lover") and Diabolus ursinus ("bear devil"), all due to early misconceptions of the species as implacably vicious. The Tasmanian devil (Sarcophilus harrisii) belongs to the family Dasyuridae. The genus Sarcophilus contains two other species, known only from Pleistocene fossils: S. laniarius and S. moomaensis. Phylogenetic analysis shows that the Tasmanian devil is most closely related to quolls.
According to Pemberton, the possible ancestors of the devil may have needed to climb trees to acquire food, leading to a growth in size and the hopping gait of many marsupials. He speculated that these adaptations may have caused the contemporary devil's peculiar gait.

The specific lineage of the Tasmanian devil is theorised to have emerged during the Miocene, molecular evidence suggesting a split from the ancestors of quolls between 10 and 15 million years ago, when severe climate change came to bear in Australia, transforming the climate from warm and moist to an arid, dry ice age, and resulting in mass extinctions. As most of their prey died of the cold, only a few carnivores survived, including the ancestors of the quoll and thylacine. It is speculated that the devil lineage may have arisen at this time to fill a niche in the ecosystem, as a scavenger that disposed of carrion left behind by the selective-eating thylacine. The extinct Glaucodon ballaratensis of the Pliocene age has been dubbed an intermediate species between the quoll and devil.

Fossil deposits in limestone caves at Naracoorte, South Australia, dating to the Miocene include specimens of S. laniarius, which were around 15% larger and 50% heavier than modern devils. Older specimens believed to be 50–70,000 years old were found in Darling Downs in Queensland and in Western Australia. It is not clear whether the modern devil evolved from S. laniarius, or whether they coexisted at the time. Richard Owen argued for the latter hypothesis in the 19th century, based on fossils found in 1877 in New South Wales. Large bones attributed to S. moornaensis have been found in New South Wales, and it has been conjectured that these two extinct larger species may have both hunted and scavenged.

It is known that there were several genera of thylacine millions of years ago, and that they ranged in size, the smaller being more reliant on foraging. As the devil and thylacine are similar, the extinction of the co-existing thylacine genera has been cited as evidence for an analogous history for the devils. It has been speculated that the smaller size of S. laniarius and S. moornaensis allowed them to adapt to the changing conditions more effectively and survive longer than the corresponding thylacines. As the extinction of these two species came at a similar time to human habitation of Australia, hunting by humans and land clearance have been mooted as possible causes. Critics of this theory point out that as indigenous Australians only developed boomerangs and spears for hunting around 10,000 years ago, a critical fall in numbers due to systematic hunting is unlikely. They also point out that caves inhabited by Aborigines have a low proportion of bones and rock paintings of devils, and suggest that this is an indication that it was not a large part of indigenous lifestyle. A scientific report in 1910 claimed that Aborigines preferred the meat of herbivores rather than carnivores. The other main theory for the extinction was that it was due to the climate change brought on by the most recent ice age.
Genetics
The Tasmanian devil's genome was sequenced in 2010 by the Wellcome Trust Sanger Institute. Like all dasyurids, the devil has 14 chromosomes. Devils have a low genetic diversity compared to other Australian marsupials and placental carnivores; this is consistent with a founder effect as allelic size ranges were low and nearly continuous throughout all subpopulations measured. Allelic diversity was measured at 2.7–3.3 in the subpopulations sampled, and heterozygosity was in the range 0.386–0.467. According to a study by Menna Jones, "gene flow appears extensive up to ", meaning a high assignment rate to source or close neighbour populations "in agreement with movement data. At larger scales (), gene flow is reduced but there is no evidence for isolation by distance". Island effects may also have contributed to their low genetic diversity. Periods of low population density may also have created moderate population bottlenecks, reducing genetic diversity. Low genetic diversity is thought to have been a feature in the Tasmanian devil population since the mid-Holocene. Outbreaks of devil facial tumour disease (DFTD) cause an increase in inbreeding. A sub-population of devils in the north-west of the state is genetically distinct from other devils, but there is some exchange between the two groups.
One strand conformation polymorphism analysis (OSCP) of the major histocompatibility complex (MHC) class I domain, sampled from various locations across Tasmania, showed 25 different types and a different pattern of MHC types in north-western Tasmania compared with eastern Tasmania. Those devils in the east of the state have less MHC diversity; 30% are of the same type as the tumour (type 1), and 24% are of type A. Seven of every ten devils in the east are of type A, D, G or 1, which are linked to DFTD; whereas only 55% of the western devils fall into these MHC categories. Of the 25 MHC types, 40% are exclusive to the western devils. Although the north-west population is less genetically diverse overall, it has higher MHC gene diversity, which allows them to mount an immune response to DFTD. According to this research, mixing the devils may increase the chance of disease. Of the fifteen different regions in Tasmania surveyed in this research, six were in the eastern half of the island. In the eastern half, Epping Forest had only two different types, 75% being type O. In the Buckland-Nugent area, only three types were present, and there were an average of 5.33 different types per location. In contrast, in the west, Cape Sorell yielded three types, and Togari North-Christmas Hills yielded six, but the other seven sites all had at least eight MHC types, and West Pencil Pine had 15 types. There was an average of 10.11 MHC types per site in the west. Recent research has suggested that the wild population of devils are rapidly evolving a resistance to DFTD.
Description
The Tasmanian devil is the largest surviving carnivorous marsupial. It has a squat, thick build, with a large head and a tail which is about half its body length. Unusually for a marsupial, its forelegs are slightly longer than its hind legs, and devils can run up to for short distances. The fur is usually black, often with irregular white patches on the chest and rump (although approximately 16% of wild devils do not have white patches). These markings suggest that the devil is most active at dawn and dusk, and they are thought to draw biting attacks toward less important areas of the body, as fighting between devils often leads to a concentration of scars in that region. Males are usually larger than females, having an average head and body length of , a tail and an average weight of . Females have an average head and body length of , a tail and an average weight of , although devils in western Tasmania tend to be smaller. Devils have five long toes on their forefeet, four pointing to the front and one coming out from the side, which gives the devil the ability to hold food. The hind feet have four toes, and the devils have non-retractable claws. The stocky devils have a relatively low centre of mass.
Devils are fully grown at two years of age, and few devils live longer than five years in the wild. Possibly the longest-lived Tasmanian devil recorded was Coolah, a male devil which lived in captivity for more than seven years. Born in January 1997 at the Cincinnati Zoo, Coolah died in May 2004 at the Fort Wayne Children's Zoo. The devil stores body fat in its tail, and healthy devils have fat tails. The tail is largely non-prehensile and is important to its physiology, social behaviour and locomotion. It acts as a counterbalance to aid stability when the devil is moving quickly. An ano-genital scent gland at the base of its tail is used to mark the ground behind the animal with its strong, pungent scent. The male has external testes in a pouch-like structure formed by lateral ventrocrural folds of the abdomen, which partially hides and protects them. The testes are subovoid in shape and the mean dimensions of 30 testes of adult males was . The female's pouch opens backwards, and is present throughout its life, unlike some other dasyurids.
The Tasmanian devil has the most powerful bite relative to body size of any living mammalian carnivore, with a Bite Force Quotient of 181 and exerting a canine bite force of . The jaw can open to 75–80 degrees, allowing the devil to generate the large amount of power needed to tear meat and crush bones—sufficient force to allow it to bite through thick metal wire. The power of the jaws is due in part to its comparatively large head. The teeth and jaws of Tasmanian devils resemble those of hyenas, an example of convergent evolution. Dasyurid teeth resemble those of primitive marsupials. Like all dasyurids, the devil has prominent canines and cheek teeth. It has three pairs of lower incisors and four pairs of upper incisors, located at the top of the front of the devil's mouth. Like dogs, it has 42 teeth; however, unlike dogs, its teeth are not replaced after birth but grow continuously throughout life at a slow rate. It has a "highly carnivorous dentition and trophic adaptations for bone consumption". The devil has long claws that allow it to dig burrows and seek subterranean food easily and grip prey or mates strongly. The teeth and claw strength allow the devil to attack wombats up to in weight. The large neck and forebody that give the devil its strength also bias this strength towards the front half of the body; the lopsided, awkward, shuffling gait of the devil is attributed to this.
The devil has long whiskers on its face and in clumps on the top of the head. These help the devil locate prey when foraging in the dark, and aid in detecting when other devils are close during feeding. The whiskers can extend from the tip of the chin to the rear of the jaw and can cover the span of its shoulder. Hearing is its dominant sense, and it also has an excellent sense of smell, which has a range of . The devil, unlike other marsupials, has a "well-defined, saddle-shaped ectotympanic". Since devils hunt at night, their vision seems to be strongest in black and white. In these conditions they can detect moving objects readily, but have difficulty seeing stationary objects.
Distribution and habitat
The Tasmanian devil was formerly present across mainland Australia, but became extinct there 3,500 years ago, coincident with the extinction of the thylacine from the region. A number of causal factors for the extinction have been proposed, including the introduction of the dingo, intensification of human activity, and climatic change.
Devils are found in all habitats on the island of Tasmania, including the outskirts of urban areas, and are distributed throughout the Tasmanian mainland and on Robbins Island (which is connected to mainland Tasmania at low tide). The north-western population is located west of the Forth River and as far south as Macquarie Heads. Previously, they were present on Bruny Island from the 19th century, but there have been no records of them after 1900. They were illegally introduced to Badger Island in the mid-1990s but were removed by the Tasmanian government by 2007. Although the Badger Island population was free from DFTD, the removed individuals were returned to the Tasmanian mainland, some to infected areas. A study has modelled the reintroduction of DFTD-free Tasmanian devils to the Australian mainland in areas where dingoes are sparse. It is proposed that devils would have fewer impacts on both livestock and native fauna than dingoes, and that the mainland population could act as an additional insurance population. In September 2015, 20 immunised captive-bred devils were released into Narawntapu National Park, Tasmania. Two later died from being hit by cars.
The "core habitat" of the devils is considered to be within the "low to moderate annual rainfall zone of eastern and north-western Tasmania". Tasmanian devils particularly like dry sclerophyll forests and coastal woodlands. Although they are not found at the highest altitudes of Tasmania, and their population density is low in the button grass plains in the south-west of the state, their population is high in dry or mixed sclerophyll forests and coastal heaths. Devils prefer open forest to tall forest, and dry rather than wet forests. They are also found near roads where roadkill is prevalent, although the devils themselves are often killed by vehicles while retrieving the carrion. According to the Threatened Species Scientific Committee, their versatility means that habitat modification from destruction is not seen as a major threat to the species.
The devil is directly linked to Dasyurotaenia robusta, a tapeworm classified as Rare under the Tasmanian Threatened Species Protection Act 1995. This tapeworm is found only in devils.
In late 2020, Tasmanian devils were reintroduced to mainland Australia in a sanctuary run by Aussie Ark in the Barrington Tops area of New South Wales. This was the first time devils had lived on the Australian mainland in over 3,000 years. 26 adult devils were released into the protected area, and by late April 2021, seven joeys had been born, with up to 20 expected by the end of the year.
Ecology and behaviour
The Tasmanian devil is a keystone species in the ecosystem of Tasmania. It is a nocturnal and crepuscular hunter, spending the days in dense bush or in a hole. It has been speculated that nocturnalism may have been adopted to avoid predation by eagles and humans. Young devils are predominantly crepuscular. There is no evidence of torpor.
Young devils can climb trees, but this becomes more difficult as they grow larger. Devils can scale trees of trunk diameter larger than , which tend to have no small side branches to hang onto, up to a height of around . Devils that are yet to reach maturity can climb shrubs to a height of , and can climb a tree to if it is not vertical. Adult devils may eat young devils if they are very hungry, so this climbing behaviour may be an adaptation to allow young devils to escape. Devils can also swim and have been observed crossing rivers that are in width, including icy cold waterways, apparently enthusiastically.
Tasmanian devils do not form packs, but rather spend most of their time alone once weaned. Classically considered solitary animals, their social interactions were poorly understood until a field study published in 2009 shed some light on this. Tasmanian devils in Narawntapu National Park were fitted with proximity-sensing radio collars which recorded their interactions with other devils over several months from February to June 2006. This revealed that all devils were part of a single huge contact network, characterised by male–female interactions during the mating season, while female–female interactions were the most common at other times, although the frequency and patterns of contact did not vary markedly between seasons. Males, previously thought to fight over food, only rarely interacted with other males. Hence, all devils in a region are part of a single social network. They are considered to be non-territorial in general, but females are territorial around their dens. This allows a higher total mass of devils to occupy a given area than territorial animals could, without conflict. Tasmanian devils instead occupy a home range. Over periods of two to four weeks, devils' home ranges are estimated to vary between , with an average of . The location and geometry of these areas depend on the distribution of food, particularly of wallabies and pademelons nearby.
Devils use three or four dens regularly. Dens formerly owned by wombats are especially prized as maternity dens because of their security. Dense vegetation near creeks, thick grass tussocks, and caves are also used as dens. Adult devils use the same dens for life. It is believed that, as a secure den is highly prized, some may have been used for several centuries by generations of animals. Studies have suggested that food security is less important than den security, as habitat destruction that affects the latter has had more effect on mortality rates. Young pups remain in one den with their mother, while other devils are mobile, changing dens every 1–3 days and travelling a mean distance of every night, although there are also reports that the figure can reach per night. They choose to travel through lowlands, saddles and along the banks of creeks, particularly preferring carved-out tracks and livestock paths while eschewing steep slopes and rocky terrain. The amount of movement is believed to be similar throughout the year, except for mothers who have given birth recently. The similarity in travel distances between males and females is unusual for sexually dimorphic, solitary carnivores. As a male needs more food, he will spend more time eating than travelling. Devils typically make circuits of their home range during their hunts. In areas near human habitation, they are known to steal clothes, blankets and pillows and take them for use in dens in wooden buildings.
While the dasyurids have similar diet and anatomy, differing body sizes affect thermoregulation and thus behaviour. In ambient temperatures between , the devil was able to maintain a body temperature between . When the temperature was raised to , and the humidity to 50%, the devil's body temperature spiked upwards by within 60 minutes, but then steadily decreased back to the starting temperature after a further two hours, and remained there for two more hours. During this time, the devil drank water and showed no visible signs of discomfort, leading scientists to believe that sweating and evaporative cooling were its primary means of heat dissipation. A later study found that devils pant but do not sweat to release heat. In contrast, many other marsupials were unable to keep their body temperatures down. As the smaller animals have to live in hotter and more arid conditions to which they are less well adapted, they take up a nocturnal lifestyle and drop their body temperatures during the day, whereas the devil is active in the day and its body temperature varies by from its minimum at night to its maximum in the middle of the day.
The standard metabolic rate of a Tasmanian devil is 141 kJ/kg (15.3 kcal/lb) per day, many times lower than that of smaller marsupials. A devil uses per day. The field metabolic rate is 407 kJ/kg (44.1 kcal/lb). Along with quolls, Tasmanian devils have a metabolic rate comparable to that of non-carnivorous marsupials of a similar size. This differs from placental carnivores, which have comparatively high basal metabolic rates. A study of devils showed a loss of weight from summer to winter, while over the same period daily energy consumption increased from . This is equivalent to an increase in food consumption from . The diet is protein-based with 70% water content. For every of insects consumed, of energy are produced, while a corresponding amount of wallaby meat generates . In terms of its body mass, the devil eats only a quarter of the eastern quoll's intake, allowing it to survive longer during food shortages.
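As a rough illustration of how these per-kilogram rates scale to a whole animal, the sketch below applies them to an assumed 8 kg adult; the body mass is an assumption supplied for illustration, not a figure from this article.

```python
# A minimal sketch of the arithmetic above, scaling the quoted
# per-kilogram rates to a whole animal. The 8 kg body mass is an
# assumed typical adult figure, not a value from this article.

SMR_KJ_PER_KG_DAY = 141.0  # standard metabolic rate (quoted above)
FMR_KJ_PER_KG_DAY = 407.0  # field metabolic rate (quoted above)

def daily_energy_kj(mass_kg: float, rate_kj_per_kg_day: float) -> float:
    """Total daily energy turnover for an animal of the given mass."""
    return mass_kg * rate_kj_per_kg_day

mass = 8.0  # assumed adult mass in kg
print(f"standard: {daily_energy_kj(mass, SMR_KJ_PER_KG_DAY):.0f} kJ/day")
print(f"field:    {daily_energy_kj(mass, FMR_KJ_PER_KG_DAY):.0f} kJ/day")
```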
Feeding
Tasmanian devils can take prey up to the size of a small kangaroo, but in practice they are opportunistic and eat carrion more often than they hunt live prey. Although the devil favours wombats because of the ease of predation and high fat content, it will eat all small native mammals such as wallabies, bettongs and potoroos, domestic mammals (including sheep and rabbits), birds (including penguins), fish, fruit, vegetable matter, insects, tadpoles, frogs and reptiles. Their diet is widely varied and depends on the food available. Before the extinction of the thylacine, the Tasmanian devil ate thylacine joeys left alone in dens when their parents were away. This may have helped to hasten the extinction of the thylacine, which also ate devils. Devils are known to hunt water rats by the sea and forage on dead fish that have been washed ashore. Near human habitation, they can also steal shoes and chew on them, and eat the legs of otherwise robust sheep that have slipped in wooden shearing sheds, leaving their legs dangling below. Other unusual matter observed in devil scats includes collars and tags of devoured animals, intact echidna spines, pencils, plastic and jeans. Devils can bite through metal traps, and tend to reserve their strong jaws for escaping captivity rather than for breaking into food storage. Due to their relative lack of speed, they cannot run down a wallaby or a rabbit, but they can attack animals that have become slow due to illness. They survey flocks of sheep by sniffing them from away and attack if the prey is ill; the sheep respond by stamping their feet in a show of strength.
Despite their lack of extreme speed, there have been reports that devils can run at for , and it has been conjectured that, before European immigration and the introduction of livestock, vehicles and roadkill, they would have had to chase other native animals at a reasonable pace to find food. Pemberton has reported that they can average for "extended periods" on several nights per week, and that they run for long distances before sitting still for up to half an hour, something that has been interpreted as evidence of ambush predation.
Devils can dig to forage for corpses; in one case a devil dug down to eat the corpse of a buried horse that had died of illness. They are known to eat animal cadavers by first ripping out the digestive system, which is the softest part of the anatomy, and they often reside in the resulting cavity while they are eating.
On average, devils eat about 15% of their body weight each day, although they can eat up to 40% of their body weight in 30 minutes if the opportunity arises. This means they can become very heavy and lethargic after a large meal; in this state they tend to waddle away slowly and lie down, becoming easy to approach. This has led to a belief that such eating habits became possible due to the lack of a predator to attack such bloated individuals.
Tasmanian devils can eliminate all traces of a carcass of a smaller animal, devouring the bones and fur if desired. In this respect, devils have earned the gratitude of Tasmanian farmers, as the speed at which they clean a carcass helps prevent the spread of insects that might otherwise harm livestock. Some carcasses are disposed of when the devils haul the excess food back to their dens to continue eating at a later time.
The diet of a devil can vary substantially by sex and season, according to studies at Cradle Mountain. In winter, males prefer medium mammals over larger ones, with a ratio of 4:5, but in summer, they prefer larger prey in a 7:2 ratio. These two categories accounted for more than 95% of the diet. Females are less inclined to target large prey, but show the same seasonal bias. In winter, large and medium mammals account for 25% and 58% of the female diet respectively, with 7% small mammals and 10% birds. In summer, the first two categories account for 61% and 37% respectively.
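Read as shares of those two categories alone (which, per the text, make up more than 95% of the male diet), the quoted ratios work out as in this small sketch of the arithmetic:

```python
# Converting the medium:large prey ratios quoted above into percentage
# shares of those two categories alone.

def ratio_to_percentages(a: float, b: float) -> tuple[float, float]:
    """Normalise a ratio a:b into percentages of the combined total."""
    total = a + b
    return 100 * a / total, 100 * b / total

winter_medium, winter_large = ratio_to_percentages(4, 5)
summer_medium, summer_large = ratio_to_percentages(2, 7)
print(f"winter: {winter_medium:.0f}% medium, {winter_large:.0f}% large")  # 44% / 56%
print(f"summer: {summer_medium:.0f}% medium, {summer_large:.0f}% large")  # 22% / 78%
```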
Juvenile devils are sometimes known to climb trees; in addition to small vertebrates and invertebrates, juveniles climb trees to eat grubs and birds' eggs. Juveniles have also been observed climbing into nests and capturing birds. Throughout the year, adult devils derive 16.2% of their biomass intake from arboreal species, almost all of which is possum meat, just 1.0% being large birds. From February to July, subadult devils derive 35.8% of their biomass intake from arboreal life, 12.2% being small birds and 23.2% being possums. Female devils in winter source 40.0% of their intake from arboreal species, including 26.7% from possums and 8.9% from various birds. Not all of these animals were caught while they were in trees, but this high figure for females, which is higher than for male spotted-tailed quolls during the same season, is unusual, as the devil has inferior tree climbing skills.
Although they hunt alone, there have been unsubstantiated claims of communal hunting, where one devil drives prey out of its habitat and an accomplice attacks. Eating is a social event for the Tasmanian devil, and this combination of a solitary animal that eats communally makes the devil unique among carnivores. Much of the noise attributed to the animal is a result of raucous communal eating, at which up to 12 individuals can gather, although groups of two to five are common; it can often be heard several kilometres away. This has been interpreted as a notification to other devils to share in the meal, so that food is not wasted by rot and energy is saved. The amount of noise is correlated with the size of the carcass. The devils eat in accordance with a dominance system. Juveniles are active at dusk, so they tend to reach the food source before the adults. Typically, the dominant animal eats until it is satiated and leaves, fighting off any challengers in the meantime. Defeated animals run into the bush with their hair and tail erect, their conqueror in pursuit and biting their victim's rear where possible. Disputes become less common as the size of the food source increases, as the motive appears to be getting sufficient food rather than oppressing other devils. When quolls are eating a carcass, devils will tend to chase them away. This is a substantial problem for spotted-tailed quolls, as they kill relatively large possums and cannot finish their meal before devils arrive. In contrast, the smaller eastern quolls prey on much smaller victims, and can complete feeding before devils turn up. This is seen as a possible reason for the relatively small population of spotted-tailed quolls.
A study of feeding devils identified twenty physical postures, including their characteristic vicious yawn, and eleven different vocal sounds, including clicks, shrieks and various types of growls, that devils use to communicate as they feed. They usually establish dominance by sound and physical posturing, although fighting does occur. The white patches on the devil are visible to the night vision of other devils. Chemical cues are also used. Adult males are the most aggressive, and scarring is common. They can also stand on their hind legs and push each other's shoulders with their front legs and heads, similar to sumo wrestling. Torn flesh around the mouth and teeth, as well as punctures in the rump, can sometimes be observed, although these can also be inflicted during breeding fights.
Digestion is very fast in dasyurids, and for the Tasmanian devil the few hours taken for food to pass through the small gut is a long period in comparison with some other dasyurids. Devils are known to return to the same places to defecate, and to do so at a communal location, called a devil latrine. It is believed that this communal defecation may be a means of communication that is not well understood. Devil scats are very large compared to body size; they are on average long, but there have been samples in length. They are characteristically grey in colour due to digested bone, or have bone fragments included.
Owen and Pemberton believe that the relationship between Tasmanian devils and thylacines was "close and complex", as they competed directly for prey and probably also for shelter. The thylacines preyed on the devils, the devils scavenged from the thylacines' kills, and the devils ate thylacine young. Menna Jones hypothesises that the two species shared the role of apex predator in Tasmania. Wedge-tailed eagles have a similar carrion-based diet to the devils and are regarded as competitors. Quolls and devils are also seen as being in direct competition in Tasmania. Jones believed that the quoll has evolved into its current state in just 100–200 generations of around two years, as determined by the equal spacing in body size between the devil, the largest species, the spotted-tailed quoll, and the smallest species, the eastern quoll. Both the Tasmanian devil and the quolls appear to have evolved up to 50 times faster than the average evolutionary rate amongst mammals.
Reproduction
Females start to breed when they reach sexual maturity, typically in their second year. At this point, they become fertile once a year, producing multiple ova while in heat. As prey is most abundant in spring and early summer, the devil's reproductive cycle starts in March or April so that the end of the weaning period coincides with the maximisation of food supplies in the wild for the newly roaming young devils.
Mating occurs in March, taking place in sheltered locations during both day and night. Males fight over females in the breeding season, and female devils will mate with the dominant male. Females can ovulate up to three times in a 21-day period, and copulation can take five days; one instance of a couple being in the mating den for eight days has been recorded. Devils are not monogamous, and females will mate with several males if not guarded after mating; males also reproduce with several females during a season. Females have been shown to be selective in an attempt to ensure the best genetic offspring, for example, fighting off the advances of smaller males. Males often keep their mates in custody in the den, or take them along if they need to drink, lest they engage in infidelity.
Males can produce up to 16 offspring over their lifetime, while females average four mating seasons and 12 offspring. Theoretically this means that a devil population can double each year, insulating the species against high mortality. The pregnancy rate is high; 80% of two-year-old females were observed with newborns in their pouches during the mating season. More recent studies of breeding place the mating season between February and June, as opposed to between February and March.
Gestation lasts 21 days, and the mother gives birth standing up to 20–30 young, each weighing approximately . Embryonic diapause does not occur. At birth, the front limb has well-developed digits with claws; unlike those of many marsupials, the claws of baby devils are not deciduous. As with most other marsupials, the forelimb is longer () than the rear limb (), the eyes are spots, and the body is pink. There are no external ears or openings. Unusually, the sex can be determined at birth, with an external scrotum present.
Tasmanian devil young are variously called "pups", "joeys", or "imps". When the young are born, competition is fierce as they move from the vagina in a sticky flow of mucus to the pouch. Once inside the pouch, they each remain attached to a nipple for the next 100 days. The female Tasmanian devil's pouch, like that of the wombat, opens to the rear, so it is physically difficult for the female to interact with young inside the pouch. Despite the large litter at birth, the female has only four nipples, so there are never more than four babies nursing in the pouch, and the older a female devil gets, the smaller her litters become. Once a young devil has made contact with a nipple, the nipple expands, clamping firmly in the newborn's mouth and ensuring that the newborn does not fall out of the pouch. On average, more females survive than males, and up to 60% of young do not survive to maturity. Milk replacements are often used for devils that have been bred in captivity, for orphaned devils, and for young born to diseased mothers. Little is known about the composition of the devil's milk compared to that of other marsupials.
Inside the pouch, the nourished young develop quickly. In the second week, the rhinarium becomes distinctive and heavily pigmented. At 15 days, the external parts of the ear are visible, although these are attached to the head and do not open out until the devil is around 10 weeks old. The ear begins blackening after around 40 days, when it is less than long, and by the time the ear becomes erect, it is between . Eyelids are apparent at 16 days, whiskers at 17 days, and the lips at 20 days. The devils can make squeaking noises after eight weeks, and after around 10–11 weeks, the lips can open. Despite the formation of eyelids, they do not open for three months, although eyelashes form at around 50 days. The young—up to this point they are pink—start to grow fur at 49 days and have a full coat by 90 days. The fur growing process starts at the snout and proceeds back through the body, although the tail attains fur before the rump, which is the last part of the body to become covered. Just before the start of the furring process, the colour of the bare devil's skin will darken and become black or dark grey in the tail.
The devils have a complete set of facial vibrissae and ulnar carpal vibrissae, although they lack anconeal vibrissae. During the third week, the mystacial and ulnar carpal vibrissae are the first to form. Subsequently, the infraorbital, interramal, supraorbital and submental vibrissae form; the last four typically appear between the 26th and 39th days. Their eyes open shortly after their fur coat develops (between 87 and 93 days), and their mouths can relax their hold of the nipple at 100 days. They leave the pouch 105 days after birth, appearing as small copies of the parent and weighing around . Zoologist Eric Guiler recorded their size at this time as follows: a crown-snout length of , a tail length of , a pes length of , a manus of , a shank of , a forearm of , and a crown-rump length of . During this period, the devils lengthen at a roughly linear rate.
After being ejected, the devils stay outside the pouch, but remain in the den for around another three months, first venturing outside the den between October and December before becoming independent in January. During this transitional phase out of the pouch, the young devils are relatively safe from predation as they are generally accompanied. When the mother is hunting, they can stay inside a shelter or come along, often riding on her back. During this time they continue to drink their mother's milk. Female devils are occupied with raising their young for all but approximately six weeks of the year. The milk contains a higher amount of iron than the milk of placental mammals. In Guiler's 1970 study, no females died while rearing their offspring in the pouch. After leaving the pouch, the devils grow by around a month until they are six months old. While most pups survive to be weaned, Guiler reported that up to three-fifths of devils do not reach maturity. As juveniles are more crepuscular than adults, their appearance in the open during summer gives humans the impression of a population boom. A study into the success of translocated devils that were orphaned and raised in captivity found that young devils who had consistently engaged with new experiences while in captivity survived better than young who had not.
Conservation status
The cause of the devil's disappearance from the mainland is unclear, but their decline seems to coincide with an abrupt change in climate and the expansion across the mainland of indigenous Australians and dingoes. However, whether it was direct hunting by people, competition with dingoes, changes brought about by the increasing human population, who by 3000 years ago were using all habitat types across the continent, or a combination of all three, is unknown; devils had coexisted with dingoes on the mainland for around 3000 years. Brown has also proposed that the El Niño–Southern Oscillation (ENSO) grew stronger during the Holocene, and that the devil, as a scavenger with a short life span, was highly sensitive to this. In dingo-free Tasmania, carnivorous marsupials were still active when Europeans arrived. The extermination of the thylacine after the arrival of the Europeans is well known, but the Tasmanian devil was threatened as well.
Habitat disruption can expose dens where mothers raise their young. This increases mortality, as the mother leaves the disturbed den with her pups clinging to her back, making them more vulnerable. Cancer in general is a common cause of death in devils. In 2008, high levels of potentially carcinogenic flame retardant chemicals were found in Tasmanian devils. Preliminary results of tests ordered by the Tasmanian government on chemicals found in fat tissue from 16 devils have revealed high levels of hexabromobiphenyl (BB153) and "reasonably high" levels of decabromodiphenyl ether (BDE209). The Save the Tasmanian Devil Appeal is the official fundraising entity for the Save the Tasmanian Devil Program. The priority is to ensure the survival of the Tasmanian devil in the wild.
Population declines
At least two major population declines, possibly due to disease epidemics, have occurred in recorded history: in 1909 and 1950. The devil was also reported as scarce in the 1850s. It is difficult to estimate the size of the devil population. In the mid-1990s, the population was estimated at 130,000–150,000 animals, but this is likely to have been an overestimate. The Tasmanian devil's population was calculated in 2008 by Tasmania's Department of Primary Industries and Water as being in the range of 10,000 to 100,000 individuals, with 20,000 to 50,000 mature individuals being likely. Experts estimate that the devil has suffered a decline of more than 80% in its population since the mid-1990s and that only around 10,000–15,000 remained in the wild as of 2008.
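A quick check shows these figures cohere only if the mid-1990s estimate was indeed too high, as the text notes; the sketch below applies the quoted decline to the quoted starting range:

```python
# Consistency check of the figures quoted above: applying an 80% decline
# to the mid-1990s estimate and comparing with the 2008 range.

mid_1990s_estimate = (130_000, 150_000)  # described above as likely too high
decline_fraction = 0.80                  # "more than 80%" per the text

survivors = tuple(round(n * (1 - decline_fraction)) for n in mid_1990s_estimate)
print(survivors)  # (26000, 30000)
# The result sits above the quoted 2008 range of 10,000-15,000, which is
# consistent with the mid-1990s figure being an overestimate and the
# actual decline exceeding 80%.
```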
The species was listed as vulnerable under the Tasmanian Threatened Species Protection Act 1995 in 2005 and the Australian Environment Protection and Biodiversity Conservation Act 1999 in 2006, which means that it is at risk of extinction in the "medium term". The IUCN classified the Tasmanian devil in the lower risk/least concern category in 1996, but in 2009 they reclassified it as endangered. Appropriate wildlife refuges such as Savage River National Park in North West Tasmania provide hope for their survival.
Culling
The first European settlers in Tasmania ate Tasmanian devil, which they described as tasting like veal. As it was believed devils would hunt and kill livestock, possibly due to strong imagery of packs of devils eating weak sheep, a bounty scheme to remove the devil from rural properties was introduced as early as 1830. However, Guiler's research contended that the real cause of livestock losses was poor land management policies and feral dogs. In areas where the devil is now absent, poultry has continued to be killed by quolls. In earlier times, hunting possums and wallabies for fur was a big business (more than 900,000 animals were hunted in 1923), and this resulted in a continuation of bounty hunting of devils, as they were thought to be a major threat to the fur industry, even though quolls were more adept at hunting the animals in question. Over the next 100 years, trapping and poisoning brought them to the brink of extinction.
After the death of the last thylacine in 1936, the Tasmanian devil was protected by law in June 1941 and the population slowly recovered. In the 1950s, with reports of increasing numbers, some permits to capture devils were granted after complaints of livestock damage. In 1966, poisoning permits were issued, although attempts to have the animal's protected status removed failed. During this time environmentalists also became more outspoken, particularly as scientific studies provided new data suggesting that the threat of devils to livestock had been vastly exaggerated. Numbers may have peaked in the early 1970s after a population boom; in 1975 they were reported to be lower, possibly due to overpopulation and a consequent lack of food. Another case of overpopulation and livestock damage was reported in 1987. The following year, Trichinella spiralis, a parasite which kills animals and can infect humans, was found in devils, and minor panic broke out before scientists assured the public that 30% of devils had it but that they could not transmit it to other species. Control permits were ended in the 1990s, but illegal killing continues to a limited extent, albeit "locally intense"; this is not considered a substantial problem for the survival of the devil. Approximately 10,000 devils were killed per year in the mid-1990s. A selective culling program to remove individuals affected by DFTD has taken place, and has been shown neither to slow the rate of disease progression nor to reduce the number of animals dying. A model has been used to test whether culling devils infected with DFTD would assist in the survival of the species, and it found that culling would not be a suitable strategy to employ.
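The published model is not reproduced in the sources summarised here; purely as an illustration of the kind of calculation involved, a toy susceptible/infected model with a culling term might look like the following, with all rates being made-up placeholders. Epidemiological work on DFTD has suggested that transmission is frequency-dependent, which removes the usual host-density threshold for disease fade-out and is one proposed reason removal programs struggled; the toy model only gestures at that structure.

```python
# Illustrative toy model only: a discrete-time susceptible/infected
# sketch with a culling term. All rates are made-up placeholders and
# this is not the published model referred to above.

def simulate(years: int, beta: float, mortality: float, cull: float,
             s0: float = 900.0, i0: float = 100.0) -> tuple[float, float]:
    s, i = s0, i0
    for _ in range(years):
        total = s + i
        if total <= 0:
            break
        # Frequency-dependent transmission: infection pressure depends on
        # the proportion infected, so it stays high even at low density.
        new_infections = beta * s * i / total
        s = max(s - new_infections, 0.0)
        i = max(i + new_infections - (mortality + cull) * i, 0.0)
    return s, i

print(simulate(10, beta=0.8, mortality=0.9, cull=0.0))  # no culling
print(simulate(10, beta=0.8, mortality=0.9, cull=0.5))  # with culling
```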
Road mortality
Motor vehicles are a threat to localised populations of non-abundant Tasmanian mammals, and a 2010 study showed that devils were particularly vulnerable. A study of nine species, mostly marsupials of a similar size, showed that devils were more difficult for drivers to detect and avoid. On high beam, devils had the lowest detection distance, 40% closer than the median; this requires a 20% reduction in speed for a motorist to avoid the devil. On low beam, devils had the second-shortest detection distance, 16% below the median. For avoidance of roadkill to be feasible, motorists would have to drive at around half the current speed limit in rural areas. A study in the 1990s on a localised population of devils in a national park in Tasmania recorded a halving of the population after a previously gravel access road was upgraded, surfaced with bitumen and widened. At the same time, there was a large increase in deaths caused by vehicles along the new road; there had been none in the preceding six months.
The vast majority of deaths occurred on the sealed portion of the road, believed to be due to an increase in vehicle speeds. It was also conjectured that the animals were harder to see against the dark bitumen than against the light gravel. The devil and quoll are especially vulnerable as they often try to retrieve roadkill for food and travel along the road. To alleviate the problem, traffic-slowing measures, man-made pathways that offer alternative routes for devils, education campaigns, and the installation of light reflectors to indicate oncoming vehicles have been implemented, and they are credited with decreases in roadkill. Devils have often become victims of roadkill while retrieving other roadkill. Work by scientist Menna Jones and a group of conservation volunteers to remove dead animals from the road resulted in a significant reduction in devil traffic deaths. It was estimated that 3,392 devils, or 3.8–5.7% of the population, were being killed annually by vehicles in 2001–2004. In 2009, the Save the Tasmanian Devil group launched the "Roadkill Project", which allowed members of the public to report sightings of devils killed on the road. On 25 September 2015, 20 immunised devils were microchipped and released in Narawntapu National Park. By 5 October, four had been hit by cars, prompting Samantha Fox, leader of Save the Tasmanian Devil, to describe roadkill as the biggest threat to the Tasmanian devil after DFTD. A series of solar-powered alarms that make noises and flash lights when cars approach have been trialled to warn the animals. The trial ran for 18 months, and the trial area had two-thirds fewer deaths than the control.
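The detection figures above can be sanity-checked with a back-of-envelope kinematic argument: if braking distance scales roughly with the square of speed (a textbook assumption supplied here, not the study's stated method), then safe speed scales with the square root of the available detection distance.

```python
import math

# Hedged sketch: assuming braking distance ~ v**2, the safe speed for a
# shortened detection distance scales with sqrt(distance ratio).

def safe_speed_factor(detection_distance_ratio: float) -> float:
    """Fraction of original speed keeping stopping distance within view."""
    return math.sqrt(detection_distance_ratio)

factor = safe_speed_factor(0.6)  # devils detected 40% closer than the median
print(f"{factor:.2f} of original speed, i.e. ~{(1 - factor) * 100:.0f}% slower")
# ~0.77, i.e. roughly a 20-25% speed reduction, in line with the
# 20% figure quoted in the study.
```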
Devil facial tumour disease
First seen in 1996 in Mount William in northeastern Tasmania, devil facial tumour disease (DFTD) has ravaged Tasmania's wild devils, and estimates of the impact range from 20% to as much as an 80% decline in the devil population, with over 65% of the state affected. The state's west coast area and far north-west are the only places where devils are tumour free. Individual devils die within months of infection. The disease is an example of transmissible cancer, which means that it is contagious and passed from one animal to another. This tumour is able to pass between hosts without inducing a response from the host's immune system. Dominant devils who engage in more biting behaviour are more exposed to the disease.
Wild Tasmanian devil populations are being monitored to track the spread of the disease and to identify changes in disease prevalence. Field monitoring involves trapping devils within a defined area to check for the presence of the disease and determine the number of affected animals. The same area is visited repeatedly to characterise the spread of the disease over time. So far, it has been established that the short-term effects of the disease in an area can be severe. Long-term monitoring at replicated sites will be essential to assess whether these effects remain, or whether populations can recover. Field workers are also testing the effectiveness of disease suppression by trapping and removing diseased devils, in the hope that removing diseased devils from wild populations will decrease disease prevalence and allow more devils to survive beyond their juvenile years and breed. In March 2017, scientists at the University of Tasmania presented an apparent first report of having successfully treated Tasmanian devils with the disease: live cancer cells treated with IFN-γ to restore MHC-I expression were injected into the infected devils to stimulate their immune systems to recognise and fight the disease. In 2020 it was reported that one of the last DFTD-free wild populations of Tasmanian devils was suffering from inbreeding depression and had undergone a significant decline in reproductive success in recent years.
Relationship with humans
At Lake Nitchie in western New South Wales in 1970, a male human skeleton wearing a necklace of 178 teeth from 49 different devils was found. The skeleton is estimated to be 7000 years old, and the necklace is believed to be much older than the skeleton. Archaeologist Josephine Flood believes the devil was hunted for its teeth and that this contributed to its extinction on mainland Australia. Owen and Pemberton note that few such necklaces have been found. Middens that contain devil bones are rare—two notable examples are Devil's Lair in the south-western part of Western Australia and Tower Hill in Victoria. In Tasmania, local Indigenous Australians and devils sheltered in the same caves. Tasmanian Aboriginal names for the devil recorded by Europeans include "tarrabah", "poirinnah", and "par-loo-mer-rer". Variations also exist, such as "Taraba" and "purinina".
It is a common belief that devils will eat humans. While they are known to eat dead bodies, there are prevalent myths that they eat living humans who wander into the bush. Despite outdated beliefs and exaggerations regarding their disposition, many, although not all, devils will remain still when in the presence of a human; some will also shake nervously. They can bite and scratch out of fear when held by a human, but a firm grip will cause them to remain still. Although they can be tamed, they are asocial, and are not considered appropriate as pets; they have an unpleasant odour, and neither demonstrate nor respond to affection.
Until recently, the devil was little studied by academics and naturalists. At the start of the 20th century, Hobart zoo operator Mary Roberts, who was not a trained scientist, was credited with changing people's attitudes towards, and encouraging scientific interest in, native animals such as the devil that had been seen as fearsome and abhorrent. Theodore Thomson Flynn was the first professor of biology in Tasmania, and carried out some research during the period around World War I. In the mid-1960s, Professor Guiler assembled a team of researchers and started a decade of systematic fieldwork on the devil, which is seen as the start of modern scientific study of it. However, the devil was still negatively depicted, including in tourism material. The first doctorate awarded for research into the devil came in 1991.
In captivity
Early attempts to breed Tasmanian devils in captivity had limited success. Mary Roberts bred a pair at Beaumaris Zoo (which she named Billy and Truganini) in 1913. However, although advised to remove Billy, Roberts found Truganini too distressed by his absence, and returned him. The first litter was presumed eaten by Billy, but a second litter in 1914 survived, after Billy was removed. Roberts wrote an article on keeping and breeding the devils for the London Zoological Society. Even by 1934, successful breeding of the devil was rare. In a study on the growth of young devils in captivity, some developmental stages were very different from those reported by Guiler. The pinnae were free on day 36, and eyes opened later, on days 115–121. In general, females tend to retain more stress after being taken into captivity than males.
Tasmanian devils were displayed in various zoos around the world from the 1850s onwards, and in the 1950s several animals were given to European zoos. In October 2005 the Tasmanian government sent four devils, two male and two female, to Copenhagen Zoo, following the birth of the first son of King Frederik X of Denmark and his Tasmanian-born wife Mary. Due to restrictions on their export by the Australian government, at the time these were the only devils known to be living outside Australia. In June 2013, due to the successes of the insurance population program, it was planned to send devils to other zoos around the world in a pilot program. San Diego Zoo Wildlife Alliance and Albuquerque Biopark were selected to participate in the program, and Wellington Zoo and Auckland Zoo soon followed. In the United States, four additional zoos have since been selected as part of the Australian government's Save the Tasmanian Devil program: the Fort Wayne Children's Zoo, the Los Angeles Zoo, the Saint Louis Zoo, and the Toledo Zoo. Captive devils are usually forced to stay awake during the day to cater to visitors, rather than following their natural nocturnal habits.
In popular culture
The devil is an iconic animal within Australia, and particularly associated with Tasmania. The animal is used as the emblem of the Tasmanian National Parks and Wildlife Service, and the former Tasmanian Australian rules football team which played in the Victorian Football League was known as the Devils. The Hobart Devils were once part of the National Basketball League. The devil has appeared on several commemorative coins in Australia over the years. Cascade Brewery in Tasmania sells a ginger beer with a Tasmanian devil on the label. In 2015, the Tasmanian devil was chosen as Tasmania's state emblem.
Tasmanian devils are popular with tourists, and the director of the Tasmanian Devil Conservation Park has described their possible extinction as "a really significant blow for Australian and Tasmanian tourism". There has also been a multimillion-dollar proposal to build a giant 19 m-high, 35 m-long devil in Launceston in northern Tasmania as a tourist attraction. Devils began to be used as ecotourism in the 1970s, when studies showed that the animals were often the only things known about Tasmania overseas, and suggested that they should therefore be the centrepiece of marketing efforts, resulting in some devils being taken on promotional tours.
The Tasmanian devil is probably best known internationally as the inspiration for the Looney Tunes cartoon character the Tasmanian Devil, or "Taz", introduced in 1954. Little known at the time, the loud, hyperactive cartoon character has little in common with the real-life animal. After a few shorts between 1957 and 1964, the character was retired until the 1990s, when he gained his own show, Taz-Mania, and again became popular. In 1997, a newspaper report noted that Warner Bros. had "trademarked the character and registered the name Tasmanian Devil", and that this trademark "was policed", including an eight-year legal case to allow a Tasmanian company to call a fishing lure "Tasmanian Devil". Debate followed, and a delegation from the Tasmanian government met with Warner Bros. Ray Groom, the Tourism Minister, later announced that a "verbal agreement" had been reached: an annual fee would be paid to Warner Bros. in return for the Government of Tasmania being able to use the image of Taz for "marketing purposes". This agreement later lapsed. In 2006, Warner Bros. permitted the Government of Tasmania to sell stuffed toys of Taz with profits funnelled into research on DFTD.
| Biology and health sciences | Marsupials | null |
31415 | https://en.wikipedia.org/wiki/Topaz | Topaz | Topaz is a silicate mineral made of aluminum and fluorine with the chemical formula Al2SiO4(F, OH)2. It is used as a gemstone in jewelry and other adornments. Common topaz in its natural state is colorless, though trace element impurities can make it pale blue or golden brown to yellow-orange. Topaz is often treated with heat or radiation to make it a deep blue, reddish-orange, pale green, pink, or purple.
Topaz is a nesosilicate mineral, and more specifically, an aluminosilicate mineral. It is one of the hardest naturally occurring minerals and has a relatively low index of refraction. It crystallizes in the orthorhombic crystal system with a dipyramidal crystal class.
It occurs in many places in the world. Some of the most popular places where topaz is sourced are Brazil and Russia. Topaz is often mined in open pit or alluvial settings.
Etymology
The word "topaz" is usually believed to be derived (via Old French: Topace and Latin: Topazius) from the Greek Τοπάζιος (Topázios) or Τοπάζιον (Topázion), from Τοπαζος. This is the ancient name of St. John's Island in the Red Sea which was difficult to find and from which a yellow stone (now believed to be chrysolite: yellowish olivine) was mined in ancient times. The name topaz was first applied to the mineral now known by that name in 1737. Ancient Sri Lanka (Tamraparni) exported topazes to Greece and ancient Egypt, which led to the etymologically related names of the island by Alexander Polyhistor (Topazius) and the early Egyptians (Topapwene) – "land of the Topaz". Pliny said that Topazos is a legendary island in the Red Sea and the mineral "topaz" was first mined there. Alternatively, the word topaz may be related to the Sanskrit word तपस् "tapas", meaning "heat" or "fire".
History
Nicols, the author of one of the first systematic treatises on minerals and gemstones, dedicated two chapters to the topic in 1652. In the Middle Ages, the name topaz was used to refer to any yellow gemstone, but in modern times it denotes only the silicate described above.
Many English translations of the Bible, including the King James Version, mention topaz. However, because these renderings all derive from the Septuagint's topazi[os], which referred to a yellow stone that was not topaz but probably chrysolite (chrysoberyl or peridot), topaz is likely not meant here.
An English superstition also held that topaz cured lunacy. The ancient Romans believed that topaz provided protection from danger while traveling. During the Middle Ages, it was believed that attaching the topaz to the left arm protected the owner from any curse and warded off the evil eye. It was also believed that wearing topaz increased body heat, which would enable people to relieve a cold or fever. In Europe during the Middle Ages, topaz was believed to enhance mental powers. In India, people believed topaz granted beauty, intelligence, and longevity when worn over the heart.
Gemstone
Topaz is a gemstone. In cut and polished form, it is used to make jewelry and other adornments. Lower-quality topaz is commonly used as an abrasive material due to its hardness, and it is used to produce refractory materials for high-temperature environments. Topaz can be used as a flux in steel production. Using topaz as a refractory material does raise some health and environmental concerns, due to the production of fluorine as a byproduct of calcining topaz.
Topaz is a part of the second rank of gemstones, or semiprecious stones, accompanying aquamarine, morganite, and tourmaline. The first rank of gemstones, or precious stones, includes ruby, sapphire, diamond, and emerald.
Orange topaz, also known as precious topaz, is the birthstone for the month of November, the symbol of friendship, and the state gemstone of the U.S. state of Utah. Blue topaz is the state gemstone of the US state of Texas. The 4th wedding anniversary gem is blue topaz and the 23rd is imperial topaz.
Synthetic topaz can be produced using a method that includes the thermal hydrolysis of SiO2 and AlF3; when these compounds are heated to temperatures of 750 to 850 °C, topaz is formed. Another method uses a combination of amorphous Al2O3, Na2SiF6, and water, which is heated to a temperature of 500 °C, put under a pressure of 4,000 bars, and left for 9 days.
To care for a topaz gemstone, it is best to avoid ultrasonic cleaners or steam, as these could produce small fractures within the crystal. Warm water with soap is the best way to clean it.
To choose an ethically sourced topaz gemstone, it is recommended to search for a stone that the seller knows the origin of. If the seller cannot produce information about the locality and mine that the topaz was collected from, it is likely that it was collected unethically.
Structure
Topaz occurs as an accessory mineral in felsic igneous, sedimentary, and hydrothermally altered rocks.
The crystal structure of topaz alternates between sheets of (F, OH)2O and O along (010) with Al3+ occupying the octahedral sites and Si4+ in the tetrahedral sites. Fluorine can be substituted by hydroxide in topaz by up to 30 mol.% in nature and hydroxide-dominating topaz can be made in laboratories but has not been found in nature.
On occasion, cavities can be found within topaz, filled with a liquid called brewsterlinite. Brewsterlinite was discovered by David Brewster upon heating a sample of topaz: after heating, the topaz lost mass, and through examination Brewster concluded that topaz was formed in a wet environment, creating these liquid-filled cavities. This liquid is a hydrocarbon with a refractive index of 1.13.
Topaz's crystal habit takes many forms: crystals range from long and slender to short and bulky, and terminations can be blunt, pyramidal, chisel-shaped, or wedge-shaped. The perfect {001} cleavage in topaz breaks no Si-O bonds within its structure, only Al-O and Al-F bonds; this cleavage is diagnostic for the mineral. The 2V optical angle in topaz can range from 48° to 69.5°: low fluorine content yields a smaller angle and high fluorine content a larger angle.
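Purely as an illustration of that trend, the sketch below maps a fluorine fraction onto a 2V angle by linear interpolation between the quoted endpoints; the linear form is an assumption made here for illustration, not an established calibration.

```python
# Illustrative only: linear interpolation between the 2V endpoints
# quoted above. The real fluorine-2V relationship need not be linear;
# this just encodes the stated trend.

V2_MIN, V2_MAX = 48.0, 69.5  # degrees: low-F and high-F ends (from the text)

def estimated_2v(f_fraction: float) -> float:
    """f_fraction: 0.0 = hydroxyl-rich (low F), 1.0 = fluorine-rich."""
    if not 0.0 <= f_fraction <= 1.0:
        raise ValueError("fraction must be between 0 and 1")
    return V2_MIN + f_fraction * (V2_MAX - V2_MIN)

print(estimated_2v(0.5))  # midpoint composition -> ~58.8 degrees
```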
Characteristics
Topaz in its natural state is colorless, often with a greyish cast. It also occurs in a golden brown to yellow color, which means it is sometimes confused with citrine, a less valuable gemstone. The specific gravity of all shades of topaz, however, means that it is considerably heavier than citrine (about 25% heavier for the same volume), and this difference in weight can be used to distinguish two stones of equal volume. Also, if the volume of a given stone can be determined, the weight it would have if it were topaz can be established and then checked with a sensitive scale. Likewise, glass stones are much lighter than equally sized topaz.
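The sketch below illustrates this weight check; the specific-gravity values are approximate handbook figures supplied here for illustration, not numbers from this article.

```python
# Sketch of the weight check described above. Specific gravities are
# approximate handbook values (topaz ~3.53, quartz ~2.65, glass ~2.5).

SG = {"topaz": 3.53, "citrine (quartz)": 2.65, "glass (typical)": 2.5}

def expected_mass_g(volume_cm3: float, specific_gravity: float) -> float:
    """Mass a stone of this volume would have if made of the material."""
    return volume_cm3 * specific_gravity

volume = 0.5  # cm^3, measured e.g. by water displacement
for name, sg in SG.items():
    print(f"{name:18s} -> {expected_mass_g(volume, sg):.2f} g")
# A stone weighing ~1.77 g at this volume is consistent with topaz;
# ~1.32 g points to citrine.
```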
A variety of impurities and treatments may make topaz wine red, pale gray, reddish-orange, pale green, or pink (rare), and opaque to translucent/transparent. The pink and red varieties come from chromium replacing aluminium in its crystalline structure.
Imperial topaz is yellow, pink (rare, if natural), or pink-orange. Brazilian imperial topaz can often have a bright yellow to deep golden brown hue, sometimes even violet. Many brown or pale topazes are treated to make them bright yellow, gold, pink, or violet colored. Some imperial topaz stones can fade from exposure to sunlight for an extended period of time. Naturally occurring blue topaz is quite rare. Typically, colorless, gray, or pale yellow and blue material is heat treated and irradiated to produce a more desired darker blue. Mystic topaz is a colorless topaz that has been artificially coated via a vapor deposition process giving it a rainbow effect on its surface.
Although very hard, topaz must be treated with greater care than some other minerals of similar hardness (such as corundum) because of a weakness of atomic bonding of the stone's molecules along one or another axial plane (whereas diamonds, for example, are composed of carbon atoms bonded to each other with equal strength along all of its planes). This gives topaz a tendency to break along such a cleavage plane if struck with sufficient force.
Topaz has a relatively low index of refraction for a gemstone, and so stones with large facets or tables do not sparkle as readily as stones cut from minerals with higher refractive indices, though quality colorless topaz sparkles and shows more "life" than similarly cut quartz. When given a typical "brilliant" cut, topaz may either show a sparkling table facet surrounded by dead-looking crown facets or a ring of sparkling crown facets with a dull well-like table. It also takes an exceptionally fine polish, and can sometimes be distinguished from citrine by its slippery feel alone (quartz cannot be polished to this level of smoothness).
Another method of distinguishing topaz from quartz is by placing the unset stone in a solution of bromoform or methylene iodide. Quartz will invariably float in these solutions, whereas topaz will sink.
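A sketch of that float/sink test follows; the liquid densities (bromoform ~2.89 g/cm³, methylene iodide ~3.32 g/cm³) and mineral specific gravities are approximate handbook values supplied here, not figures from this article.

```python
# Sketch of the heavy-liquid test described above. All densities are
# approximate handbook values in g/cm^3.

LIQUIDS = {"bromoform": 2.89, "methylene iodide": 3.32}
STONES = {"quartz/citrine": 2.65, "topaz": 3.53}

for liquid, rho_liquid in LIQUIDS.items():
    for stone, rho_stone in STONES.items():
        behaviour = "sinks" if rho_stone > rho_liquid else "floats"
        print(f"{stone} in {liquid}: {behaviour}")
# quartz floats in both liquids; topaz sinks in both.
```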
Localities and occurrence
Topaz is commonly associated with silicic igneous rocks of the granite and rhyolite type. It typically crystallizes in granitic pegmatites or in vapor cavities in rhyolite lava flows, including those at Topaz Mountain in western Utah and Chivinar in South America. It can be found with fluorite and cassiterite in various areas including the Ural and Ilmensky mountains of Russia, as well as in Afghanistan, Sri Lanka, the Czech Republic, Germany, Norway, Pakistan, Italy, Sweden, Japan, Brazil, Mexico; Flinders Island, Australia; Nigeria, Ukraine and the United States. Topaz was found around the 1700s in a pegmatite formation in the central Ural Mountains of Russia.
Brazil is one of the largest producers of topaz; some clear topaz crystals from Brazilian pegmatites can reach boulder size and weigh hundreds of pounds. The Topaz of Aurangzeb, observed by Jean Baptiste Tavernier, weighed . The American Golden Topaz, a more recent gem, weighed . Large, vivid blue topaz specimens from the St. Anns mine in Zimbabwe were found in the late 1980s. Colorless and light-blue varieties of topaz are found in Precambrian granite in Mason County, Texas, within the Llano Uplift, although there is no commercial mining of topaz in that area. It is possible to synthesize topaz.
Mining
Large-scale topaz mining typically uses open pit and underground mining to extract the gem. The waste material is discarded using large machines to transport it away while the valuable ore is washed and sorted to recover the topaz gems. In smaller-scale mines, dry sieving is used in alluvial environments by shoveling the material into sieves to separate the gems from unwanted dust and debris. The topaz can then be selected by hand from the remaining material. Mined topaz is then sent to be processed for use in jewelry by polishing the gem and treating it to achieve the desired color.
Mining for topaz can cause some environmental concerns, mostly associated with larger-scale operations. The introduction of a large open pit mine into an environment leads to modification of the surrounding land to make it accessible to workers. After such mines are exhausted, they are often refilled with loose sediments left over from the mining process; these loose sediments can be washed away to other areas, cutting off water features, destroying farmland, and creating a threat of landslides. The pollution produced by mining can degrade the surrounding environment. Deforestation to clear land for the mine, along with the machinery used during the mining process, adds greenhouse gases to the atmosphere; it also removes habitats and biodiversity from a large area of natural space. These disruptions to the ecosystem can be challenging for wildlife and local populations. Water, a large component of mining operations, can be drawn away from neighboring communities, creating shortages. Tailings left over from the mining process can leach contaminants into nearby water systems and can contaminate the drinking water of local communities.
Humans can be affected by gem mining through dangers in the mines and inadequate compensation. Accidents such as mine collapses and machinery malfunctions can put human life in danger, and those working in the mines can be exposed to harmful chemicals and heavy metals that can affect their health. The income from gem mining can also be distributed unequally between landowners, laborers, and the operators of the mine. Illegal mining operations may pay miners more, but they have fewer regulations and more dangerous working environments.
| Physical sciences | Silicate minerals | Earth science |
31418 | https://en.wikipedia.org/wiki/Thermobaric%20weapon | Thermobaric weapon | A thermobaric weapon, also called an aerosol bomb, or a vacuum bomb, is a type of explosive munition that works by dispersing an aerosol cloud of gas, liquid or powdered explosive. The fuel is usually a single compound, rather than a mixture of multiple substances. Many types of thermobaric weapons can be fitted to hand-held launchers, and can also be launched from airplanes.
Terminology
The term thermobaric is derived from the Greek words for 'heat' and 'pressure': thermobarikos (θερμοβαρικός), from thermos (θερμός) 'hot' + baros (βάρος) 'weight, pressure' + suffix -ikos (-ικός) '-ic'.
Other terms used for the family of weapons are high-impulse thermobaric weapons, heat and pressure weapons, vacuum bombs, and fuel-air explosives (FAE).
Mechanism
Most conventional explosives consist of a fuel–oxidiser premix, but thermobaric weapons consist only of fuel and as a result are significantly more energetic than conventional explosives of equal weight. Their reliance on atmospheric oxygen makes them unsuitable for use under water, at high altitude, and in adverse weather. They are, however, considerably more effective when used in enclosed spaces such as tunnels, buildings, and non-hermetically sealed field fortifications (foxholes, covered slit trenches, bunkers).
The initial explosive charge detonates as it hits its target, opening the container and dispersing the fuel mixture as a cloud. The typical blast wave of a thermobaric weapon lasts significantly longer than that of a conventional explosive.
In contrast to an explosive that uses oxidation in a confined region to produce a blast front emanating from a single source, a thermobaric flame front accelerates to a large volume, which produces pressure fronts within the mixture of fuel and oxidant and then also in the surrounding air.
Thermobaric explosives apply the principles underlying accidental unconfined vapor cloud explosions, which include those from dispersions of flammable dusts and droplets. Such dust explosions happened most often in flour mills and their storage containers, and in grain bins (corn silos etc.), and later in coal mines, prior to the 20th century. Accidental unconfined vapor cloud explosions now happen most often in partially or completely empty oil tankers, refinery tanks, and vessels, as in the Buncefield fire in the United Kingdom in 2005, where the blast wave woke people many kilometres from its centre.
A typical weapon consists of a container packed with a fuel substance, the centre of which has a small conventional-explosive "scatter charge". Fuels are chosen on the basis of the exothermicity of their oxidation, ranging from powdered metals, such as aluminium or magnesium, to organic materials, possibly with a self-contained partial oxidant. The most recent development involves the use of nanofuels.
A thermobaric bomb's effective yield depends on a combination of a number of factors such as how well the fuel is dispersed, how rapidly it mixes with the surrounding atmosphere and the initiation of the igniter and its position relative to the container of fuel. In some designs, strong munitions cases allow the blast pressure to be contained long enough for the fuel to be heated well above its autoignition temperature so that once the container bursts, the superheated fuel autoignites progressively as it comes into contact with atmospheric oxygen.
Conventional upper and lower limits of flammability apply to such weapons. Close in, blast from the dispersal charge, which compresses and heats the surrounding atmosphere, has some influence on the lower limit. The upper limit has been demonstrated to strongly influence the ignition of fogs above pools of oil. That weakness may be eliminated by designs in which the fuel is preheated well above its ignition temperature, so that its cooling during dispersion still results in a minimal ignition delay on mixing. The continual combustion of the outer layer of fuel molecules, as they come into contact with the air, generates additional heat which maintains the temperature of the interior of the fireball and thus sustains the detonation.
In confinement, a series of reflected shock waves is generated, which maintain the fireball and can extend its duration to between 10 and 50 ms as exothermic recombination reactions occur. Further damage can result as the gases cool and pressure drops sharply, leading to a partial vacuum. This rarefaction effect has given rise to the misnomer "vacuum bomb". Piston-type afterburning is also believed to occur in such structures as flame fronts accelerate through them.
Fuel–air explosive
A fuel–air explosive (FAE) device consists of a container of fuel and two separate explosive charges. After the munition is dropped or fired, the first explosive charge bursts open the container at a predetermined height and disperses the fuel in a cloud that mixes with atmospheric oxygen (the size of the cloud varies with the size of the munition). The cloud of fuel flows around objects and into structures. The second charge then detonates the cloud, creating a massive blast wave. The blast wave can destroy reinforced buildings and equipment and can kill or injure people. The antipersonnel effect of the blast wave is more severe in foxholes, tunnels, and enclosed spaces such as bunkers and caves.
Effects
Conventional countermeasures such as barriers (sandbags) and personnel armour are not effective against thermobaric weapons. A Human Rights Watch report of 1 February 2000 quotes a study made by the US Defense Intelligence Agency:
According to a US Central Intelligence Agency study,
Another Defense Intelligence Agency document speculates that, because the "shock and pressure waves cause minimal damage to brain tissue... it is possible that victims of FAEs are not rendered unconscious by the blast, but instead suffer for several seconds or minutes while they suffocate".
Development
German
The first attempts occurred during the First World War, when incendiary shells (German: Brandgranate) used a slow-burning but intense material, such as tar-impregnated tissue and gunpowder dust. These shells burned for approximately two minutes after exploding and spread the burning material in every direction.
In World War II, the German Wehrmacht attempted to develop a vacuum bomb, under the direction of the Austrian physicist Mario Zippermayr.
A weapons specialist (K. L. Bergmann) claimed that the weapon had been tested on the Eastern Front under the code name "Taifun B" and was ready for deployment at the time of the Normandy invasion in June 1944. Canisters of a charcoal, aluminium and aviation-fuel mixture would reportedly have been launched, followed by a secondary launch of incendiary rockets, but the equipment was destroyed by a Western artillery barrage minutes before being fired, shortly before Operation Cobra.
United States
FAEs were developed by the United States for use in the Vietnam War. The CBU-55 FAE fuel-air cluster bomb was mostly developed by the US Naval Weapons Center at China Lake, California.
Current American FAE munitions include the following:
BLU-73 FAE I
BLU-95 (FAE-II)
BLU-96 (FAE-II)
CBU-72 FAE I
AGM-114 Hellfire missile
XM1060 grenade
SMAW-NE round for the Mk 153 SMAW rocket launcher
The XM1060 40-mm grenade is a small-arms thermobaric device that was fielded by US forces in Afghanistan in 2002 and proved popular against targets in enclosed spaces, such as caves. Since the 2003 invasion of Iraq, the US Marine Corps has introduced a thermobaric "Novel Explosive" (SMAW-NE) round for the Mk 153 SMAW rocket launcher. One team of Marines reported that they had destroyed a large one-story masonry-type building with one round from . The AGM-114N Hellfire II uses a Metal Augmented Charge (MAC) warhead, which contains a thermobaric explosive fill of aluminium powder, coated or mixed with PTFE, layered between the charge casing and a PBXN-112 explosive mixture. When the PBXN-112 detonates, the aluminium mixture is dispersed and burns rapidly. The result is sustained high pressure that is extremely effective against people and structures.
Soviet, later Russian
Following the FAEs developed by the United States for use in the Vietnam War, Soviet scientists quickly developed their own FAE weapons. Research and development has continued since the Soviet–Afghan War, and Russian forces now field a wide array of third-generation FAE warheads, such as the RPO-A. The Russian armed forces have developed thermobaric ammunition variants for several of their weapons, such as the TBG-7V thermobaric grenade with a lethality radius of , which can be launched from an RPG-7 rocket-propelled grenade launcher. The GM-94 is a pump-action grenade launcher designed mainly to fire thermobaric grenades for close combat. The grenade weighs and contains of explosive; its lethality radius is , but due to the deliberate "fragmentation-free" design of the grenade, a distance of is considered safe.
The RPO-A and upgraded RPO-M are infantry-portable rocket propelled grenades designed to fire thermobaric rockets. The RPO-M, for instance, has a thermobaric warhead with a TNT equivalence of and destructive capabilities similar to a high-explosive fragmentation artillery shell. The RShG-1 and the RShG-2 are thermobaric variants of the RPG-27 and RPG-26 respectively. The RShG-1 is the more powerful variant, with its warhead having a lethality radius and producing about the same effect as of TNT. The RMG is a further derivative of the RPG-26 that uses a tandem-charge warhead, with the precursor high-explosive anti-tank (HEAT) warhead blasting an opening for the main thermobaric charge to enter and detonate inside. The RMG's precursor HEAT warhead can penetrate 300 mm of reinforced concrete or over 100 mm of rolled homogeneous armour, thus allowing the -diameter thermobaric warhead to detonate inside.
Other examples include the semi-automatic command to line of sight (SACLOS) or millimeter-wave active radar homing guided thermobaric variants of the 9M123 Khrizantema, the 9M133F-1 thermobaric warhead variant of the 9M133 Kornet, and the 9M131F thermobaric warhead variant of the 9K115-2 Metis-M, all of which are anti-tank missiles. The Kornet has since been upgraded to the Kornet-EM, and its thermobaric variant has a maximum range of and has a TNT equivalence of . The 9M55S thermobaric cluster warhead rocket was built to be fired from the BM-30 Smerch MLRS. A dedicated carrier of thermobaric weapons is the purpose-built TOS-1, a 24-tube MLRS designed to fire thermobaric rockets. A full salvo from the TOS-1 will cover a rectangle . The Iskander-M theatre ballistic missile can also carry a thermobaric warhead.
Many Russian Air Force munitions have thermobaric variants. The S-8 rocket has the S-8DM and S-8DF thermobaric variants, and the larger S-13 has the S-13D and S-13DF variants. The S-13DF's warhead weighs only , but its power is equivalent to of TNT. The KAB-500-OD variant of the KAB-500KR has a thermobaric warhead. The ODAB-500PM and ODAB-500PMV unguided bombs each carry a fuel–air explosive; the ODAB-1500 is a larger version of the bomb. The KAB-1500S GLONASS/GPS-guided bomb also has a thermobaric variant; its fireball covers a radius and its lethal zone a radius. The 9M120 Ataka-V and the 9K114 Shturm ATGMs both have thermobaric variants.
In September 2007, Russia exploded the largest thermobaric weapon ever made, and claimed that its yield was equivalent to that of a nuclear weapon. Russia named this particular ordnance the "Father of All Bombs" in response to the American-developed Massive Ordnance Air Blast (MOAB) bomb, which has the backronym "Mother of All Bombs" and once held the title of the most powerful non-nuclear weapon in history.
Iraq
Iraq was alleged to possess the technology as early as 1990.
Israel
Israel was alleged to possess thermobaric technology as early as 1990, according to Pentagon sources.
Spain
In 1983, a military research programme was launched in collaboration between the Spanish Ministry of Defence (Directorate General of Armament and Material, DGAM) and Explosivos Alaveses (EXPAL), a subsidiary of Unión Explosivos Río Tinto (ERT). The goal of the programme was to develop a thermobaric bomb, the BEAC (Bomba Explosiva de Aire-Combustible). For safety and confidentiality reasons, a prototype was tested successfully at a foreign location. The Spanish Air and Space Force has an undetermined number of BEACs in its inventory.
China
In 1996, the People's Liberation Army (PLA) began development of the PF-97, a portable thermobaric rocket launcher based on the Soviet RPO-A Shmel. Introduced in 2000, it is reported to weigh 3.5 kg and to contain 2.1 kg of thermobaric filler. An improved version, the PF-97A, was introduced in 2008.
China is reported to have other thermobaric weapons, including bombs, grenades and rockets. Research continues on thermobaric weapons capable of reaching 2,500 degrees.
Brazil
In 2004, at the request of the Estado Maior da Aeronáutica (Military Staff of Aeronautics) and the Diretoria de Material Aeronáutico e Bélico (Board of Aeronautical and Military Equipment), the Instituto de Aeronáutica e Espaço (Institute of Aeronautics and Space) began developing a thermobaric bomb called Trocano.
Trocano is a thermobaric weapon similar in design to the United States' MOAB weapon or Russia's FOAB. Like the US weapon, the Trocano was designed to be pallet-loaded into a C-130 Hercules aircraft, and deployed using a parachute to drag it from the C-130's cargo bay and separate the bomb from its pallet.
United Kingdom
In 2009, the British Ministry of Defence (MoD) acknowledged that Army Air Corps (AAC) AgustaWestland Apaches had used AGM-114 Hellfire missiles purchased from the United States against Taliban forces in Afghanistan. The MoD stated that 20 missiles, described as "blast fragmentation warheads", were used in 2008 and a further 20 in 2009. MoD officials told Guardian journalist Richard Norton-Taylor that the missiles were "particularly designed to take down structures and kill everyone in the buildings", as AAC AgustaWestland Apaches had previously been equipped with weapon systems deemed ineffective against the Taliban. The MoD also stated that "British pilots' rules of engagement were strict and everything a pilot sees from the cockpit is recorded."
In 2018, the MoD accidentally divulged the details of General Atomics MQ-9 Reapers utilised by the Royal Air Force (RAF) during the Syrian civil war, which revealed that the drones were equipped with AGM-114 Hellfire missiles. The MoD had sent a report to a British publication, Drone Wars, in response to a freedom of information request. The report stated that AGM-114N Hellfire missiles, which contain a thermobaric warhead, had been used by RAF attack drones in Syria.
India
Based on the high-explosive squash head (HESH) round, a 120 mm thermobaric round was developed in the 2010s by the Indian Ministry of Defence. This round packs thermobaric explosive into a tank shell to increase effectiveness against enemy bunkers and light armoured vehicles.
The design and development of the round was undertaken by the Armament Research and Development Establishment (ARDE). The rounds were designed for the Arjun MBT and contain a fuel-rich explosive composition known as a thermobaric explosive. As the name implies, the shells, on hitting a target, produce blast overpressure and heat energy for hundreds of milliseconds. The overpressure and heat damage fortified structures such as bunkers and buildings, as well as soft targets such as personnel and light armoured vehicles.
Serbia
The company Balkan Novoteh, formed in 2011, offers the TG-1 thermobaric hand grenade.
The Military Technical Institute in Belgrade has developed a technology for producing cast-cured thermobaric PBX explosives, and the Factory of Explosives and Pyrotechnics TRAYAL Corporation has recently begun producing cast-cured thermobaric PBX formulations.
Ukraine
In 2017, Ukroboronprom's Scientific Research Institute for Chemical Products, in conjunction with a partner firm (the Artem Holding Company), announced its new thermobaric grenades. These can be fired from a grenade launcher, a demonstration of which was witnessed by Oleksandr Turchynov. The grenades, of approximately 600 grams, "create a two second fire cloud with a volume of not less than 13 m³, inside of which the temperature reaches 2,500 degrees. This temperature allows not only for the destruction of the enemy, but are also able to disable lightly armored vehicles." The firm showed them at the Azerbaijan International Defense Exhibition in 2018.
In 2024, Ukraine started using drones rigged with thermobaric explosives to strike Russian positions in the Russo-Ukrainian War.
History
Attempted prohibitions
In 1980, Mexico, Switzerland, and Sweden presented a joint motion to the United Nations to prohibit the use of thermobaric weapons, to no avail.
The United Nations Institute for Disarmament Research categorises these weapons as "enhanced blast weapons", and there was pressure to regulate them around 2010, again to no avail.
Military use
United States
FAEs such as the first-generation CBU-55 fuel–air weapons saw extensive use in the Vietnam War. A second generation of FAE weapons was based on those and was used by the United States in Iraq during Operation Desert Storm. A total of 254 CBU-72s were dropped by the United States Marine Corps, mostly from A-6Es. They were targeted against minefields and personnel in trenches, but were more useful as a psychological weapon.
The US military used thermobaric weapons in Afghanistan. On 3 March 2002, a single laser guided thermobaric bomb was used by the United States Air Force against cave complexes in which Al-Qaeda and Taliban fighters had taken refuge in the Gardez region of Afghanistan. The SMAW-NE was used by the US Marines during the First Battle of Fallujah and the Second Battle of Fallujah. The AGM-114N Hellfire II was first used by US forces in 2003 in Iraq.
Soviet Union
FAEs were reportedly used against China in the 1969 Sino-Soviet border conflict.
The TOS-1 system was test fired in Panjshir Valley during the Soviet–Afghan War in the late 1980s. MiG-27 attack aircraft of the 134th APIB used ODAB-500S/P fuel–air bombs against Mujahideen forces in Afghanistan, but they were found to be unreliable and dangerous to ground crew.
Russia
Russian military forces reportedly used ground-delivered thermobaric weapons during the battles for Grozny (in the first and second Chechen Wars) to attack dug-in Chechen fighters. Both the TOS-1 heavy MLRS and the RPO-A Shmel shoulder-fired rocket system are reported to have been used during the Chechen Wars. Russia used the RPO-A Shmel in the First Battle of Grozny, where it was judged a very useful round.
It was thought that, during the September 2004 Beslan school hostage crisis, a number of handheld thermobaric weapons were used by the Russian Armed Forces in their efforts to retake the school. The RPO-A, and either the TBG-7V thermobaric rocket fired from an RPG-7 or rockets from the RShG-1 or RShG-2, are claimed to have been used by the Spetsnaz during the initial storming of the school. At least three and as many as nine RPO-A casings were later found at the positions of the Spetsnaz. In July 2005, the Russian government admitted to the use of the RPO-A during the crisis.
During the 2022 Russian invasion of Ukraine, CNN reported that Russian forces were moving thermobaric weapons into Ukraine. On 28 February 2022, Ukraine's ambassador to the United States accused Russia of deploying a thermobaric bomb. Russia has claimed to have used the weapon in March 2024 against Ukrainian soldiers in an unspecified location (denied by Ukraine), and during the August 2024 Ukrainian incursion into Kursk Oblast.
United Kingdom
During the War in Afghanistan, British forces, including the Army Air Corps and Royal Air Force, used thermobaric AGM-114N Hellfire missiles against the Taliban. In the Syrian civil war, British military drones used AGM-114N Hellfire missiles; in the first three months of 2018, British drones fired 92 Hellfire missiles in Syria.
Israel
A report by Human Rights Watch claimed that Israel has used thermobaric weaponry in the past, including during the 2008–2009 conflict in Gaza. Euro-Med Human Rights Monitor states that Israel also appears to be using thermobaric weaponry in the 2023 Israel–Hamas War. Both organizations claim that the use of this weaponry in densely populated neighborhoods violates international humanitarian law due to its damaging effects on civilians and civilian structures. The Eurasian Times reported that an Israeli AH-64D Apache attack helicopter was photographed with a "mystery" warhead with a red band, speculated to be a thermobaric warhead capable of destroying Hamas tunnels and multi-story buildings.
Syria
Reports by rebel fighters of the Free Syrian Army claim that the Syrian Air Force used such weapons against residential areas occupied by the rebel fighters, such as during the Battle of Aleppo and in Kafar Batna. Others contend that in 2012 the Syrian government used such a bomb in Azaz. A United Nations panel of human rights investigators reported that the Syrian government had used thermobaric bombs against the rebellious town of Al-Qusayr in March 2013.
The Russian and Syrian governments have used thermobaric bombs and other thermobaric munitions during the Syrian civil war against insurgents and insurgent-held civilian areas.
Ukraine
Mikhail Tolstykh, a controversial figure and high-ranking pro-Russian officer in the War in Donbass, was killed on 8 February 2017 at his office in Donetsk by an RPO-A rocket fired by members of the Security Service of Ukraine. In March 2023, soldiers from Ukraine's 59th Motorised Brigade showed off the destruction of a derelict Russian infantry fighting vehicle by a thermobaric RGT-27S2 hand grenade delivered by a Mavic 3 drone.
Non-state actor use
Thermobaric and fuel–air explosives have been used in guerrilla warfare since the 1983 Beirut barracks bombing in Lebanon, which used a gas-enhanced explosive mechanism, probably propane, butane, or acetylene. The explosive used by the bombers in the 1993 World Trade Center bombing in the United States incorporated the FAE principle, using three tanks of bottled hydrogen gas to enhance the blast.
Jemaah Islamiyah bombers used a shock-dispersed solid fuel charge, based on the thermobaric principle, to attack the Sari nightclub during the 2002 Bali bombings.
International law
International law does not prohibit the use of thermobaric munitions, fuel-air explosive devices, or vacuum bombs against military targets. To date, all attempts to regulate or restrict thermobaric weapons have failed.
According to some scholars, thermobaric weapons are not intrinsically indiscriminate, as they are often engineered for precision targeting. This precision offers humanitarian advantages by potentially minimizing collateral damage, and also lessens the number of munitions needed to engage the chosen military targets effectively. Nonetheless, authors holding this view recommend that the use of thermobaric weapons in populated areas be minimised due to their wide-area impact and multiple harm mechanisms.
In media
In the 1995 film Outbreak, a thermobaric weapon (referred to as a fuel air bomb) is used to destroy an African village to keep the perfect biological weapon (a virus) a secret, and later nearly used to wipe out a US town to keep the original virus intact.
| Technology | Explosive weapons | null |
31422 | https://en.wikipedia.org/wiki/Talc | Talc | Talc, or talcum, is a clay mineral composed of hydrated magnesium silicate, with the chemical formula Mg3Si4O10(OH)2. Talc in powdered form, often combined with corn starch, is used as baby powder. This mineral is used as a thickening agent and lubricant. It is an ingredient in ceramics, paints, and roofing material. It is a main ingredient in many cosmetics. It occurs as foliated to fibrous masses, and in an exceptionally rare crystal form. It has a perfect basal cleavage and an uneven flat fracture, and it is foliated with a two-dimensional platy form.
The Mohs scale of mineral hardness, based on scratch hardness comparison, defines value 1 as the hardness of talc, the softest mineral. When scraped on a streak plate, talc produces a white streak, though this indicator is of little importance, because most silicate minerals produce a white streak. Talc is translucent to opaque, with colors ranging from whitish grey to green with a vitreous and pearly luster. Talc is not soluble in water, and is slightly soluble in dilute mineral acids.
Soapstone is a metamorphic rock composed predominantly of talc.
Etymology
The word talc derives, via Medieval Latin talcum and Arabic ṭalq, from Persian tālk. In ancient times, the word was used for various related minerals, including talc, mica, and selenite.
Formation
Talc dominantly forms from the metamorphism of magnesian minerals such as serpentine, pyroxene, amphibole, and olivine, in the presence of carbon dioxide and water. This is known as "talc carbonation" or "steatization" and produces a suite of rocks known as talc carbonates.
Talc is primarily formed by hydration and carbonation of serpentine, by this reaction:
2 Mg3Si2O5(OH)4 (serpentine) + 3 CO2 → Mg3Si4O10(OH)2 (talc) + 3 MgCO3 (magnesite) + 3 H2O
Talc can also be formed via a reaction between dolomite and silica, which is typical of skarnification of dolomites by silica-flooding in contact metamorphic aureoles:
3 CaMg(CO3)2 (dolomite) + 4 SiO2 (silica) + H2O → Mg3Si4O10(OH)2 (talc) + 3 CaCO3 (calcite) + 3 CO2
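As a worked example of the stoichiometry, the sketch below computes, from standard atomic masses, how much CO2 the serpentine-carbonation reaction above fixes per tonne of serpentine. The calculation is illustrative only.

```python
# A minimal sketch: stoichiometry of the serpentine-carbonation reaction
#   2 Mg3Si2O5(OH)4 + 3 CO2 -> Mg3Si4O10(OH)2 + 3 MgCO3 + 3 H2O
# computing how much CO2 is fixed per tonne of serpentine.
# Atomic masses are standard values; the result is illustrative.

ATOMIC_MASS = {"Mg": 24.305, "Si": 28.086, "O": 15.999, "H": 1.008, "C": 12.011}

def molar_mass(formula: dict) -> float:
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

serpentine = molar_mass({"Mg": 3, "Si": 2, "O": 9, "H": 4})   # Mg3Si2O5(OH)4
co2 = molar_mass({"C": 1, "O": 2})

# 3 mol CO2 are consumed per 2 mol serpentine:
kg_co2_per_tonne = 1000.0 * (3 * co2) / (2 * serpentine)
print(f"serpentine molar mass: {serpentine:.1f} g/mol")
print(f"CO2 fixed: ~{kg_co2_per_tonne:.0f} kg per tonne of serpentine")
```

The result, roughly 240 kg of CO2 per tonne of serpentine, gives a feel for the scale of carbon uptake in natural talc carbonation.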
Talc can also be formed from magnesium chlorite and quartz in blueschist and eclogite metamorphism by the following metamorphic reaction:
chlorite + quartz → kyanite + talc + water
In this reaction, the ratio of talc and kyanite depends on aluminium content, with more aluminous rocks favoring production of kyanite. This is typically associated with high-pressure, low-temperature minerals such as phengite, garnet, and glaucophane within the lower blueschist facies. Such rocks are typically white, friable, and fibrous, and are known as whiteschist.
Talc is also found as a diagenetic mineral in sedimentary rocks, where it can form from the transformation of metastable hydrated magnesium-clay precursors such as kerolite, sepiolite, or stevensite that can precipitate from marine and lake water in certain conditions.
Structure
Talc is a trioctahedral layered mineral; its structure is similar to pyrophyllite, but with magnesium in the octahedral sites of the composite layers. The crystal structure of talc is described as TOT, meaning that it is composed of parallel TOT layers bonded to each other only by weak van der Waals forces. The TOT layers in turn consist of two tetrahedral sheets (T) strongly bonded to the two faces of a single trioctahedral sheet (O). It is the weak bonding between TOT layers that gives talc its perfect basal cleavage and softness.
The tetrahedral sheets consist of silica tetrahedra, which are silicon ions surrounded by four oxygen ions. The tetrahedra each share three of their four oxygen ions with neighboring tetrahedra to produce a hexagonal sheet. The remaining oxygen ion (the apical oxygen ion) is available to bond with the trioctahedral sheet.
The trioctahedral sheet has the structure of a sheet of the mineral brucite. Apical oxygens take the place of some of the hydroxyl ions that would be present in a brucite sheet, bonding the tetrahedral sheets tightly to the trioctahedral sheet.
The tetrahedral sheets carry a negative charge, since their bulk composition is Si4O10^4−. The trioctahedral sheet carries an equal positive charge, since its bulk composition is Mg3(OH)2^4+. The combined TOT layer is thus electrically neutral.
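A quick arithmetic check of the charge balance described above, using the formal ionic charges (Si 4+, Mg 2+, O 2−, OH 1−):

```python
# Quick check that the talc TOT layer is electrically neutral, using
# formal ionic charges (Si4+, Mg2+, O2-, OH-). Illustrative only.

tetrahedral = 4 * (+4) + 10 * (-2)         # Si4O10 -> -4
octahedral = 3 * (+2) + 2 * (-1)           # Mg3(OH)2 -> +4
print("tetrahedral sheets:", tetrahedral)   # -4
print("octahedral sheet:  ", octahedral)    # +4
print("TOT layer total:   ", tetrahedral + octahedral)  # 0
```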
Because the hexagons in the T and O sheets are slightly different in size, the sheets are slightly distorted when they bond into a TOT layer. This breaks the hexagonal symmetry and reduces it to monoclinic or triclinic symmetry. However, the original hexagonal symmetry is discernible in the pseudotrigonal character of talc crystals.
Occurrence
Talc is a common metamorphic mineral in metamorphic belts that contain ultramafic rocks, such as soapstone (a high-talc rock), and within whiteschist and blueschist metamorphic terranes. Prime examples of whiteschists include the Franciscan Metamorphic Belt of the western United States, the western European Alps especially in Italy, certain areas of the Musgrave Block, and some collisional orogens such as the Himalayas, which stretch along Pakistan, India, Nepal, and Bhutan.
Talc carbonate ultramafics are typical of many areas of the Archaean cratons, notably the komatiite belts of the Yilgarn Craton in Western Australia. Talc-carbonate ultramafics are also known from the Lachlan Fold Belt, eastern Australia, from Brazil, the Guiana Shield, and from the ophiolite belts of Turkey, Oman, and the Middle East.
China is the key world talc- and steatite-producing country, with an output of about 2.2 million tonnes (2016), accounting for 30% of total global output. The other major producers are Brazil (12%), India (11%), the U.S. (9%), France (6%), Finland (4%), Italy, Russia, Canada, and Austria (2% each).
Notable economic talc occurrences include the Mount Seabrook talc mine, Western Australia, formed upon a polydeformed, layered ultramafic intrusion. The France-based Luzenac Group is the world's largest supplier of mined talc. Its largest talc mine at Trimouns near Luzenac in southern France produces 400,000 tonnes of talc per year.
Conflict mineral
Extraction in disputed areas of Nangarhar province, Afghanistan, has led the international monitoring group Global Witness to declare talc a conflict resource, as the profits are used to fund armed confrontation between the Taliban and Islamic State.
Uses
Talc is used in many industries, including paper making, plastic, paint and coatings (e.g. for metal casting molds), rubber, food, electric cable, pharmaceuticals, cosmetics, and ceramics. A coarse grayish-green high-talc rock is soapstone or steatite, used for stoves, sinks, electrical switchboards, etc. It is often used for surfaces of laboratory table tops and electrical switchboards because of its resistance to heat, electricity, and acids.
In finely ground form, talc finds use as a cosmetic (talcum powder), as a lubricant, and as a filler in paper manufacture. It is used to coat the insides of inner tubes and rubber gloves during manufacture to keep the surfaces from sticking. Talcum powder, with heavy refinement, has been used in baby powder, an astringent powder used to prevent diaper rash (nappy rash). The American Academy of Pediatrics recommends that parents avoid using baby powder because it poses a risk of respiratory problems, including breathing trouble and serious lung damage if inhaled. The small size of the particles makes it difficult to keep them out of the air while applying the powder. Zinc oxide-based ointments are a much safer alternative.
Soapstone (massive talc) is often used as a marker for welding or metalworking.
Talc is also used as food additive or in pharmaceutical products as a glidant. In medicine, talc is used as a pleurodesis agent to prevent recurrent pleural effusion or pneumothorax. In the European Union, the additive number is E553b. Talc may be used in the processing of white rice as a buffing agent in the polishing stage.
Due to its low shear strength, talc is one of the oldest known solid lubricants. Also, limited use is made of talc as a friction-reducing additive in lubricating oils.
Talc is widely used in the ceramics industry in both bodies and glazes. In low-fire art-ware bodies, it imparts whiteness and increases thermal expansion to resist crazing. In stonewares, small percentages of talc are used to flux the body and therefore improve strength and vitrification. It is a source of MgO flux in high-temperature glazes (to control melting temperature). It is also employed as a matting agent in earthenware glazes and can be used to produce magnesia mattes at high temperatures.
ISO standard for quality (ISO 3262)
Patents are pending on the use of magnesium silicate as a cement substitute. Its production requirements are less energy-intensive than ordinary Portland cement (at a heating requirement of around 650 °C for talc compared to 1500 °C for limestone to produce Portland cement), while it absorbs far more carbon dioxide as it hardens. This results in a negative carbon footprint overall, as the cement substitute removes 0.6 tonnes of CO2 per tonne used. This contrasts with a positive carbon footprint of 0.4 tonnes per tonne of conventional cement.
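Taken together, the quoted figures imply a net swing of about one tonne of CO2 for every tonne of Portland cement replaced, as this small calculation shows:

```python
# Net CO2 change when one tonne of Portland cement is replaced by the
# magnesium-silicate substitute, using the figures quoted above.

co2_portland = +0.4    # tonnes CO2 emitted per tonne of Portland cement
co2_substitute = -0.6  # tonnes CO2 absorbed per tonne of substitute

swing = co2_portland - co2_substitute
print(f"Net reduction: {swing:.1f} t CO2 per tonne of cement replaced")  # 1.0
```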
Talc is used in the production of materials widely used in building interiors, such as base-content paints in wall coatings. Other areas that use talc to a great extent are organic agriculture, the food industry, cosmetics, and hygiene products such as baby powder and detergent powder.
Talc is sometimes used as an adulterant to illegal heroin, to expand volume and weight and thereby increase its street value. With intravenous use, it may lead to pulmonary talcosis, a granulomatous inflammation in the lungs.
Sterile talc powder
Sterile talc powder (NDC 63256-200-05) is a sclerosing agent used in the procedure of pleurodesis. This can be helpful as a cancer treatment to prevent pleural effusions (an abnormal collection of fluid in the space between the lungs and the thoracic wall). It is inserted into the space via a chest tube, causing it to close up, so fluid cannot collect there. The product can be sterilized by dry heat, ethylene oxide, or gamma irradiation.
Safety
Suspicions have been raised that talc use contributes to certain types of disease, mainly cancers of the ovaries and lungs. According to the IARC, talc containing asbestos is classified as a group 1 agent (carcinogenic to humans), talc use in the perineum is classified as group 2B (possibly carcinogenic to humans), and talc not containing asbestos is classified as group 2A (probably carcinogenic to humans). Reviews by Cancer Research UK and the American Cancer Society conclude that some studies have found a link, but other studies have not.
The studies discuss pulmonary issues, lung cancer, and ovarian cancer. One of these, published in 1993, was a US National Toxicology Program report, which found that cosmetic-grade talc containing no asbestos-like fibres was correlated with tumor formation in rats forced to inhale talc for 6 hours a day, five days a week, over at least 113 weeks. A 1971 paper found particles of talc embedded in 75% of the ovarian tumors studied. In 2018, Health Canada issued a warning against inhaling talcum powder and against its perineal use by women.
In contrast, however, research published in 1995 and 2000 concluded that, although it was plausible that talc could cause ovarian cancer, no conclusive evidence had been shown. Further, a 2008 European Journal of Cancer Prevention review of ovarian cancer and talc use studies pointed out that, although many of them examined the duration, frequency, and accumulation of hygienic talc use, few found a positive association among these factors and some found a negative one: "It may be argued that the overall null findings associated with talc-dusted diaphragms and condom use is more convincing evidence for a lack of a carcinogenic effect, especially given the lack of an established correlation between perineal dusting frequency and ovarian tissue talc concentrations and the lack of a consistent dose-response relationship with ovarian cancer risk." Instead, the authors credited powdered talc with "a high degree of safety."
Similarly, in a 2014 article published in a leading cancer journal, the Journal of the National Cancer Institute, researchers reported the results of a survey of 61,576 postmenopausal women, more than half of whom had used talc powder perineally. The researchers compared the subjects' reports of their own talc use with their reports of having had ovarian cancer diagnosed by their doctors, and found, regardless of subjects' age and tubal ligation status, "Ever use of perineal powder ... was not associated with risk of ovarian cancer compared with never use," nor was any greater individual cancer risk associated with longer use of talc powder. On this basis, the article concluded, "perineal powder use does not appear to influence ovarian cancer risk." The Cosmetic Ingredient Review Expert Panel concluded in 2015 that talc, in the concentrations currently used in cosmetics, is safe.
In July 2024, the International Agency for Research on Cancer listed talc as "probably" carcinogenic for humans. The classification is based on limited evidence that talc could cause ovarian cancer in humans.
Industrial grade
In the United States, the Occupational Safety and Health Administration and National Institute for Occupational Safety and Health have set occupational exposure limits to respirable talc dusts at 2 mg/m3 over an eight-hour workday. At levels of 1000 mg/m3, inhalation of talc is considered immediately dangerous to life and health.
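Occupational limits of this kind are conventionally expressed as time-weighted averages (TWA) over the working day. The sketch below computes an 8-hour TWA from hypothetical sampling intervals and compares it with the 2 mg/m3 limit quoted above; the sample values are invented for illustration.

```python
# Sketch: 8-hour time-weighted average (TWA) exposure to respirable talc
# dust, compared against the 2 mg/m3 occupational limit quoted above.
# The sample intervals below are made-up illustrative measurements.

LIMIT_MG_M3 = 2.0
samples = [(3.0, 1.5), (2.0, 3.0), (3.0, 1.0)]  # (hours, mg/m3) pairs

twa = sum(h * c for h, c in samples) / 8.0  # averaged over the 8-hour shift
print(f"TWA = {twa:.2f} mg/m3 -> {'over' if twa > LIMIT_MG_M3 else 'within'} limit")
```

Note that a short interval above 2 mg/m3 does not necessarily breach the limit, so long as the shift-long average stays below it.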
Food grade
The United States Food and Drug Administration considers talc (magnesium silicate) generally recognized as safe (GRAS) to use as an anticaking agent in table salt in concentrations smaller than 2%.
Association with asbestos
One particular issue with commercial use of talc is its frequent co-location in underground deposits with asbestos ore. Asbestos is a general term for different types of fibrous silicate minerals, desirable in construction for their heat-resistant properties. There are six varieties of asbestos; the most common variety in manufacturing, white asbestos, is in the serpentine family. Serpentine minerals are sheet silicates; although not in the serpentine family, talc is also a sheet silicate, with two sheets connected by magnesium cations. The frequent co-location of talc deposits with asbestos may result in contamination of mined talc with white asbestos, which poses serious health risks when dispersed into the air and inhaled. Stringent quality control since 1976, including separating cosmetic- and food-grade talc from that destined for industrial use, has largely eliminated this issue, but it remains a potential hazard requiring mitigation in the mining and processing of talc. A 2010 US FDA survey failed to find asbestos in a variety of talc-containing products. A 2018 Reuters investigation asserted that the pharmaceutical company Johnson & Johnson knew for decades that there was asbestos in its baby powder, and in 2020 the company stopped selling its baby powder in the US and Canada. There were calls for Johnson & Johnson's largest shareholders to force the company to end global sales of baby powder and to hire an independent firm to conduct a racial justice audit, as the product had been marketed to African American and overweight women. On August 11, 2022, the company announced it would stop making talc-based powder by 2023 and replace it with cornstarch-based powders. The company said the talc-based powder is safe to use and does not contain asbestos.
Litigation
In 2006 the International Agency for Research on Cancer classified talcum powder as a possible human carcinogen if used in the female genital area. Despite this, no federal agency in the US acted to remove talcum powder from the market or add warnings.
In February 2016, as the result of a lawsuit against Johnson & Johnson (J&J), a St. Louis jury awarded $72 million to the family of an Alabama woman who died from ovarian cancer. The family claimed that the use of talcum powder was responsible for her cancer.
In May 2016, a South Dakota woman was awarded $55 million as the result of another lawsuit against J&J. The woman had used Johnson & Johnson's Baby Powder for more than 35 years before being diagnosed with ovarian cancer in 2011.
In October 2016, a St. Louis jury awarded $70.1 million to a Californian woman with ovarian cancer who had used Johnson's Baby Powder for 45 years.
In August 2017, a Los Angeles jury awarded $417 million to a Californian woman, Eva Echeverria, who developed ovarian cancer as a "proximate result of the unreasonably dangerous and defective nature of talcum powder", her lawsuit against Johnson & Johnson stated. On 20 October 2017, Los Angeles Superior Court judge Maren Nelson dismissed the verdict. The judge stated that Echeverria had shown there is "an ongoing debate in the scientific and medical community about whether talc more probably than not causes ovarian cancer and thus (gives) rise to a duty to warn", but not enough to sustain the jury's imposition of liability against Johnson & Johnson, and concluded that Echeverria did not adequately establish that talc causes ovarian cancer.
In July 2018, a court in St. Louis awarded a $4.7bn claim ($4.14bn in punitive damages and $550m in compensatory damages) against J&J to 22 claimant women, concluding that the company had suppressed evidence of asbestos in its products for more than four decades.
At least 1,200 to 2,000 other talcum powder-related lawsuits were pending at the time.
In 2020, J&J stopped sales of its talcum-based baby powder, which it had been selling for 130 years. J&J created a subsidiary responsible for the claims in an effort to resolve the lawsuits in bankruptcy court. In 2023, J&J proposed a nearly $9bn settlement with 50,000 claimants, saying the claims were "specious" but that it wanted to move on from the issue; judges blocked the plans, ruling that the subsidiary was not in financial distress and could not use the bankruptcy system to resolve the lawsuits.
In July 2023, J&J sued researchers who had linked talc to cancer, alleging that they used junk science to disparage the company's products; the defendants say the lawsuits are meant to silence scientists.
| Physical sciences | Mineralogy | null |
31424 | https://en.wikipedia.org/wiki/Torpedo | Torpedo | A modern torpedo is an underwater ranged weapon launched above or below the water surface, self-propelled towards a target, and with an explosive warhead designed to detonate either on contact with or in proximity to the target. Historically, such a device was called an automotive, automobile, locomotive, or fish torpedo; colloquially a fish. The term torpedo originally applied to a variety of devices, most of which would today be called mines. From about 1900, torpedo has been used strictly to designate a self-propelled underwater explosive device.
While the 19th-century battleship had evolved primarily with a view to engagements between armored warships with large-caliber guns, the invention and refinement of torpedoes from the 1860s onwards allowed small torpedo boats and other lighter surface vessels, submarines/submersibles, even improvised fishing boats or frogmen, and later light aircraft, to destroy large ships without the need of large guns, though sometimes at the risk of being hit by longer-range artillery fire.
Modern torpedoes are classified variously as lightweight or heavyweight; straight-running, autonomous homers, and wire-guided types. They can be launched from a variety of platforms. In modern warfare, a submarine-launched torpedo is almost certain to hit its target; the best defense is a counterattack using another torpedo.
Etymology
The word torpedo was first used as a name for electric rays (in the order Torpediniformes), which in turn comes from the Latin word torpēdō ("lethargy" or "sluggishness"). In naval usage, the American inventor David Bushnell was reported to have first used the term as the name of a submarine of his own design, the "American Turtle or Torpedo." This usage likely inspired Robert Fulton's use of the term to describe his stationary mines, and later Robert Whitehead's naming of the first self-propelled torpedo.
History
Middle Ages
Torpedo-like weapons were first proposed many centuries before they were successfully developed. For example, in 1275, engineer Hasan al-Rammah – who worked as a military scientist for the Mamluk Sultanate of Egypt – wrote that it might be possible to create a projectile resembling "an egg", which propelled itself through water, whilst carrying "fire".
Early naval mines
In modern language, a "torpedo" is an underwater self-propelled explosive, but historically, the term also applied to primitive naval mines and spar torpedoes. These were used on an ad hoc basis during the early modern period up to the late 19th century.
In the early 17th century, the Dutchman Cornelius Drebbel, in the employ of King James I of England, invented the spar torpedo; he attached explosives to the end of a beam affixed to one of his submarines. These were used (to little effect) during the English expeditions to La Rochelle in 1626. The first use of a torpedo by a submarine was in 1775, by the American Turtle, which attempted to lay a bomb with a timed fuse on the hull of HMS Eagle during the American Revolutionary War, but failed in the attempt.
In the early 1800s, the American inventor Robert Fulton, while in France, "conceived the idea of destroying ships by introducing floating mines under their bottoms in submarine boats". He employed the term "torpedo" for the explosive charges with which he outfitted his submarine Nautilus. However, both the French and the Dutch governments were uninterested in the submarine. Fulton then concentrated on developing the torpedo-like weapon independent of a submarine deployment, and in 1804 succeeded in convincing the British government to employ his 'catamaran' against the French. An April 1804 torpedo attack on French ships anchored at Boulogne, and a follow-up attack in October, produced several explosions but no significant damage and the weapon was abandoned.
Fulton carried out a demonstration for the US government on 20 July 1807, destroying a vessel in New York's harbor. Further development languished as Fulton focused on his "steam-boat matters". After the War of 1812 broke out, the Royal Navy established a blockade of the East Coast of the United States. During the war, American forces unsuccessfully attempted to destroy the British ship of the line HMS Ramillies, lying at anchor in the harbor of New London, Connecticut, with torpedoes launched from small boats. This prompted the captain of Ramillies, Sir Thomas Hardy, 1st Baronet, to warn the Americans to cease using this "cruel and unheard-of warfare" or he would "order every house near the shore to be destroyed". The fact that Hardy had previously been so lenient and considerate to the Americans led them to abandon such attempts with immediate effect.
Torpedoes were used by the Russian Empire during the Crimean War in 1855 against British warships in the Gulf of Finland. They used an early form of chemical detonator. During the American Civil War, the term torpedo was used for what is today called a contact mine, floating on or below the water surface using an air-filled demijohn or similar flotation device. These devices were very primitive and apt to explode prematurely. They would be detonated on contact with the ship or after a set time, although electrical detonators were also occasionally used. In 1862, USS Cairo became the first warship to be sunk by an electrically detonated mine. Spar torpedoes were also used; an explosive device was mounted at the end of a spar up to long, projecting forward underwater from the bow of the attacking vessel, which would then ram the opponent with the explosives. These were used by the Confederate submarine H. L. Hunley to sink USS Housatonic, although the weapon was apt to cause as much harm to its user as to its target. Rear Admiral David Farragut's famous/apocryphal command during the Battle of Mobile Bay in 1864, "Damn the torpedoes, full speed ahead!", refers to a minefield laid at Mobile, Alabama.
On 26 May 1877, during the Romanian War of Independence, the Romanian spar torpedo boat Rândunica attacked and sank the Ottoman river monitor Seyfi. This was the first instance in history of a torpedo boat sinking its target without also sinking.
Invention of the modern torpedo
A prototype of the self-propelled torpedo was created on a commission placed by Giovanni Luppis, an Austro-Hungarian naval officer from Rijeka (modern-day Croatia), at the time a port city of the Austro-Hungarian Monarchy, with Robert Whitehead, an English engineer who was the manager of a town factory, Stabilimento Tecnico di Fiume (STF). In 1864, Luppis presented Whitehead with the plans of the Salvacoste ("Coastsaver"), a floating weapon driven by ropes from the land, which had been dismissed by the naval authorities due to its impractical steering and propulsion mechanisms.
In 1866, Whitehead invented the first effective self-propelled torpedo, the eponymous Whitehead torpedo, the first modern torpedo. French and German inventions followed closely, and the term torpedo came to describe self-propelled projectiles that traveled under or on water. By 1900, the term no longer included mines and booby-traps as the navies of the world added submarines, torpedo boats and torpedo boat destroyers to their fleets.
Whitehead was unable to improve the machine substantially, since the clockwork motor, attached ropes, and surface attack mode all contributed to a slow and cumbersome weapon. However, he kept considering the problem after the contract had finished, and eventually developed a tubular device, designed to run underwater on its own, and powered by compressed air. The result was a submarine weapon, the Minenschiff (mine ship), the first modern self-propelled torpedo, officially presented to the Austrian Imperial Naval commission on 21 December 1866.
The first trials were not successful as the weapon was unable to maintain a course at a steady depth. After much work, Whitehead introduced his "secret" in 1868 which overcame this. It was a mechanism consisting of a hydrostatic valve and pendulum that caused the torpedo's hydroplanes to be adjusted to maintain a preset depth.
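In control terms, the pendulum-and-hydrostat arrangement combines a depth-error signal with a pitch signal that damps oscillation, which is what lets the torpedo settle on a preset depth instead of porpoising. The toy simulation below sketches that behaviour; the gains, speed, and simplified dynamics are invented for illustration and are not historical Whitehead parameters.

```python
# A minimal sketch of Whitehead's pendulum-and-hydrostat depth control:
# the hydrostatic valve senses depth error, the pendulum senses pitch,
# and their sum drives the hydroplanes. Gains, speeds and the toy
# dynamics below are invented for illustration, not historical data.

SET_DEPTH = 3.0               # metres (assumed setting)
K_DEPTH, K_PITCH = 0.8, 2.0   # assumed control gains
dt, speed = 0.1, 10.0         # time step (s), forward speed (m/s)

depth, pitch = 0.5, 0.0       # initial state: too shallow, level
for step in range(200):
    plane_angle = -K_DEPTH * (depth - SET_DEPTH) - K_PITCH * pitch
    pitch += 0.1 * plane_angle * dt   # planes slowly change pitch
    depth += speed * pitch * dt       # nose-down pitch increases depth
    if step % 40 == 0:
        print(f"t={step*dt:5.1f}s depth={depth:5.2f} m pitch={pitch:+.3f} rad")
```

Without the pitch (pendulum) term, the depth loop alone oscillates indefinitely; adding it damps the motion, which is essentially what Whitehead's "secret" achieved mechanically.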
Production and spread
The Austrian government decided to invest in the invention, and the factory in Rijeka started producing more Whitehead torpedoes. In 1870, he improved the devices to travel up to approximately at a speed of up to . Royal Navy (RN) representatives visited Rijeka for a demonstration in late 1869, and in 1870 a batch of torpedoes was ordered. In 1871, the British Admiralty paid Whitehead £15,000 for certain of his developments and production started at the Royal Laboratories in Woolwich the following year.
The company in Fiume went bankrupt in 1873, but was reformed as Whitehead Torpedo Works a few years later, and by 1881 it was exporting torpedoes to ten other countries. The torpedo was powered by compressed air and had an explosive charge of gun-cotton. Whitehead went on to develop more efficient devices, demonstrating torpedoes capable of in 1876, in 1886, and, finally, in 1890.
In the 1880s, a British committee, informed by hydrodynamicist Dr. R. E. Froude, conducted comparative tests and determined that a blunt nose, contrary to prior assumptions, did not hinder speed: in fact, the blunt nose provided a speed advantage of approximately one knot compared to the traditional pointed nose design. This discovery allowed for larger explosive payloads and increased air storage for propulsion without compromising speed.
Whitehead opened a new factory adjacent to Portland Harbour, England, in 1890, which continued making torpedoes until the end of World War II. Because orders from the RN were not as large as expected, torpedoes were mostly exported. A series of devices was produced at Rijeka, with diameters from upward. The largest Whitehead torpedo was in diameter and long, made of polished steel or phosphor bronze, with a gun-cotton warhead. It was propelled by a three-cylinder Brotherhood radial engine, using compressed air at around and driving two contra-rotating propellers, and was designed to self-regulate its course and depth as far as possible. By 1881, nearly 1,500 torpedoes had been produced. Whitehead also opened a factory at St Tropez in 1890 that exported torpedoes to Brazil, The Netherlands, Turkey, and Greece.
Whitehead purchased rights to the gyroscope of Ludwig Obry in 1888, but it was not sufficiently accurate, so in 1890 he purchased a better design to improve control of his torpedoes, which came to be called the "Devil's Device". The firm of L. Schwartzkopff in Germany also produced torpedoes and exported them to Russia, Japan, and Spain. In 1885, Britain ordered a batch of 50 because torpedo production at home and in Rijeka could not meet demand.
In 1893, Royal Navy torpedo production was transferred to the Royal Gun Factory. The British later established a Torpedo Experimental Establishment at and a production facility at the Royal Naval Torpedo Factory, Greenock, in 1910. These are now closed.
By World War I, Whitehead's torpedo was still a worldwide success, and his company was able to maintain a monopoly on torpedo production. By that point, his torpedoes had grown to a diameter of 18 inches, with a maximum speed of and a warhead weighing .
Whitehead faced competition from the American Lieutenant Commander John A. Howell, whose design, driven by a flywheel, was simpler and cheaper. It was produced from 1885 to 1895, and it ran straight, leaving no wake. A Torpedo Test Station was set up in Rhode Island in 1870. The Howell torpedo was the only United States Navy model until an American company, Bliss and Williams, secured manufacturing rights to produce Whitehead torpedoes. These were put into service for the U.S. Navy in 1892. Five varieties were produced, all of 18-inch diameter.
The Royal Navy introduced the Brotherhood wet-heater engine in 1907 with the 18 in. Mk. VII & VII*, which greatly increased the speed and/or range over compressed-air engines; wet-heater engines became the standard in many major navies up to and during the Second World War.
Torpedo boats and guidance systems
Ships of the line were superseded in the mid-19th century by ironclads, large steam-powered ships with heavy gun armament and heavy armor. Ultimately this line of development led to the dreadnought category of all-big-gun battleships, starting with HMS Dreadnought.
Although these ships were incredibly powerful, the new weight of armor slowed them down, and the huge guns needed to penetrate that armor fired at very slow rates. The development of torpedoes allowed for the possibility that small and fast vessels could credibly threaten if not sink even the most powerful battleships. While such attacks would carry enormous risks to the attacking boats and their crews (which would likely need to expose themselves to artillery fire which their small vessels were not designed to withstand) this was offset by the ability to construct large numbers of small vessels far more quickly and for a much lower unit cost compared to a capital ship.
The first boat designed to fire the self-propelled Whitehead torpedo was HMS Lightning, completed in 1877. The French Navy followed suit in 1878 with Torpilleur No 1, launched that year though she had been ordered in 1875. The first torpedo boats were built at the shipyards of Sir John Thornycroft and gained recognition for their effectiveness.
At the same time, inventors were working on building a guided torpedo. Prototypes were built by John Ericsson, John Louis Lay, and Victor von Scheliha, but the first practical guided missile was patented by Louis Brennan, an émigré to Australia, in 1877.
It was designed to run at a consistent depth of , and was fitted with an indicator mast that just broke the surface of the water. At night the mast had a small light, only visible from the rear. Two steel drums were mounted one behind the other inside the torpedo, each carrying several thousand yards of high-tensile steel wire. The drums connected via a differential gear to twin contra-rotating propellers. If one drum was rotated faster than the other, then the rudder was activated. The other ends of the wires were connected to steam-powered winding engines, which were arranged so that speeds could be varied within fine limits, giving sensitive steering control for the torpedo.
The torpedo attained a speed of using a wire in diameter but later this was changed to to increase the speed to . The torpedo was fitted with elevators controlled by a depth-keeping mechanism, and the fore and aft rudders operated by the differential between the drums.
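The Brennan steering principle amounts to mapping the difference in drum speeds, set by the shore operator's winding engines, onto rudder deflection through the differential gear. A minimal sketch of that mapping, with invented gains and speeds rather than historical Brennan figures:

```python
# Sketch of the Brennan torpedo's steering principle: both wire drums
# drive the propellers through a differential, and any *difference* in
# drum speed deflects the rudder. Numbers are illustrative assumptions.

MAX_RUDDER_DEG = 25.0
RUDDER_GAIN = 0.5     # degrees of rudder per rpm of drum-speed difference
TURN_RATE_GAIN = 0.2  # deg/s of heading change per degree of rudder

def rudder_angle(drum_a_rpm: float, drum_b_rpm: float) -> float:
    angle = RUDDER_GAIN * (drum_a_rpm - drum_b_rpm)
    return max(-MAX_RUDDER_DEG, min(MAX_RUDDER_DEG, angle))

heading = 0.0
# Shore operator pays out drum A slightly faster for 10 s, then equalises.
schedule = [(520.0, 500.0)] * 10 + [(500.0, 500.0)] * 10  # one entry per second
for second, (a, b) in enumerate(schedule):
    r = rudder_angle(a, b)
    heading += TURN_RATE_GAIN * r  # integrate turn rate over 1 s
    if second % 5 == 4:
        print(f"t={second+1:2d}s rudder={r:+5.1f} deg heading={heading:+6.1f} deg")
```

Equal drum speeds give a straight run; a sustained speed difference produces a steady turn, which is how the operator walked the torpedo onto its target.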
Brennan traveled to Britain, where the Admiralty examined the torpedo and found it unsuitable for shipboard use. However, the War Office proved more amenable, and in early August 1881, a special Royal Engineer committee was instructed to inspect the torpedo at Chatham and report back directly to the Secretary of State for War, Hugh Childers. The report strongly recommended that an improved model be built at government expense. In 1883 an agreement was reached between the Brennan Torpedo Company and the government. The newly appointed Inspector-General of Fortifications in England, Sir Andrew Clarke, appreciated the value of the torpedo and in spring 1883 an experimental station was established at Garrison Point Fort, Sheerness, on the River Medway, and a workshop for Brennan was set up at the Chatham Barracks, the home of the Royal Engineers. Between 1883 and 1885 the Royal Engineers held trials and in 1886 the torpedo was recommended for adoption as a harbor defense torpedo. It was used throughout the British Empire for more than fifteen years.
Use in conflict
The Royal Navy frigate HMS Shah was the first naval vessel to fire a self-propelled torpedo in anger, during the Battle of Pacocha against the rebel Peruvian ironclad Huáscar on 29 May 1877. The Peruvian ship successfully outran the device. On 16 January 1878, the Turkish steamer Intibah became the first vessel to be sunk by self-propelled torpedoes, launched from torpedo boats operating from the tender under the command of Stepan Osipovich Makarov during the Russo-Turkish War of 1877–78.
In another early use of the torpedo, during the War of the Pacific, the Peruvian ironclad Huáscar, commanded by Captain Miguel Grau, attacked the Chilean corvette Abtao on 28 August 1879 at Antofagasta with a self-propelled Lay torpedo, only to have it reverse course. Huáscar was saved when an officer jumped overboard to divert the torpedo.
The Chilean ironclad Blanco Encalada was sunk on 23 April 1891 by a self-propelled torpedo from the Almirante Lynch, during the Chilean Civil War of 1891, becoming the first ironclad warship sunk by this weapon. The Chinese turret ship Dingyuan was purportedly hit and disabled by a torpedo after numerous attacks by Japanese torpedo boats during the First Sino-Japanese War in 1894. At this time torpedo attacks were still very close range and very dangerous to the attackers.
Several western sources reported that the Qing dynasty Imperial Chinese military, under the direction of Li Hongzhang, acquired electric torpedoes, which they deployed in numerous waterways, along with fortresses and numerous other modern military weapons acquired by China. At the Tientsin Arsenal in 1876, the Chinese developed the capacity to manufacture these "electric torpedoes" on their own. Although a form of Chinese art, the Nianhua, depict such torpedoes being used against Russian ships during the Boxer Rebellion, whether they were actually used in battle against them is undocumented and unknown.
The Russo-Japanese War (1904–1905) was the first great war of the 20th century. During the war the Imperial Russian and Imperial Japanese navies launched nearly 300 torpedoes at each other, all of them of the "self-propelled automotive" type. The deployment of these new underwater weapons resulted in one battleship, two armored cruisers, and two destroyers being sunk in action, with the remainder of the roughly 80 warships being sunk by the more conventional methods of gunfire, mines, and scuttling.
On 27 May 1905, during the Battle of Tsushima, Admiral Rozhestvensky's flagship, the battleship Knyaz Suvorov, had been gunned to a wreck by Admiral Tōgō's 12-inch-gunned battleline. With the Russians sunk and scattering, Tōgō prepared for pursuit, and while doing so ordered his torpedo boat destroyers (TBDs) (mostly referred to simply as destroyers in written accounts) to finish off the Russian battleship. Knyaz Suvorov was set upon by 17 torpedo-firing warships, ten of which were destroyers and four torpedo boats. Twenty-one torpedoes were launched at the pre-dreadnought, and three struck home, one fired from the destroyer and two from torpedo boats No. 72 and No. 75. The flagship slipped under the waves shortly thereafter, taking over 900 men with her to the bottom. On 9 December 1912, the Greek submarine "Dolphin" launched a torpedo against the Ottoman cruiser "Medjidieh".
Aerial torpedo
The end of the Russo-Japanese War fuelled new theories, and the idea of dropping lightweight torpedoes from aircraft was conceived in the early 1910s by Bradley A. Fiske, an officer in the United States Navy. Awarded a patent in 1912, Fiske worked out the mechanics of carrying and releasing the aerial torpedo from a bomber, and defined tactics that included a night-time approach so that the target ship would be less able to defend itself. Fiske determined that the notional torpedo bomber should descend rapidly in a sharp spiral to evade enemy guns, then when about above the water the aircraft would straighten its flight long enough to line up with the torpedo's intended path. The aircraft would release the torpedo at a distance of from the target. Fiske reported in 1915 that, using this method, enemy fleets could be attacked within their harbors if there was enough room for the torpedo track.
Meanwhile, the Royal Naval Air Service began actively experimenting with this possibility. The first successful aerial torpedo drop was performed by Gordon Bell in 1914, dropping a Whitehead torpedo from a Short S.64 seaplane. The success of these experiments led to the construction of the first purpose-built operational torpedo aircraft, the Short Type 184, built in 1915.
An order for ten aircraft was placed, and 936 aircraft were built by ten different British aircraft companies during the First World War. The two prototype aircraft were embarked upon HMS Ben-my-Chree, which sailed for the Aegean on 21 March 1915 to take part in the Gallipoli campaign.
On 12 August 1915 one of these, piloted by Flight Commander Charles Edmonds, was the first aircraft in the world to attack an enemy ship with an air-launched torpedo.
On 17 August 1915 Flight Commander Edmonds torpedoed and sank an Ottoman transport ship a few miles north of the Dardanelles. His formation colleague, Flight Lieutenant G. B. Dacre, was forced to land on the water owing to engine trouble but, seeing an enemy tug close by, taxied up to it and released his torpedo, sinking the tug. Without the weight of the torpedo, Dacre was able to take off and return to Ben-my-Chree.
World War I
Torpedoes were widely used in World War I, both against shipping and against submarines. Germany disrupted the supply lines to Britain largely by use of submarine torpedoes, though submarines also made extensive use of guns. Britain and its allies also used torpedoes throughout the war. U-boats were themselves often targeted, twenty being sunk by torpedo. Two Royal Italian Navy torpedo boats scored a notable success against an Austro-Hungarian squadron, sinking the battleship SMS Szent István with two torpedoes.
The Royal Navy had been experimenting with ways to further increase the range of torpedoes during World War I by using pure oxygen instead of compressed air, work that ultimately led to the development of the oxygen-enriched air 24.5 in. Mk. I, originally intended for the battlecruisers and battleships designed in 1921, both classes being cancelled under the Washington Naval Treaty.
Initially, the Imperial Japanese Navy purchased Whitehead or Schwartzkopf torpedoes, but by 1917, like the Royal Navy, it was conducting experiments with pure oxygen instead of compressed air. Because of explosions it abandoned the experiments, but resumed them in 1926 and by 1933 had a working torpedo. It also used conventional wet-heater torpedoes.
World War II
In the inter-war years, financial stringency caused nearly all navies to skimp on testing their torpedoes. Only the British and Japanese had fully tested new torpedo technologies (in particular the Type 93, nicknamed "Long Lance" postwar by the US official historian Samuel E. Morison) at the start of World War II. Unreliable torpedoes caused many problems for the American submarine force in the early years of the war, primarily in the Pacific Theater. One possible exception to the pre-war neglect of torpedo development was the 45 cm Japanese Type 91 torpedo, introduced in 1931, the sole aerial torpedo (Koku Gyorai) developed and brought into service by the Japanese Empire before the war. The Type 91 had an advanced PID controller and jettisonable wooden Kyoban aerial stabilizing surfaces, which released upon entering the water, making it a formidable anti-ship weapon; Nazi Germany considered manufacturing it as the Luftorpedo LT 850 after August 1942.
The Royal Navy's 24.5-inch oxygen-enriched air torpedo saw service in the two Nelson-class battleships, although by World War II the use of enriched oxygen had been discontinued due to safety concerns. In the final phase of the action against the German battleship Bismarck, HMS Rodney fired a pair of 24.5-inch torpedoes from her port-side tube and claimed one hit. According to Ludovic Kennedy, "if true, [this is] the only instance in history of one battleship torpedoing another". The Royal Navy continued the development of oxygen-enriched air torpedoes with the 21 in. Mk. VII of the 1920s, although once again these were converted to run on normal air at the start of World War II. Around this time the Royal Navy was also perfecting the Brotherhood burner-cycle engine, which offered performance as good as the oxygen-enriched air engine without the problems arising from the oxygen equipment, and which was first used in the extremely successful and long-lived 21 in. Mk. VIII torpedo of 1925. This torpedo served throughout World War II (3,732 had been fired by September 1944) and remained in limited service into the 21st century. The improved Mark VIII** was used in two particularly notable incidents. On 9 February 1945, the only intentional wartime sinking of one submarine by another while both were submerged took place when HMS Venturer sank the German submarine U-864 with four Mark VIII** torpedoes. On 2 May 1982, the Royal Navy submarine HMS Conqueror sank the Argentine cruiser General Belgrano with two Mark VIII** torpedoes during the Falklands War; this remains the only sinking of a surface ship by a nuclear-powered submarine in wartime and the second of three sinkings of a surface ship by any submarine since the end of World War II, the other two being the Indian frigate INS Khukri and the South Korean corvette ROKS Cheonan.
Many classes of surface ships, submarines, and aircraft were armed with torpedoes. Naval strategy at the time was to use torpedoes, launched from submarines or warships, against enemy warships in a fleet action on the high seas. There were concerns torpedoes would be ineffective against warships' heavy armor; an answer to this was to detonate torpedoes underneath a ship, badly damaging its keel and the other structural members in the hull, commonly called "breaking its back". This was demonstrated by magnetic influence mines in World War I. The torpedo would be set to run at a depth just beneath the ship, relying on a magnetic exploder to activate at the appropriate time.
Germany, Britain, and the U.S. independently devised ways to do this; German and American torpedoes, however, suffered problems with their depth-keeping mechanisms, coupled with faults in magnetic pistols shared by all designs. Inadequate testing had failed to reveal the effect of the Earth's magnetic field on ships and exploder mechanisms, which resulted in premature detonation. The Kriegsmarine and Royal Navy promptly identified and eliminated the problems. In the United States Navy (USN), there was an extended wrangle over the problems plaguing the Mark 14 torpedo (and its Mark 6 exploder). Cursory trials had allowed bad designs to enter service. Both the Navy Bureau of Ordnance and the United States Congress were too busy protecting their interests to correct the errors, and fully functioning torpedoes only became available to the USN twenty-one months into the Pacific War.
British submarines used torpedoes to interdict Axis supply shipping to North Africa, while Fleet Air Arm Swordfish sank three Italian battleships at Taranto with torpedoes and, after a mistaken but abortive attack on HMS Sheffield, scored one crucial torpedo hit in the hunt for the German battleship Bismarck. Large tonnages of merchant shipping were sunk by submarines with torpedoes in both the Battle of the Atlantic and the Pacific War.
Torpedo boats, such as MTBs, PT boats, and S-boats, enabled relatively small, fast craft to carry enough firepower, in theory, to destroy a larger ship, though this rarely occurred in practice. The largest warship sunk by torpedoes from small craft in World War II was the British cruiser HMS Manchester, sunk by Italian motor torpedo boats on the night of 12/13 August 1942 during Operation Pedestal. Destroyers of all navies were also armed with torpedoes to attack larger ships. In the Battle off Samar, destroyer torpedoes from the escorts of the American task force "Taffy 3" showed their effectiveness at defeating armor. Damage and confusion caused by torpedo attacks were instrumental in beating back a superior Japanese force of battleships and cruisers. In the Battle of the North Cape in December 1943, torpedo hits from British destroyers, including HMS Saumarez, slowed the German battleship Scharnhorst enough for the British battleship Duke of York to catch and sink her, and in May 1945 the British 26th Destroyer Flotilla (coincidentally led by Saumarez again) ambushed and sank the Japanese heavy cruiser Haguro.
Frequency-hopping
During World War II, Hedy Lamarr and composer George Antheil developed a radio guidance system for Allied torpedoes that was intended to use frequency-hopping technology to defeat jamming by the Axis powers. As radio guidance of torpedoes had been abandoned some years earlier, the idea was not pursued. Although the US Navy never adopted the technology, it did investigate various spread-spectrum techniques in the 1960s. Spread-spectrum techniques are incorporated into Bluetooth technology and are similar to methods used in legacy versions of Wi-Fi. This work led to Lamarr and Antheil's induction into the National Inventors Hall of Fame in 2014.
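The principle is easy to sketch in code. The fragment below is a minimal illustration of frequency hopping, not the Lamarr and Antheil mechanism itself, which synchronized two piano-roll-style tapes; the channel count of 88 echoes the patent's 88 frequencies, while the seed and sequence length are arbitrary choices for the example.

```python
import random

# Minimal frequency-hopping sketch: transmitter and receiver derive the
# same pseudo-random channel sequence from a shared seed, so they stay in
# step while a jammer without the seed cannot predict the next channel.
N_CHANNELS = 88  # the 1942 patent used 88 frequencies, one per piano key

def hop_sequence(seed: int, length: int) -> list[int]:
    """Return a reproducible pseudo-random sequence of channel indices."""
    rng = random.Random(seed)
    return [rng.randrange(N_CHANNELS) for _ in range(length)]

shared_seed = 2718  # secret shared by both ends (an arbitrary example value)
print(hop_sequence(shared_seed, 8))
print(hop_sequence(shared_seed, 8) == hop_sequence(shared_seed, 8))  # True
```

Modern spread-spectrum systems derive the hop sequence cryptographically rather than from a simple seeded generator, but the synchronization idea is the same.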
Post–World War II
Because of improved submarine strength and speed, torpedoes had to be given improved warheads and better motors. During the Cold War torpedoes were an important asset with the advent of nuclear-powered submarines, which did not have to surface often, particularly those carrying strategic nuclear missiles.
Several navies have launched torpedo strikes since World War II, including:
During the Korean War the United States Navy successfully attacked a dam with air-launched torpedoes.
Israeli Navy fast attack craft crippled the American electronic intelligence vessel USS Liberty with gunfire and torpedoes during the 1967 Six-Day War, resulting in the loss of 34 crew.
A Pakistan Navy submarine, PNS Hangor, sank the Indian frigate INS Khukri on 9 December 1971 during the Indo-Pakistani War of 1971, with the loss of 18 officers and 176 sailors.
The British Royal Navy nuclear attack submarine HMS Conqueror sank the Argentine Navy light cruiser ARA General Belgrano on 2 May 1982 with two Mark 8 torpedoes during the Falklands War, with the loss of 323 lives.
On 16 June 1982, during the Lebanon War, an unnamed Israeli submarine torpedoed and sank the Lebanese coaster Transit, which was carrying 56 Palestinian refugees to Cyprus, in the belief that the vessel was evacuating anti-Israeli militias. The ship was hit by two torpedoes, managed to run aground but eventually sank. There were 25 dead, including her captain. The Israeli Navy disclosed the incident in November 2018.
The Croatian Navy disabled the Yugoslav patrol boat PČ-176 Mukos with a torpedo launched by Croatian naval commandos from an improvised device during the Battle of the Dalmatian channels on 14 November 1991, in the course of the Croatian War of Independence. Three members of the crew were killed. The stranded boat was later recovered by Croatian trawlers, salvaged and put in service with the Croatian Navy as OB-02 Šolta.
On 26 March 2010 the South Korean Navy ship ROKS Cheonan was sunk with the loss of 46 personnel. A subsequent investigation concluded that the warship had been sunk by a North Korean torpedo fired by a midget submarine.
Propulsion
Compressed air
The Whitehead torpedo of 1866, the first successful self-propelled torpedo, used compressed air as its energy source. The air was stored at pressures of up to and fed to a piston engine that turned a single propeller at about 100 rpm. It could travel about at an average speed of . The speed and range of later models were improved by increasing the pressure of the stored air. In 1906 Whitehead built torpedoes that could cover nearly at an average speed of .
At higher pressures the adiabatic cooling experienced by the air as it expanded in the engine caused icing problems. This drawback was remedied by heating the air with seawater before it was fed to the engine, which increased engine performance further because the air expanded even more after heating. This was the principle used by the Brotherhood engine.
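The scale of the cooling problem follows from elementary thermodynamics. The sketch below applies the ideal-gas adiabatic relation with a round-number pressure ratio and an assumed 15 °C starting temperature, not figures from any particular torpedo:

```python
# Ideal-gas estimate of the temperature drop in an adiabatic expansion,
# illustrating why unheated compressed-air engines iced up. The pressure
# ratio and starting temperature are assumed round numbers.
GAMMA = 1.4  # heat capacity ratio for air

def adiabatic_temperature(t_initial_k: float, pressure_ratio: float) -> float:
    """Final temperature after reversible adiabatic expansion (ideal gas).

    pressure_ratio is p_final / p_initial, less than 1 for an expansion.
    """
    return t_initial_k * pressure_ratio ** ((GAMMA - 1) / GAMMA)

t0 = 288.0  # 15 degrees C in kelvin
t1 = adiabatic_temperature(t0, pressure_ratio=0.1)  # a modest 10:1 expansion
print(f"{t1:.0f} K, i.e. {t1 - 273.15:.0f} degrees C")  # ~149 K, about -124 C
```

Even this modest expansion ratio takes the air far below the freezing point of water, which is why the fuel-burning heaters described next were such a significant improvement.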
Heated torpedoes
Torpedoes propelled by compressed air encountered a significant problem when attempts were made to increase their range and speed. The cold compressed air, upon entering the expansion phase in the piston chambers of the torpedo's engine, caused a rapid drop in temperature, which could freeze the engine solid by jamming the piston heads inside the cylinders. This led to the idea of injecting a liquid fuel, such as kerosene, into the compressed air and igniting it inside a separate expansion chamber. In this manner, the air is heated more and expands even further, and the burned propellant adds more gas to drive the engine. The earliest form was the "Elswick" heater, patented by Armstrong Whitworth in 1904. The device was demonstrated in an 18-inch Fiume Mark III torpedo at Bincleaves in 1905 before an audience of British and Japanese experts, giving a speed considerably greater than that of the identical unheated version. Whitehead's company began building such heated torpedoes the same year.
Dry heater
The earliest production version of the heated torpedo propulsion system, which became known as the Whitehead heater system, mixed the fuel with the compressed air after the pressure regulator. Combustion took place in a specialized expansion chamber, with the hot combustion products driving the pistons of a reciprocating engine. This had the disadvantage of badly sooting the air vessel with combustion byproducts, and the engine could also suffer a thermal runaway and jam, not from low temperature as observed with plain compressed air, but from excess heat causing the piston heads to seize. The "dry heater" distinction was only made after wet-heater torpedoes were developed; before then, all heated torpedoes were of the dry-heater type and were simply called "heated".
Wet heater
A further improvement was the use of water to wash and cool the combustion chamber of the fuel-burning torpedo. Water would be injected into the combustion chamber, at a rate commensurate with the fuel supply rate. This water would flash to steam, with stray condensate carrying the soot combustion byproducts out through the engine. An early example was the wet heater system developed by Lieutenant Sydney Hardcastle at the Royal Gun Factory, in 1908. The compressed air bottle was partially filled with water, with an outlet at the bottom leading into the combustion chamber. This would guarantee that compressed air and water would be injected into the combustion chamber at the same pressure. The system not only solved heating problems so more fuel could be burned but also allowed additional power to be generated by feeding the resulting steam into the engine together with the combustion products. Torpedoes with such a propulsion system became known as wet heaters, while heated torpedoes without steam generation were retrospectively called dry heaters. Most torpedoes used in World War I and World War II were wet heaters.
Increased oxidant
The amount of fuel that can be burned by a torpedo engine (i.e. wet engine) is limited by the amount of oxygen it can carry. Since compressed air contains only about 21% oxygen, engineers in Japan developed the Type 93 (nicknamed "Long Lance" postwar) for destroyers and cruisers in the 1930s. It used pure compressed oxygen instead of compressed air and had performance unmatched by any contemporary torpedo in service, through the end of World War II. However, oxygen systems posed a danger to ships carrying such torpedoes under normal operation, and more so under attack; Japan lost several cruisers partly due to catastrophic secondary explosions of Type 93s.
During World War II, Germany experimented with hydrogen peroxide for the same purpose.
The British approached the problem of providing additional oxygen for the torpedo engine by the use of oxygen-enriched air rather than pure oxygen: up to 57% instead of the 21% of normal atmospheric compressed air. This significantly increased the range of the torpedo, the 24.5 inch Mk 1 having a range of at or at with a warhead. There was a general nervousness about the oxygen enrichment equipment, known for reasons of secrecy as 'No 1 Air Compressor Room' on board ships, and development shifted to the highly efficient Brotherhood Burner Cycle engine that used un-enriched air.
Burner cycle engine
After the First World War, Peter Brotherhood developed a four-cylinder burner-cycle engine which was roughly twice as powerful as the older wet-heater engine. It was first used in the British Mk VIII torpedoes, which were still in service in 1982. It used a modified diesel cycle: a small amount of paraffin heated the incoming air, which was then compressed and further heated by the piston, after which more fuel was injected. It produced about 322 hp when introduced; by the end of World War II this had risen to 465 hp, and there was a proposal to fuel it with nitric acid, which was projected to develop 750 hp.
Wire driven
The Brennan torpedo had two wires wound around internal drums joined to the propellers. Shore-based steam winches pulled the wires, which spun the internal drums and drove the propellers. An operator controlled the relative speeds of the winches, providing guidance. Such systems were used for coastal defense of the British homeland and colonies from 1887 to 1903 and were purchased by, and under the control of, the Army as opposed to the Navy. Speed was about for over 2,400 m.
Flywheel
The Howell torpedo used by the US Navy in the late 19th century featured a heavy flywheel that had to be spun up before launch. It was able to travel about at . The Howell had the advantage of not leaving a trail of bubbles behind it, unlike compressed air torpedoes. This gave the target vessel less chance to detect and evade the torpedo and avoided giving away the attacker's position. Additionally, it ran at a constant depth, unlike Whitehead models.
Electric batteries
Electric propulsion systems avoided tell-tale bubbles. John Ericsson invented an electrically propelled torpedo in 1873; it was powered by a cable from an external power source, because batteries of the time had insufficient capacity. The Sims-Edison torpedo was similarly powered. The Nordfelt torpedo was also electrically powered and was steered by impulses down a trailing wire.
Germany introduced its first battery-powered torpedo shortly before World War II, the G7e. It was slower and had a shorter range than the conventional G7a, but was wakeless and much cheaper. Its lead-acid rechargeable battery was sensitive to shock, required frequent maintenance before use, and required preheating for best performance. The experimental G7es, an enhancement of the G7e, used primary cells.
The United States had an electric design, the Mark 18, largely copied from the German torpedo (although with improved batteries), as well as FIDO, an air-dropped acoustic homing torpedo for anti-submarine use.
Modern electric torpedoes such as the Mark 24 Tigerfish, the Black Shark or DM2 series commonly use silver oxide batteries that need no maintenance, so torpedoes can be stored for years without losing performance.
Rockets
Several experimental rocket-propelled torpedoes were tried soon after Whitehead's invention but were not successful. Rocket propulsion has been implemented successfully by the Soviet Union, for example in the VA-111 Shkval—and has been recently revived in Russian and German torpedoes, as it is especially suitable for supercavitating devices.
Modern energy sources
Modern torpedoes use a variety of propellants, including electric batteries (as with the French F21 torpedo or Italian Black Shark), monopropellants (e.g., Otto fuel II as with the US Mark 48 torpedo), and bipropellants (e.g., hydrogen peroxide plus kerosene as with the Swedish Torped 62, sulfur hexafluoride plus lithium as with the US Mark 50 torpedo, or Otto fuel II plus hydroxyl ammonium perchlorate as with the British Spearfish torpedo).
Propeller
The first of Whitehead's torpedoes had a single propeller and needed a large vane to stop it spinning about its longitudinal axis. Not long afterward the idea of contra-rotating propellers was introduced, to avoid the need for the vane. The three-bladed propeller came in 1893 and the four-bladed one in 1897. To minimize noise, today's torpedoes often use pump-jets.
Supercavitation
Some torpedoes—like the Russian VA-111 Shkval, Iranian Hoot, and German Unterwasserlaufkörper/Barracuda—use supercavitation to achieve much higher speeds. Torpedoes that do not use supercavitation, such as the American Mark 48 and British Spearfish, are considerably slower, though manufacturers and militaries do not always release exact figures.
Guidance
Torpedoes may be aimed at the target and fired unguided, similarly to a traditional artillery shell, or they may be guided onto the target. They may be guided autonomously towards the target by some procedure, e.g., sound (homing), or by the operator, typically via commands sent over a signal-carrying cable (wire guidance).
Unguided
The Victorian-era Brennan torpedo could be steered onto its target by varying the relative speeds of its propulsion cables. However, the Brennan required substantial infrastructure and was not suitable for shipboard use. Therefore, for the first part of its history, the torpedo was guided only in the sense that its course could be regulated to achieve an intended impact depth (because of the sinusoidal running path of the Whitehead, this was a hit-or-miss proposition even when everything worked correctly) and, through gyroscopes, a straight course. With such torpedoes, the method of attack for small torpedo boats, torpedo bombers, and small submarines was to steer a predictable collision course abeam of the target and release the torpedo at the last minute, then veer away, all the while subject to defensive fire.
In larger ships and submarines, fire-control calculators gave a wider engagement envelope. Originally, plotting tables (in large ships), combined with specialized slide rules (known in U.S. service as the "banjo" and "Is/Was"), reconciled the speed, distance, and course of a target with the firing ship's speed and course, together with the performance of its torpedoes, to provide a firing solution. By the Second World War, all sides had developed automatic electro-mechanical calculators, exemplified by the U.S. Navy's Torpedo Data Computer (TDC). Submarine commanders were still expected to be able to calculate a firing solution by hand as a backup against mechanical failure, and because many submarines in service at the start of the war were not equipped with a TDC; most could keep the "picture" in their heads and do much of the calculation (simple trigonometry) mentally, from extensive training.
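The core of such a firing solution is a single triangle. A minimal sketch, with invented input values rather than settings from any historical engagement, computes the deflection (lead) angle from the law of sines:

```python
import math

def deflection_angle_deg(target_speed: float, torpedo_speed: float,
                         track_angle_deg: float) -> float:
    """Lead angle, from the law of sines, so torpedo and target meet.

    track_angle_deg is the angle between the target's course and the
    line of sight from the firing ship to the target. Speeds may be in
    any consistent unit, e.g. knots.
    """
    s = target_speed / torpedo_speed * math.sin(math.radians(track_angle_deg))
    if abs(s) > 1.0:
        raise ValueError("target too fast to intercept at this geometry")
    return math.degrees(math.asin(s))

# Example: a 20-knot target crossing at 90 degrees, engaged with a
# 45-knot torpedo, calls for about 26 degrees of lead.
print(f"{deflection_angle_deg(20.0, 45.0, 90.0):.1f} degrees")
```

Devices like the TDC solved essentially this triangle continuously, updating the solution as own-ship and target data changed.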
Against high-value targets and multiple targets, submarines would launch a spread of torpedoes, to increase the probability of success. Similarly, squadrons of torpedo boats and torpedo bombers would attack together, creating a "fan" of torpedoes across the target's course. Faced with such an attack, the prudent thing for a target to do was to turn to parallel the course of the incoming torpedo and steam away from the torpedoes and the firer, allowing the relatively short-range torpedoes to use up their fuel. An alternative was to "comb the tracks", turning to parallel the incoming torpedo's course, but turning towards the torpedoes. The intention of such a tactic was still to minimize the size of the target offered to the torpedoes, but at the same time be able to aggressively engage the firer. This was the tactic advocated by critics of Jellicoe's actions at Jutland, his caution at turning away from the torpedoes being seen as the reason the Germans escaped.
The use of multiple torpedoes to engage single targets depletes torpedo supplies and greatly reduces a submarine's combat endurance. Endurance can be improved by ensuring a target can be effectively engaged by a single torpedo, which gave rise to the guided torpedo.
Pattern running
In World War II the Germans introduced programmable pattern-running torpedoes, which would run a predetermined pattern until they either ran out of fuel or hit something. The earlier version, FaT, ran straight out after launch and then weaved back and forth parallel to that initial course, whilst the more advanced LuT could turn to a different angle after launch before entering a more complex weaving pattern.
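The geometry of a FaT-style run can be sketched as a list of waypoints: an initial straight run, then legs doubling back and forth parallel to that course. The leg length, sideways offset, and leg count below are invented parameters for illustration, not actual FaT settings.

```python
# Waypoints for a FaT-style ladder search (illustrative geometry only).
# x runs along the initial launch course, y across it; after the initial
# straight run, the torpedo runs legs parallel to the original course,
# stepping sideways at each turn so the weave sweeps across the target's
# expected track.

def ladder_pattern(initial_run: float, leg: float, offset: float, n_legs: int):
    pts = [(0.0, 0.0), (initial_run, 0.0)]
    x, y = initial_run, 0.0
    direction = -1.0  # the first ladder leg doubles back toward the firer
    for _ in range(n_legs):
        y += offset               # sideways step made during the turn
        pts.append((x, y))
        x += direction * leg      # leg parallel to the initial course
        pts.append((x, y))
        direction = -direction    # reverse for the next leg
    return pts

for waypoint in ladder_pattern(initial_run=3000.0, leg=1500.0,
                               offset=300.0, n_legs=4):
    print(waypoint)
```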
Radio and wire guidance
Though Luppis' original design had been rope-guided, torpedoes were not wire-guided until the 1960s.
During the First World War the U.S. Navy evaluated a radio-controlled torpedo launched from a surface ship, known as the Hammond Torpedo. A later version tested in the 1930s was claimed to have an effective range of .
Modern torpedoes use an umbilical wire, which nowadays allows the computer processing power of the submarine or ship to be used. Torpedoes such as the U.S. Mark 48 can operate in a variety of modes, increasing tactical flexibility.
Homing
Homing "fire and forget" torpedoes can use passive or active guidance or a combination of both. Passive acoustic torpedoes home in on emissions from a target. Active acoustic torpedoes home in on the reflection of a signal, or "ping", from the torpedo or its parent vehicle; this has the disadvantage of giving away the presence of the torpedo. In semi-active mode, a torpedo can be fired to the last known position or calculated position of a target, which is then acoustically illuminated ("pinged") once the torpedo is within attack range.
Later in the Second World War torpedoes were given acoustic (homing) guidance systems, with the American Mark 24 mine and Mark 27 torpedo and the German G7es torpedo. Pattern-following and wake homing torpedoes were also developed. Acoustic homing formed the basis for torpedo guidance after the Second World War.
The homing systems for torpedoes are generally acoustic, though other target sensor types have been used. A ship's acoustic signature is not the only emission a torpedo can home in on; to engage U.S. supercarriers, the Soviet Union developed the 53–65 wake-homing torpedo. As standard acoustic lures cannot distract a wake-homing torpedo, the US Navy has installed the Surface Ship Torpedo Defense on aircraft carriers, which uses a Countermeasure Anti-Torpedo to home in on and destroy the attacking torpedo.
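As a toy model of passive homing, the sketch below uses simple pursuit: at each one-second step the torpedo takes the bearing of the target's noise and turns toward it, limited by a maximum turn rate. All speeds, positions, and the turn limit are made-up values, and real homing logic is considerably more elaborate.

```python
import math

# Toy pursuit-homing loop (one-second time steps). The torpedo turns
# toward the target's current bearing, limited to a maximum turn rate,
# then advances at constant speed. All numbers are invented for the demo.

def pursue(x, y, heading, tx, ty, speed=20.0, turn_limit=math.radians(6)):
    bearing = math.atan2(ty - y, tx - x)
    error = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    heading += max(-turn_limit, min(turn_limit, error))
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

x, y, heading = 0.0, 0.0, 0.0        # torpedo starts at origin, heading +x
tx, ty = 1000.0, 300.0               # target's initial position
for t in range(1, 301):
    x, y, heading = pursue(x, y, heading, tx, ty)
    tx += 5.0                        # target steams steadily along +x
    if math.hypot(tx - x, ty - y) < 25.0:
        print(f"intercept after {t} s")
        break
else:
    print("no intercept within 300 s")
```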
Warhead and fuzing
The warhead is generally some form of aluminized explosive, because the sustained explosive pulse produced by the powdered aluminum is particularly destructive against underwater targets. Torpex was popular until the 1950s, but has been superseded by PBX compositions. Nuclear torpedoes have also been developed, e.g. the Mark 45 torpedo. In lightweight antisubmarine torpedoes designed to penetrate submarine hulls, a shaped charge can be used. Detonation can be triggered by direct contact with the target or by a proximity fuze incorporating sonar and/or magnetic sensors.
Contact detonation
When a torpedo with a contact fuze strikes the side of the target hull, the resulting explosion creates a bubble of expanding gas, the walls of which move faster than the speed of sound in water, thus creating a shock wave. The side of the bubble which is against the hull rips away the external plating creating a large breach. The bubble then collapses in on itself, forcing a high-speed stream of water into the breach which can destroy bulkheads and machinery in its path.
Proximity detonation
A torpedo fitted with a proximity fuze can be detonated directly under the keel of a target ship. The explosion creates a gas bubble which may damage the keel or underside plating of the target. However, the most destructive part of the explosion is the upthrust of the gas bubble, which will bodily lift the hull in the water. The structure of the hull is designed to resist downward rather than upward pressure, causing severe strain in this phase of the explosion. When the gas bubble collapses, the hull will tend to fall into the void in the water, creating a sagging effect. Finally, the weakened hull will be hit by the uprush of water caused by the collapsing gas bubble, causing structural failure. On vessels up to the size of a modern frigate, this can result in the ship breaking in two and sinking. This effect is likely to prove less catastrophic on a much larger hull, for instance, that of an aircraft carrier.
Damage
The damage that a torpedo may cause depends on the "shock factor value", a combination of the initial strength of the explosion and the distance between the target and the detonation. In reference to ship hull plating, the term "hull shock factor" (HSF) is used, while keel damage is termed "keel shock factor" (KSF). If the explosion is directly underneath the keel, then HSF equals KSF, but explosions that are not directly underneath the ship will have a lower value of KSF.
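A commonly quoted simplified form makes the shock factor proportional to the square root of the charge mass divided by the standoff distance, with an angle weighting for the keel case. The sketch below adopts the textbook weighting (1 + sin θ)/2 as an assumption, along with an arbitrary charge size and standoff, to reproduce the behavior described above:

```python
import math

def hull_shock_factor(charge_kg: float, standoff_m: float) -> float:
    """Simplified hull shock factor: sqrt(charge mass) / standoff."""
    return math.sqrt(charge_kg) / standoff_m

def keel_shock_factor(charge_kg: float, standoff_m: float,
                      elevation_deg: float) -> float:
    """KSF with an assumed (1 + sin(theta)) / 2 weighting.

    elevation_deg is 90 when the charge is directly below the keel,
    in which case KSF equals HSF; off-keel detonations score lower.
    """
    hsf = hull_shock_factor(charge_kg, standoff_m)
    return hsf * (1.0 + math.sin(math.radians(elevation_deg))) / 2.0

print(keel_shock_factor(300.0, 10.0, 90.0))  # directly under keel: 1.73
print(keel_shock_factor(300.0, 10.0, 30.0))  # off the keel line:   1.30
```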
Direct damage
Usually only created by contact detonation, direct damage is a hole blown in the ship. Among the crew, fragmentation wounds are the most common form of injury. Flooding typically occurs in one or two main watertight compartments, which can sink smaller ships or disable larger ones.
Bubble jet effect
The bubble jet effect occurs when a mine or torpedo detonates in the water a short distance away from the targeted ship. The explosion creates a bubble in the water, and due to the pressure difference, the bubble will collapse from the bottom. The bubble is buoyant, and so it rises towards the surface. If the bubble reaches the surface as it collapses, it can create a pillar of water that can go over a hundred meters into the air (a "columnar plume"). If conditions are right and the bubble collapses onto the ship's hull, the damage to the ship can be extremely serious; the collapsing bubble forms a high-energy jet that can break a meter-wide hole straight through the ship, flooding one or more compartments, and is capable of breaking smaller ships apart. The crew in the areas hit by the pillar are usually killed instantly. Other damage is usually limited.
The Baengnyeong incident, in which ROKS Cheonan broke in half and sank off the coast of South Korea in 2010, was caused by the bubble jet effect, according to an international investigation.
Shock effect
If the torpedo detonates at a distance from the ship, and especially under the keel, the change in water pressure causes the ship to resonate. This is frequently the most deadly type of explosion if it is strong enough. The whole ship is dangerously shaken and everything on board is tossed around. Engines rip from their beds, cables from their holders, etc. A badly shaken ship usually sinks quickly, with hundreds, or even thousands of small leaks all over the ship and no way to power the pumps. The crew fares no better, as the violent shaking tosses them around. This shaking is powerful enough to cause disabling injury to knees and other joints in the body, particularly if the affected person stands on surfaces connected directly to the hull (such as steel decks).
The resulting gas cavitation and shock-front-differential over the width of the human body is sufficient to stun or kill divers.
Control surfaces and hydrodynamics
Control surfaces are essential for a torpedo to maintain its course and depth. A homing torpedo also needs to be able to outmaneuver a target. Good hydrodynamics are needed for it to attain high speed efficiently and also to give a long range since the torpedo has limited stored energy.
Launch platforms and launchers
Torpedoes may be launched from submarines, surface ships, helicopters and fixed-wing aircraft, unmanned naval mines and naval fortresses. They are also used in conjunction with other weapons; for example, the Mark 46 torpedo used by the United States is the warhead section of the ASROC, a kind of anti-submarine missile; the CAPTOR mine (CAPsulated TORpedo) is a submerged sensor platform which releases a torpedo when a hostile contact is detected.
Ships
Originally, Whitehead torpedoes were intended for underwater launch, and the firm was upset when it found out that the British were launching them above water, considering the torpedoes too delicate for this; however, the torpedoes survived. The launch tubes could be fitted in a ship's bow, which weakened it for ramming, or on the broadside; the latter introduced problems from water flow twisting the torpedo, so guide rails and sleeves were used to prevent it. The torpedoes were originally ejected from the tubes by compressed air, but later slow-burning gunpowder was used. Torpedo boats originally used a frame that dropped the torpedo into the sea. Royal Navy Coastal Motor Boats of World War I used a rear-facing trough and a cordite ram to push the torpedoes into the water tail-first; they then had to move rapidly out of the way to avoid being hit by their own torpedo.
In the run-up to the First World War, multiple-tube torpedo mounts (initially twin, later triple, and in World War II up to quintuple in some ships) on rotating turntables appeared. Destroyers could be found with two or three of these mounts, with between five and twelve tubes in total. The Japanese went one better, covering their tube mounts with splinter protection and adding reloading gear (both unlike any other navy in the world), making them true turrets and increasing the broadside without adding tubes and top hamper (as the quadruple and quintuple mounts did). Considering their Type 93s very effective weapons, the IJN equipped their cruisers with torpedoes. The Germans also equipped their capital ships with torpedoes.
Smaller vessels such as PT boats carried their torpedoes in fixed deck-mounted tubes using compressed air. These were either aligned to fire forward or at an offset angle from the centerline.
Later, lightweight mounts for homing torpedoes were developed for anti-submarine use consisting of triple launch tubes used on the decks of ships. These were the 1960 Mk 32 torpedo launcher in the US and part of STWS (Shipborne Torpedo Weapon System) in the UK. Later a below-decks launcher was used by the RN. This basic launch system continues to be used today with improved torpedoes and fire control systems.
Submarines
Modern submarines use either swim-out systems or a pulse of water to discharge the torpedo from the tube, both of which have the advantage of being significantly quieter than previous systems, helping avoid detection of the firing from passive sonar. Earlier designs used a pulse of compressed air or a hydraulic ram.
Early submarines, when they carried torpedoes, were fitted with a variety of torpedo launching mechanisms in a range of locations; on the deck, in the bow or stern, amidships, with some launch mechanisms permitting the torpedo to be aimed over a wide arc. By World War II, designs favored multiple bow tubes and fewer or no stern tubes. Modern submarine bows are usually occupied by a large sonar array, necessitating midships tubes angled outward, while stern tubes have largely disappeared. The first French and Russian submarines carried their torpedoes externally in Drzewiecki drop collars. These were cheaper than tubes but less reliable. Both the United Kingdom and the United States experimented with external tubes in World War II. External tubes offered a cheap and easy way of increasing torpedo capacity without radical redesign, something neither had time or resources to do before nor early in, the war. British T-class submarines carried up to 13 torpedo tubes, up to 5 of them external. America's use was mainly limited to earlier Porpoise-, -, and -class boats. Until the appearance of the class, most American submarines only carried 4 bow and either 2 or 4 stern tubes, something many American submarine officers felt provided inadequate firepower. This problem was compounded by the notorious unreliability of the Mark 14 torpedo.
Late in World War II, the U.S. adopted a homing torpedo (known as "Cutie") for use against escorts. It was basically a modified Mark 24 Mine with wooden rails to allow firing from a torpedo tube.
Air launch
Aerial torpedoes may be carried by fixed-wing aircraft, helicopters, or missiles. They are launched from the first two at prescribed speeds and altitudes, dropped from bomb-bays or underwing hardpoints.
Handling equipment
Although lightweight torpedoes are fairly easily handled, the transport and handling of heavyweight torpedoes is difficult, especially in the tight spaces in a submarine. After the Second World War, some Type XXI submarines were obtained from Germany by the United States and Britain. One of the main novel developments seen was a mechanical handling system for torpedoes. Such systems were widely adopted as a result of this discovery.
Classes and diameters
Torpedoes are launched in several ways:
From a torpedo tube mounted either in a trainable deck mount (common in destroyers), or fixed above or below the waterline of a surface vessel (as in cruisers, battleships, and armed merchant cruisers) or submarine.
Early submarines and some torpedo boats (such as the U.S. World War II PT boats, which used the Mark 13 aircraft torpedo) used deck-mounted "drop collars", which simply relied on gravity.
From shackles aboard low-flying aircraft or helicopters.
As the final stage of a compound rocket or ramjet powered munition (sometimes called an assisted torpedo).
Many navies have two weights of torpedoes:
A light torpedo used primarily as a close-attack weapon, particularly by aircraft. The 324 mm (12.75 in) caliber has been described as a NATO standard for this class.
A heavy torpedo used primarily as a standoff weapon, particularly by submerged submarines. The 533 mm (21 in) caliber is a common standard.
In the case of deck- or tube-launched torpedoes, the diameter of the torpedo is a key factor in determining the suitability of a particular torpedo for a tube or launcher, similar to the caliber of a gun. The size is not quite as critical as for a gun, but diameter has become the most common way of classifying torpedoes.
Length, weight, and other factors also contribute to compatibility. In the case of aircraft launched torpedoes, the key factors are weight, provision of suitable attachment points, and launch speed. Assisted torpedoes are the most recent development in torpedo design, and are normally engineered as an integrated package. Versions for aircraft and assisted launching have sometimes been based on deck or tube launched versions, and there has been at least one case of a submarine torpedo tube being designed to fire an aircraft torpedo.
As in all munition design, there is a compromise between standardization, which simplifies manufacture and logistics, and specialization, which may make the weapon significantly more effective. Small improvements in either logistics or effectiveness can translate into enormous operational advantages.
Use by various navies
List of active torpedoes by place of origin
Modern heavyweight torpedoes are generally launched from submarines and used to attack both surface ships and submarines. Older heavyweight torpedoes were generally used to attack either surface ships or submarines. Modern lightweight torpedoes are launched from surface ships, helicopters, and fixed-wing aircraft and are used to attack submarines.
China
Yu-11 torpedo (lightweight)
Yu-7 torpedo (lightweight)
Yu-6 torpedo (heavyweight)
Yu-5 torpedo (heavyweight)
Yu-4 torpedo (heavyweight)
France
F21 torpedo (heavyweight)
F17 torpedo (heavyweight)
MU90 Impact torpedo (lightweight)
Germany
SeaSpider anti-torpedo (lightweight) (Atlas Elektronik)
DM2A4 torpedo (heavyweight)
DM2A3 torpedo (heavyweight)
SUT torpedo (heavyweight)
India
Varunastra (surface and submarine launched heavyweight torpedo)
Takshak torpedo (heavyweight)
Torpedo Advanced Light Shyena (lightweight)
Iran
Valfajr torpedo (heavyweight)
Hoot supercavitation torpedo (heavyweight)
Italy
A184 torpedo (heavyweight)
A244/S torpedo (lightweight)
MU90 Impact torpedo (lightweight)
A200 LCAW torpedo (miniature)
Black Shark torpedo (heavyweight)
Black Arrow torpedo (lightweight)
Black Scorpion torpedo (miniature)
Japan
Type 80 (G-RX1) torpedo (heavyweight)
Type 89 (G-RX2) torpedo (heavyweight)
Type 97 (G-RX4) torpedo (lightweight)
Type 12 (G-RX5) torpedo (lightweight)
Type 18 (G-RX6) torpedo (heavyweight)
Pakistan
Eghraaq
Russia
Status-6 Oceanic Multipurpose System (nuclear-powered, nuclear-armed UUV)
Futlyar (Fizik-2) (heavyweight)
Type 53 torpedo family including Fizik (UGST), USET-80 (heavyweight)
TEST 71/76 torpedo (heavyweight)
VA-111 Shkval supercavitation torpedo (heavyweight)
Type 65 torpedo (heavyweight)
APR-3E torpedo (lightweight)
APR-2 torpedo (lightweight)
In April 2015, the Fizik (UGST) heat-seeking torpedo entered service to replace the wake-homing USET-80 developed in the 1980s; the next-generation Futlyar entered service in 2017.
Republic of Korea
K731 White Shark torpedo (heavyweight)
K745 Blue Shark torpedo (lightweight)
K761 Tiger Shark torpedo (heavyweight)
Sweden
Torped 613 (heavyweight)
Torped 62 (heavyweight)
Torped 47 (lightweight)
Torped 45 (lightweight)
Turkey
Roketsan Akya torpedo (heavyweight)
Roketsan Orka torpedo (lightweight)
United Kingdom
Spearfish torpedo (heavyweight)
Tigerfish torpedo (heavyweight)
Sting Ray torpedo (lightweight)
United States of America
Mark 54 torpedo (lightweight)
Mark 50 torpedo (lightweight)
Mark 48 torpedo (heavyweight)
Mark 46 torpedo (lightweight)
List of key WWII torpedoes
The torpedoes used by the Imperial Japanese Navy (World War II) included:
Japanese 53 cm torpedoes (up to Type 96); Type 89, Type 95, Type 96 most significant
Japanese 45 cm torpedoes; Type 44, Type 91, Type 97/98 most significant
Japanese 61 cm torpedoes; Type 90, Type 93 most significant
Type 91 torpedo
Type 92 torpedo
Type 93 torpedo (Long Lance)
Type 95 torpedo
Type 97 torpedo
Kaiten
The torpedoes used by the World War II Kriegsmarine included:
G7a(TI)
G7e(TII)
G7e(TIII)
G7s(TIV) "Falke"
G7s(TV) "Zaunkönig
The torpedoes used by the World War II Royal Navy included:
British 21-inch torpedoes (up to Mark XI); Mark VIII and Mark IX most significant
British 18-inch torpedoes (up to Mark XVII); Mark XII and Mark XV most significant
The torpedoes used by the World War II United States Navy included:
Mark 18 torpedo
Mark 15 torpedo
Mark 14 torpedo
Tourmaline

Tourmaline is a crystalline silicate mineral group in which boron is compounded with elements such as aluminium, iron, magnesium, sodium, lithium, or potassium. This gemstone comes in a wide variety of colors.
The name is derived from the Sinhalese tōramalli, which refers to carnelian gemstones.
History
Brightly colored Ceylonese gem tourmalines were brought to Europe in great quantities by the Dutch East India Company to satisfy a demand for curiosities and gems. Tourmaline was sometimes called the "Ceylonese Magnet" because it could attract and then repel hot ashes due to its pyroelectric properties.
Tourmalines were used by chemists in the 19th century to polarize light by shining rays onto a cut and polished surface of the gem.
Species and varieties
Commonly encountered species and varieties of tourmaline include the following:
Schorl species
Brownish-black to black—schorl
Dravite species (from the Drave district of Carinthia)
Dark yellow to brownish-black—dravite
Elbaite species (named after the island of Elba, Italy)
Red or pinkish-red—rubellite variety
Light blue to bluish-green—indicolite variety (from indigo)
Green—verdelite variety
Colorless—achroite variety (from the Greek for "colorless")
Schorl
The most common species of tourmaline is schorl, the sodium iron (divalent) endmember of the group. It may account for 95% or more of all tourmaline in nature. The early history of the mineral schorl shows that the name "schorl" was in use prior to 1400, because a village known today as Zschorlau (in Saxony, Germany) was then named "Schorl" (or minor variants of this name), and the village had a nearby tin mine where, in addition to cassiterite, black tourmaline was found. The first description of schorl, under the name "schürl", and its occurrence (various tin mines in the Ore Mountains) was written by Johannes Mathesius (1504–1565) in 1562 under the title "Sarepta oder Bergpostill". Up to about 1600, additional names used in the German language were "Schurel", "Schörle", and "Schurl". Beginning in the 18th century, the name Schörl was mainly used in the German-speaking area. In English, the names shorl and shirl were used in the 18th century. In the 19th century the names common schorl, schörl, schorl, and iron tourmaline were used in English for this mineral.
Dravite
Dravite, also called brown tourmaline, is the sodium magnesium-rich tourmaline endmember. Uvite, in comparison, is a calcium magnesium tourmaline. Dravite forms multiple series with other tourmaline members, including schorl and elbaite.
The name dravite was first used by Gustav Tschermak (1836–1927), Professor of Mineralogy and Petrography at the University of Vienna, in his book Lehrbuch der Mineralogie (published in 1884) for magnesium-rich (and sodium-rich) tourmaline from the village of Dobrova near Unterdrauburg in the Drava river area, Carinthia, Austro-Hungarian Empire. Today this locality (the type locality for dravite) at Dobrova, near Dravograd, is part of the Republic of Slovenia. Tschermak named this tourmaline dravite after the Drava river area, the district along the Drava River (in German: Drau, in Latin: Drave) in Austria and Slovenia. The chemical composition given by Tschermak in 1884 for this dravite corresponds approximately to a formula which is in good agreement (except for the OH content) with the endmember formula of dravite as known today.
Dravite varieties include the deep green chromium dravite and the vanadium dravite.
Elbaite
A lithium-tourmaline, elbaite, was one of three pegmatitic minerals from Utö, Sweden, in which the new alkali element lithium (Li) was first determined, in 1818, by Johan August Arfwedson. Elba Island, Italy, was one of the first localities where colored and colorless Li-tourmalines were extensively chemically analysed. In 1850, Karl Friedrich August Rammelsberg described fluorine (F) in tourmaline for the first time. In 1870, he proved that all varieties of tourmaline contain chemically bound water. In 1889, Scharitzer proposed the substitution of (OH) by F in red Li-tourmaline from Sušice, Czech Republic. In 1914, Vladimir Vernadsky proposed the name Elbait for lithium-, sodium-, and aluminum-rich tourmaline from Elba Island, Italy, together with a simplified formula. Most likely the type material for elbaite was found at Fonte del Prete, San Piero in Campo, Campo nell'Elba, Elba Island, Province of Livorno, Tuscany, Italy. In 1933, Winchell published an updated formula for elbaite which remains in common use. The first crystal structure determination of a Li-rich tourmaline was published in 1972 by Donnay and Barton, performed on a pink elbaite from San Diego County, California, United States.
Chemical composition
The tourmaline mineral group is chemically one of the most complicated groups of silicate minerals. Its composition varies widely because of isomorphous replacement (solid solution), and its general formula can be written as XY3Z6(T6O18)(BO3)3V3W, where the sites are occupied as follows (a short illustrative sketch follows the list):
X = Ca, Na, K, ▢ = vacancy
Y = Li, Mg, Fe²⁺, Mn²⁺, Zn, Al, Cr³⁺, V³⁺, Fe³⁺, Ti⁴⁺, ▢ = vacancy
Z = Mg, Al, Fe³⁺, Cr³⁺, V³⁺
T = Si, Al, B
B = B, ▢ = vacancy
V = OH, O
W = OH, F, O
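As a small illustration of how the template works (an expository sketch, not part of the formal nomenclature), fixing T as Si and choosing occupants for the other sites yields familiar endmember formulas; the iron in schorl is the divalent ion:

```python
# Filling the XY3Z6(T6O18)(BO3)3V3W sites for two common endmembers.
# T = Si and the B site = B are fixed here for simplicity; schorl's
# Y-site iron is divalent (Fe2+).

TEMPLATE = "{X}{Y}3{Z}6(Si6O18)(BO3)3{V}3{W}"

ENDMEMBERS = {
    "schorl":  dict(X="Na", Y="Fe", Z="Al", V="(OH)", W="(OH)"),
    "dravite": dict(X="Na", Y="Mg", Z="Al", V="(OH)", W="(OH)"),
}

for name, sites in ENDMEMBERS.items():
    print(f"{name:8s} {TEMPLATE.format(**sites)}")
# schorl   NaFe3Al6(Si6O18)(BO3)3(OH)3(OH)
# dravite  NaMg3Al6(Si6O18)(BO3)3(OH)3(OH)
```

Natural tourmalines are solid solutions between such endmembers rather than pure compositions.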
Mineral species that were named before the IMA was founded in 1958 do not have an IMA number.
The IMA commission on new mineral names published a list of approved symbols for each mineral species in 2021.
A revised nomenclature for the tourmaline group was published in 2011.
Physical properties
Crystal structure
Tourmaline is a six-member ring cyclosilicate with a trigonal crystal system. It occurs as long, slender to thick prismatic and columnar crystals that are usually triangular in cross-section, often with curved striated faces. The style of termination at the ends of crystals is sometimes asymmetrical, called hemimorphism. Small slender prismatic crystals are common in a fine-grained granite called aplite, often forming radial daisy-like patterns. Tourmaline is distinguished by its three-sided prisms; no other common mineral has three sides. Prism faces often have heavy vertical striations that produce a rounded triangular effect. Tourmaline is rarely perfectly euhedral; an exception was the fine dravite tourmalines of Yinnietharra, in western Australia, a deposit discovered in the 1970s but now exhausted. All hemimorphic crystals are piezoelectric, and are often pyroelectric as well.
A crystal of tourmaline is built up of units consisting of a six-member silica ring that binds above to a large cation, such as sodium. The ring binds below to a layer of metal ions and hydroxyls or halogens, which structurally resembles a fragment of kaolin. This in turn binds to three triangular borate ions. Units joined end to end form columns running the length of the crystal. Each column binds with two other columns offset one-third and two-thirds of the vertical length of a single unit to form bundles of three columns. Bundles are packed together to form the final crystal structure. Because the neighboring columns are offset, the basic structural unit is not a unit cell: The actual unit cell of this structure includes portions of several units belonging to adjacent columns.
Color
Tourmaline has a variety of colors. Iron-rich tourmalines are usually black to bluish-black to deep brown, while magnesium-rich varieties are brown to yellow, and lithium-rich tourmalines are almost any color: blue, green, red, yellow, pink, etc. Rarely, it is colorless. Bi-colored and multicolored crystals are common, reflecting variations of fluid chemistry during crystallization. Crystals may be green at one end and pink at the other, or green on the outside and pink inside; this type is called watermelon tourmaline and is prized in jewelry. An excellent example of watermelon tourmaline jewelry is a brooch piece (1969, gold, watermelon tourmaline, diamonds) by Andrew Grima (British, b. Italy, 1921–2007), in the collection of Kimberly Klosterman and on display at the Cincinnati Art Museum. Some forms of tourmaline are dichroic; they change color when viewed from different directions.
The pink color of tourmalines from many localities is the result of prolonged natural irradiation. During their growth, these tourmaline crystals incorporated Mn²⁺ and were initially very pale. Due to natural gamma-ray exposure from radioactive decay of ⁴⁰K in their granitic environment, gradual formation of Mn³⁺ ions occurs, which is responsible for the deepening of the pink to red color.
Magnetism
Opaque black schorl and yellow tsilaisite are idiochromatic tourmaline species that have high magnetic susceptibilities due to high concentrations of iron and manganese respectively. Most gem-quality tourmalines are of the elbaite species. Elbaite tourmalines are allochromatic, deriving most of their color and magnetic susceptibility from schorl (which imparts iron) and tsilaisite (which imparts manganese).
Red and pink tourmalines have the lowest magnetic susceptibilities among the elbaites, while tourmalines with bright yellow, green and blue colors are the most magnetic elbaites. Dravite species such as green chromium dravite and brown dravite are diamagnetic. A handheld neodymium magnet can be used to identify or separate some types of tourmaline gems from others. For example, blue indicolite tourmaline is the only blue gemstone of any kind that will show a drag response when a neodymium magnet is applied. Any blue tourmaline that is diamagnetic can be identified as paraiba tourmaline colored by copper in contrast to magnetic blue tourmaline colored by iron.
Treatments
Some tourmaline gems, especially pink to red stones, are altered by heat treatment to improve their color. Overly dark red stones can be lightened by careful heat treatment. The pink color in manganese-containing near-colorless to pale pink stones can be greatly increased by irradiation with gamma rays or electron beams. Irradiation is almost impossible to detect in tourmalines and does not currently affect their value. Heavily included tourmalines, such as rubellite and Brazilian paraiba, are sometimes clarity-enhanced. A clarity-enhanced tourmaline (especially the paraiba variety) is worth much less than an untreated gem of equal clarity.
Geology
Tourmaline is found in granite and granite pegmatites and in metamorphic rocks such as schist and marble. Schorl and lithium-rich tourmalines are usually found in granite and granite pegmatite. Magnesium-rich tourmalines, dravites, are generally restricted to schists and marble. Tourmaline is a durable mineral and can be found in minor amounts as grains in sandstone and conglomerate, and is part of the ZTR index for highly weathered sediments.
Localities
Gem and specimen tourmaline is mined chiefly in Brazil and many parts of Africa, including Tanzania, Nigeria, Kenya, Madagascar, Mozambique, Malawi, and Namibia. It is also mined in Asia, notably in Pakistan, Afghanistan, and Indonesia as well as in Sri Lanka and India, where some placer material suitable for gem use is found.
United States
Some fine gems and specimen material have been produced in the United States, with the first discoveries in 1822 in the state of Maine. California became a large producer of tourmaline in the early 1900s. The Maine deposits tend to produce crystals in raspberry pink-red as well as minty greens, while the California deposits are known for bright pinks and bicolors. During the early 1900s, Maine and California were the world's largest producers of gem tourmalines. The Empress Dowager Cixi of China loved pink tourmaline and bought large quantities for gemstones and carvings from the then-new Himalaya Mine in San Diego County, California. It is not clear when the first tourmaline was found in California; Native Americans had used pink and green tourmaline as funeral gifts for centuries. The first documented case was in 1890, when Charles Russell Orcutt found pink tourmaline at what later became the Stewart Mine at Pala, in San Diego County, California.
Brazil
Almost every color of tourmaline can be found in Brazil, especially in the states of Minas Gerais and Bahia. A new type of tourmaline found in the state of Paraíba, which soon became known as paraiba tourmaline, came in unusually vivid blues and greens; the element copper was determined to be important in its coloration. Brazilian paraiba tourmaline usually contains abundant inclusions. Much of the paraiba tourmaline from Brazil does not actually come from Paraíba but from the neighboring state of Rio Grande do Norte. Material from Rio Grande do Norte is often somewhat less intense in color, but many fine gems are found there.
A large bluish-green tourmaline from Paraiba, measuring and weighing , is the world's largest cut tourmaline. Owned by Billionaire Business Enterprises, it was presented in Montreal, Quebec, Canada, on 14 October 2009.
Africa
In the late 1990s, copper-containing tourmaline was found in Nigeria. The material was generally paler and less saturated than the Brazilian material, although it was typically much less included. A more recent African discovery, from Mozambique, has also produced tourmaline colored by copper, similar to the Brazilian paraiba. The Mozambique paraiba material is usually more intensely colored than the Nigerian; its colors are similar to those of Brazilian paraiba, but it is relatively cheaper and tends to offer better clarity and larger sizes. In recent years the price of these gemstones has risen significantly.
Another highly valuable variety is chrome tourmaline, a rare type of dravite tourmaline from Tanzania. Chrome tourmaline is a rich green color due to the presence of chromium atoms in the crystal. Of the standard elbaite colors, blue indicolite gems are typically the most valuable, followed by green verdelite and pink to red rubellite.
Twin paradox

In physics, the twin paradox is a thought experiment in special relativity involving twins, one of whom takes a space voyage at relativistic speeds and returns home to find that the twin who remained on Earth has aged more. This result appears puzzling because each twin sees the other twin as moving, and so, as a consequence of an incorrect and naive application of time dilation and the principle of relativity, each should paradoxically find the other to have aged less. However, this scenario can be resolved within the standard framework of special relativity: the travelling twin's trajectory involves two different inertial frames, one for the outbound journey and one for the inbound journey. Another way to understand the paradox is to realize the travelling twin is undergoing acceleration, which makes them a non-inertial observer. In both views there is no symmetry between the spacetime paths of the twins. Therefore, the twin paradox is not actually a paradox in the sense of a logical contradiction.
Starting with Paul Langevin in 1911, there have been various explanations of this paradox. These explanations "can be grouped into those that focus on the effect of different standards of simultaneity in different frames, and those that designate the acceleration [experienced by the travelling twin] as the main reason". Max von Laue argued in 1913 that since the traveling twin must be in two separate inertial frames, one on the way out and another on the way back, this frame switch is the reason for the aging difference. Explanations put forth by Albert Einstein and Max Born invoked gravitational time dilation to explain the aging as a direct effect of acceleration. However, it has been proven that neither general relativity, nor even acceleration, are necessary to explain the effect, as the effect still applies if two astronauts pass each other at the turnaround point and synchronize their clocks at that point. The situation at the turnaround point can be thought of as where a pair of observers, one travelling away from the starting point and another travelling toward it, pass by each other, and where the clock reading of the first observer is transferred to that of the second one, both maintaining constant speed, with both trip times being added at the end of their journey.
History
In his famous paper on special relativity in 1905, Albert Einstein deduced that for two stationary and synchronous clocks that are placed at points A and B, if the clock at A is moved along the line AB and stops at B, the clock that moved from A would lag behind the clock at B. He stated that this result would also apply if the path from A to B was polygonal or circular. Einstein considered this to be a natural consequence of special relativity, not a paradox as some suggested, and in 1911 he restated and elaborated on this result; physicist Robert Resnick later commented on Einstein's statement.
In 1911, Paul Langevin gave a "striking example" by describing the story of a traveler making a trip at a Lorentz factor of γ = 100 (99.995% of the speed of light). The traveler remains in a projectile for one year of his time, and then reverses direction. Upon return, the traveler will find that he has aged two years, while 200 years have passed on Earth. During the trip, both the traveler and Earth keep sending signals to each other at a constant rate, which places Langevin's story among the Doppler shift versions of the twin paradox. The relativistic effects upon the signal rates are used to account for the different aging rates. The asymmetry that occurred because only the traveler underwent acceleration is used to explain why there is any difference at all, because "any change of velocity, or any acceleration has an absolute meaning".
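The numbers in Langevin's story can be checked directly from the time-dilation relation. A minimal Python sketch (illustrative only, not part of Langevin's account) reproduces the factor of 100 between the traveller's two years and the two hundred Earth years:

    import math

    beta = 0.99995                       # speed as a fraction of c
    gamma = 1 / math.sqrt(1 - beta**2)   # Lorentz factor, ~100
    traveler_years = 2                   # proper time aboard the projectile
    earth_years = gamma * traveler_years
    print(round(gamma), round(earth_years))  # -> 100 200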
Max von Laue (1911, 1913) elaborated on Langevin's explanation. Using Hermann Minkowski's spacetime formalism, Laue went on to demonstrate that the world lines of the inertially moving bodies maximize the proper time elapsed between two events. He also wrote that the asymmetric aging is completely accounted for by the fact that the astronaut twin travels in two separate frames, while the Earth twin remains in one frame, and the time of acceleration can be made arbitrarily small compared with the time of inertial motion. Eventually, Lord Halsbury and others removed any acceleration by introducing the "three-brother" approach. The traveling twin transfers his clock reading to a third one, traveling in the opposite direction. Another way of avoiding acceleration effects is the use of the relativistic Doppler effect .
Neither Einstein nor Langevin considered such results to be problematic: Einstein only called it "peculiar" while Langevin presented it as a consequence of absolute acceleration. Both men argued that, from the time differential illustrated by the story of the twins, no self-contradiction could be constructed. In other words, neither Einstein nor Langevin saw the story of the twins as constituting a challenge to the self-consistency of relativistic physics.
Specific example
Consider a space ship traveling from Earth to the nearest star system: a distance d = 4 light years away, at a speed v = 0.8c (i.e., 80% of the speed of light).
To make the numbers easy, the ship is assumed to attain full speed in a negligible time upon departure (even though it would actually take about 9 months accelerating at 1 g to get up to speed). Similarly, at the end of the outgoing trip, the change in direction needed to start the return trip is assumed to occur in a negligible time. This can also be modelled by assuming that the ship is already in motion at the beginning of the experiment and that the return event is modelled by a Dirac delta distribution acceleration.
The parties will observe the situation as follows:
Earth perspective
The Earth-based mission control reasons about the journey this way: the round trip will take t = 2d/v = 10 years in Earth time (i.e. everybody who stays on Earth will be 10 years older when the ship returns). The amount of time as measured on the ship's clocks and the aging of the travelers during their trip will be reduced by the factor ε = √(1 − v²/c²), the reciprocal of the Lorentz factor (time dilation). In this case ε = 0.6, and the travelers will have aged only 0.6 × 10 = 6 years when they return.
Travellers' perspective
The ship's crew members also calculate the particulars of their trip from their perspective. They know that the distant star system and the Earth are moving relative to the ship at speed v during the trip. In their rest frame the distance between the Earth and the star system is εd = 0.6 × 4 = 2.4 light years (length contraction), for both the outward and return journeys. Each half of the journey takes 2.4/0.8 = 3 years, and the round trip takes twice as long (6 years). Their calculations show that they will arrive home having aged 6 years. The travelers' final calculation about their aging is in complete agreement with the calculations of those on Earth, though they experience the trip quite differently from those who stay at home.
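Both bookkeeping methods are easy to verify numerically. The following Python sketch (illustrative; units of years and light-years with c = 1 are assumed) computes the trip from both frames and shows that they agree on the travellers' aging:

    import math

    d, v = 4.0, 0.8              # light-years, fraction of c
    eps = math.sqrt(1 - v**2)    # reciprocal Lorentz factor, 0.6

    # Earth frame: coordinate round-trip time, then dilate for the ship clocks
    t_earth = 2 * d / v          # 10 years
    tau_ship = eps * t_earth     # 6 years

    # Ship frame: contracted distance, then ordinary kinematics
    d_ship = eps * d             # 2.4 light-years each way
    tau_ship_again = 2 * d_ship / v
    print(t_earth, tau_ship, tau_ship_again)  # -> 10.0 6.0 6.0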
Conclusion
No matter what method they use to predict the clock readings, everybody will agree about them. If twins are born on the day the ship leaves, and one goes on the journey while the other stays on Earth, they will meet again when the traveler is 6 years old and the stay-at-home twin is 10 years old.
Resolution of the paradox in special relativity
The paradoxical aspect of the twins' situation arises from the fact that at any given moment the travelling twin's clock is running slow in the earthbound twin's inertial frame, but based on the relativity principle one could equally argue that the earthbound twin's clock is running slow in the travelling twin's inertial frame. One proposed resolution is based on the fact that the earthbound twin is at rest in the same inertial frame throughout the journey, while the travelling twin is not: in the simplest version of the thought-experiment, the travelling twin switches at the midpoint of the trip from being at rest in an inertial frame which moves in one direction (away from the Earth) to being at rest in an inertial frame which moves in the opposite direction (towards the Earth). In this approach, determining which observer switches frames and which does not is crucial. Although both twins can legitimately claim that they are at rest in their own frame, only the traveling twin experiences acceleration when the spaceship engines are turned on. This acceleration, measurable with an accelerometer, makes his rest frame temporarily non-inertial. This reveals a crucial asymmetry between the twins' perspectives: although we can predict the aging difference from both perspectives, we need to use different methods to obtain correct results.
Role of acceleration
Although some solutions attribute a crucial role to the acceleration of the travelling twin at the time of the turnaround, others note that the effect also arises if one imagines two separate travellers, one outward-going and one inward-coming, who pass each other and synchronize their clocks at the point corresponding to "turnaround" of a single traveller. In this version, physical acceleration of the travelling clock plays no direct role; "the issue is how long the world-lines are, not how bent". The length referred to here is the Lorentz-invariant length or "proper time interval" of a trajectory which corresponds to the elapsed time measured by a clock following that trajectory (see Section Difference in elapsed time as a result of differences in twins' spacetime paths below). In Minkowski spacetime, the travelling twin must feel a different history of accelerations from the earthbound twin, even if this just means accelerations of the same size separated by different amounts of time, however "even this role for acceleration can be eliminated in formulations of the twin paradox in curved spacetime, where the twins can fall freely along space-time geodesics between meetings".
Relativity of simultaneity
For a moment-by-moment understanding of how the time difference between the twins unfolds, one must understand that in special relativity there is no concept of absolute present. For different inertial frames there are different sets of events that are simultaneous in that frame. This relativity of simultaneity means that switching from one inertial frame to another requires an adjustment in what slice through spacetime counts as the "present". In the spacetime diagram on the right, drawn for the reference frame of the Earth-based twin, that twin's world line coincides with the vertical axis (his position is constant in space, moving only in time). On the first leg of the trip, the second twin moves to the right (black sloped line); and on the second leg, back to the left. Blue lines show the planes of simultaneity for the traveling twin during the first leg of the journey; red lines, during the second leg. Just before turnaround, the traveling twin calculates the age of the Earth-based twin by measuring the interval along the vertical axis from the origin to the upper blue line. Just after turnaround, if he recalculates, he will measure the interval from the origin to the lower red line. In a sense, during the U-turn the plane of simultaneity jumps from blue to red and very quickly sweeps over a large segment of the world line of the Earth-based twin. When one transfers from the outgoing inertial frame to the incoming inertial frame there is a jump discontinuity in the age of the Earth-based twin (6.4 years in the example above).
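The size of that jump follows from the relativity of simultaneity itself: for a turnaround at Earth-frame distance d, switching between frames of velocity +v and −v shifts the Earth time regarded as "now" by 2dv/c². A short Python check using the example's numbers (illustrative, c = 1 units):

    d, v = 4.0, 0.8    # light-years; fraction of c
    # Earth-clock readings simultaneous with the turnaround event:
    # outbound frame: t_turn - d*v = 1.8 yr; inbound frame: t_turn + d*v = 8.2 yr
    jump = 2 * d * v   # Earth time swept over during the turnaround
    print(jump)        # -> 6.4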
A non-spacetime approach
As mentioned above, an "out and back" twin paradox adventure may incorporate the transfer of clock reading from an "outgoing" astronaut to an "incoming" astronaut, thus eliminating the effect of acceleration. Also, the physical acceleration of clocks does not contribute to the kinematical effects of special relativity. Rather, in special relativity, the time differential between two reunited clocks is produced purely by uniform inertial motion, as discussed in Einstein's original 1905 relativity paper, as well as in all subsequent kinematical derivations of the Lorentz transformations.
Because spacetime diagrams incorporate Einstein's clock synchronization (with its lattice of clocks methodology), there will be a requisite jump in the reading of the Earth clock time made by a "suddenly returning astronaut" who inherits a "new meaning of simultaneity" in keeping with a new clock synchronization dictated by the transfer to a different inertial frame, as explained in Spacetime Physics by John A. Wheeler.
If, instead of incorporating Einstein's clock synchronization (lattice of clocks), the astronaut (outgoing and incoming) and the Earth-based party regularly update each other on the status of their clocks by way of sending radio signals (which travel at light speed), then all parties will note an incremental buildup of asymmetry in time-keeping, beginning at the "turn around" point. Prior to the "turn around", each party regards the other party's clock to be recording time differently from his own, but the noted difference is symmetrical between the two parties. After the "turn around", the noted differences are not symmetrical, and the asymmetry grows incrementally until the two parties are reunited. Upon finally reuniting, this asymmetry can be seen in the actual difference showing on the two reunited clocks.
The equivalence of biological aging and clock time-keeping
All processes—chemical, biological, measuring apparatus functioning, human perception involving the eye and brain, the communication of force—are constrained by the speed of light. Clock-like functioning exists at every level, dependent on light speed and the inherent delay at even the atomic level. Biological aging, therefore, is in no way different from clock time-keeping, and would be slowed in the same manner as a clock.
What it looks like: the relativistic Doppler shift
In view of the frame-dependence of simultaneity for events at different locations in space, some treatments prefer a more phenomenological approach, describing what the twins would observe if each sent out a series of regular radio pulses, equally spaced in time according to the emitter's clock. This is equivalent to asking, if each twin sent a video feed of themselves to each other, what do they see in their screens? Or, if each twin always carried a clock indicating his age, what time would each see in the image of their distant twin and his clock?
Shortly after departure, the traveling twin sees the stay-at-home twin with no time delay. At arrival, the image in the ship screen shows the staying twin as he was 1 year after launch, because radio emitted from Earth 1 year after launch gets to the other star 4 years afterwards and meets the ship there. During this leg of the trip, the traveling twin sees his own clock advance 3 years and the clock in the screen advance 1 year, so it seems to advance at 1/3 the normal rate, just 20 image seconds per ship minute. This combines the effects of time dilation due to motion (by factor ε = 0.6, five years on Earth are 3 years on ship) and the effect of increasing light-time-delay (which grows from 0 to 4 years).
Of course, the observed frequency of the transmission is also 1/3 the frequency of the transmitter (a reduction in frequency; "red-shifted"). This is called the relativistic Doppler effect. The frequency of clock-ticks (or of wavefronts) which one sees from a source with rest frequency frest is

fobs = frest √[(1 − v/c) / (1 + v/c)]

when the source is moving directly away. This is fobs = (1/3) frest for v/c = 0.8.
As for the stay-at-home twin, he gets a slowed signal from the ship for 9 years, at a frequency 1/3 the transmitter frequency. During these 9 years, the clock of the traveling twin in the screen seems to advance 3 years, so both twins see the image of their sibling aging at a rate only 1/3 their own rate. Expressed another way, they would both see the other's clock run at 1/3 their own clock speed. If they factor out of the calculation the fact that the light-time delay of the transmission is increasing at a rate of 0.8 seconds per second, both can work out that the other twin is aging slower, at a 60% rate.
Then the ship turns back toward home. The clock of the staying twin shows "1 year after launch" in the screen of the ship, and during the 3 years of the trip back it increases up to "10 years after launch", so the clock in the screen seems to be advancing 3 times faster than usual.
When the source is moving towards the observer, the observed frequency is higher ("blue-shifted") and given by

fobs = frest √[(1 + v/c) / (1 − v/c)]

This is fobs = 3 frest for v/c = 0.8.
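Both shift factors, and the way the underlying 60% aging rate is recovered from them, can be checked with a few lines of Python (illustrative):

    import math

    def doppler(beta, receding):
        # Observed/rest frequency ratio for a source moving at beta = v/c
        s = 1.0 if receding else -1.0
        return math.sqrt((1 - s * beta) / (1 + s * beta))

    beta = 0.8
    red = doppler(beta, receding=True)    # -> 1/3 (red-shift)
    blue = doppler(beta, receding=False)  # -> 3.0 (blue-shift)
    # Factoring out the classical light-time-delay factor recovers the
    # time-dilation rate 0.6 in both directions:
    print(red, blue, red * (1 + beta), blue * (1 - beta))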
As for the screen on Earth, it shows the trip back beginning 9 years after launch, with the traveling clock in the screen showing that 3 years have passed on the ship. One year later, the ship is back home and the clock shows 6 years. So, during the trip back, both twins see their sibling's clock going 3 times faster than their own. Factoring out the fact that the light-time-delay is decreasing by 0.8 seconds every second, each twin calculates that the other twin is aging at 60% of his own aging speed.
The x–t (space–time) diagrams at right show the paths of light signals traveling between Earth and ship (1st diagram) and between ship and Earth (2nd diagram). These signals carry the images of each twin and his age-clock to the other twin. The vertical black line is the Earth's path through spacetime and the other two sides of the triangle show the ship's path through spacetime (as in the Minkowski diagram above). As far as the sender is concerned, he transmits these at equal intervals (say, once an hour) according to his own clock; but according to the clock of the twin receiving these signals, they are not being received at equal intervals.
After the ship has reached its cruising speed of 0.8c, each twin would see 1 second pass in the received image of the other twin for every 3 seconds of his own time. That is, each would see the image of the other's clock going slow, not just slow by the factor 0.6, but even slower because light-time-delay is increasing 0.8 seconds per second. This is shown in the figures by red light paths. At some point, the images received by each twin change so that each would see 3 seconds pass in the image for every second of his own time. That is, the received signal has been increased in frequency by the Doppler shift. These high frequency images are shown in the figures by blue light paths.
The asymmetry in the Doppler shifted images
The asymmetry between the Earth and the space ship is manifested in this diagram by the fact that more blue-shifted (fast aging) images are received by the ship. Put another way, the space ship sees the image change from a red-shift (slower aging of the image) to a blue-shift (faster aging of the image) at the midpoint of its trip (at the turnaround, 3 years after departure); the Earth sees the image of the ship change from red-shift to blue shift after 9 years (almost at the end of the period that the ship is absent). In the next section, one will see another asymmetry in the images: the Earth twin sees the ship twin age by the same amount in the red and blue shifted images; the ship twin sees the Earth twin age by different amounts in the red and blue shifted images.
Calculation of elapsed time from the Doppler diagram
The twin on the ship sees low frequency (red) images for 3 years. During that time, he would see the Earth twin in the image grow older by 3 × 1/3 = 1 year. He then sees high frequency (blue) images during the back trip of 3 years. During that time, he would see the Earth twin in the image grow older by 3 × 3 = 9 years. When the journey is finished, the image of the Earth twin has aged by 1 + 9 = 10 years.
The Earth twin sees 9 years of slow (red) images of the ship twin, during which the ship twin ages (in the image) by 9 × 1/3 = 3 years. He then sees fast (blue) images for the remaining 1 year until the ship returns, during which the ship twin ages by 1 × 3 = 3 years. The total aging of the ship twin in the images received by Earth is 3 + 3 = 6 years, so the ship twin returns younger (6 years as opposed to 10 years on Earth).
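That bookkeeping can be tabulated mechanically. The following illustrative Python sums the image-aging seen by each twin over the red- and blue-shifted intervals and recovers the 10 years and 6 years:

    red, blue = 1/3, 3   # image rates at v/c = 0.8 (see the Doppler formulas)

    # Ship twin: 3 years of red images out, 3 years of blue images back
    earth_age_seen = 3 * red + 3 * blue   # -> 10 years
    # Earth twin: 9 years of red images, then 1 year of blue images
    ship_age_seen = 9 * red + 1 * blue    # -> 6 years
    print(earth_age_seen, ship_age_seen)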
The distinction between what they see and what they calculate
To avoid confusion, note the distinction between what each twin sees and what each would calculate. Each sees an image of his twin which he knows originated at a previous time and which he knows is Doppler shifted. He does not take the elapsed time in the image as the age of his twin now.
If he wants to calculate when his twin was the age shown in the image (i.e. how old he himself was then), he has to determine how far away his twin was when the signal was emitted—in other words, he has to consider simultaneity for a distant event.
If he wants to calculate how fast his twin was aging when the image was transmitted, he adjusts for the Doppler shift. For example, when he receives high frequency images (showing his twin aging rapidly) with frequency fobs = 3 frest, he does not conclude that the twin was aging that rapidly when the image was generated, any more than he concludes that the siren of an ambulance is emitting the frequency he hears. He knows that the Doppler effect has increased the image frequency by the factor 1 / (1 − v/c). Therefore, he calculates that his twin was aging at the rate of

fobs (1 − v/c) = 3 frest × 0.2 = 0.6 frest = ε frest

when the image was emitted. A similar calculation reveals that his twin was aging at the same reduced rate of ε frest in all low frequency images.
Simultaneity in the Doppler shift calculation
It may be difficult to see where simultaneity comes into the Doppler shift calculation, and indeed this calculation is often preferred because one does not have to worry about simultaneity. As seen above, the ship twin can convert his received Doppler-shifted rate to the slower rate of the distant clock for both red and blue images. If he ignores simultaneity, he might say his twin was aging at the reduced rate throughout the journey and therefore should be younger than he is. He is now back to square one, and has to take into account the change in his notion of simultaneity at the turnaround. The rate he can calculate for the image (corrected for Doppler effect) is the rate of the Earth twin's clock at the moment it was sent, not at the moment it was received. Since he receives an unequal number of red and blue shifted images, he should realize that the red and blue shifted emissions were not emitted over equal time periods for the Earth twin, and therefore he must account for simultaneity at a distance.
Viewpoint of the traveling twin
During the turnaround, the traveling twin is in an accelerated reference frame. According to the equivalence principle, the traveling twin may analyze the turnaround phase as if the stay-at-home twin were freely falling in a gravitational field and as if the traveling twin were stationary. A 1918 paper by Einstein presents a conceptual sketch of the idea. From the viewpoint of the traveler, a calculation for each separate leg, ignoring the turnaround, leads to a result in which the Earth clocks age less than the traveler. For example, if the Earth clocks age 1 day less on each leg, the amount that the Earth clocks will lag behind amounts to 2 days. The physical description of what happens at turnaround has to produce a contrary effect of double that amount: 4 days' advancing of the Earth clocks. Then the traveler's clock will end up with a net 2-day delay on the Earth clocks, in agreement with calculations done in the frame of the stay-at-home twin.
The mechanism for the advancing of the stay-at-home twin's clock is gravitational time dilation. When an observer finds that inertially moving objects are being accelerated with respect to themselves, those objects are in a gravitational field insofar as relativity is concerned. For the traveling twin at turnaround, this gravitational field fills the universe. In a weak field approximation, clocks tick at a rate of t' = t (1 + Φ/c²), where Φ is the difference in gravitational potential. In this case, Φ = gh, where g is the acceleration of the traveling observer during turnaround and h is the distance to the stay-at-home twin. The rocket is firing towards the stay-at-home twin, thereby placing that twin at a higher gravitational potential. Due to the large distance between the twins, the stay-at-home twin's clocks will appear to be sped up enough to account for the difference in proper times experienced by the twins. It is no accident that this speed-up is enough to account for the simultaneity shift described above. The general relativity solution for a static homogeneous gravitational field and the special relativity solution for finite acceleration produce identical results.
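The exact relativistic bookkeeping is subtle, but as a lowest-order consistency check the gain can be estimated with the example's numbers: the Earth clock runs fast by roughly a factor 1 + gh/c² during the turnaround, and integrating g over the turnaround gives, in the Newtonian approximation, the velocity change 2v. An illustrative Python sketch under these assumptions:

    v, h = 0.8, 4.0   # fraction of c; Earth-frame distance in light-years (c = 1)
    # Weak-field estimate: accumulated Earth-clock gain ~ (g*h) * tau_turn,
    # with g * tau_turn taken as the velocity change 2v:
    gain = 2 * v * h  # -> 6.4 years, matching the simultaneity jump above
    print(gain)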
Other calculations have been done for the traveling twin (or for any observer who sometimes accelerates), which do not involve the equivalence principle, and which do not involve any gravitational fields. Such calculations are based only on the special theory, not the general theory, of relativity. One approach calculates surfaces of simultaneity by considering light pulses, in accordance with Hermann Bondi's idea of the k-calculus. A second approach calculates a straightforward but technically complicated integral to determine how the traveling twin measures the elapsed time on the stay-at-home clock. An outline of this second approach is given in a separate section below.
Difference in elapsed time as a result of differences in twins' spacetime paths
The following analysis shows several things:
how to employ a precise mathematical approach in calculating the differences in the elapsed time
how to prove exactly the dependency of the elapsed time on the different paths taken through spacetime by the twins
how to quantify the differences in elapsed time
how to calculate proper time as a function (integral) of coordinate time
Let clock K be associated with the "stay at home twin".
Let clock K' be associated with the rocket that makes the trip.
At the departure event both clocks are set to 0.
Phase 1: Rocket (with clock K') embarks with constant proper acceleration a during a time Ta as measured by clock K until it reaches some velocity V.
Phase 2: Rocket keeps coasting at velocity V during some time Tc according to clock K.
Phase 3: Rocket fires its engines in the opposite direction of K during a time Ta according to clock K until it is at rest with respect to clock K. The constant proper acceleration has the value −a, in other words the rocket is decelerating.
Phase 4: Rocket keeps firing its engines in the opposite direction of K, during the same time Ta according to clock K, until K' regains the same speed V with respect to K, but now towards K (with velocity −V).
Phase 5: Rocket keeps coasting towards K at speed V during the same time Tc according to clock K.
Phase 6: Rocket again fires its engines in the direction of K, so it decelerates with a constant proper acceleration a during a time Ta, still according to clock K, until both clocks reunite.
Knowing that the clock K remains inertial (stationary), the total accumulated proper time Δτ of clock K' will be given by the integral function of coordinate time Δt

Δτ = ∫₀^Δt √(1 − (v(t)/c)²) dt

where v(t) is the coordinate velocity of clock K' as a function of t according to clock K, and, e.g. during phase 1, given by

v(t) = a t / √(1 + (a t / c)²)
This integral can be calculated for the 6 phases:

Phase 1: (c/a) arsinh(a Ta / c)
Phase 2: Tc √(1 − V²/c²)
Phase 3: (c/a) arsinh(a Ta / c)
Phase 4: (c/a) arsinh(a Ta / c)
Phase 5: Tc √(1 − V²/c²)
Phase 6: (c/a) arsinh(a Ta / c)
where a is the proper acceleration, felt by clock K' during the acceleration phase(s) and where the following relations hold between V, a and Ta:

V = a Ta / √(1 + (a Ta / c)²)   and equivalently   a Ta = V / √(1 − V²/c²)
So the traveling clock K' will show an elapsed time of

Δτ = 2 Tc √(1 − V²/c²) + (4c/a) arsinh(a Ta / c)

which can be expressed as

Δτ = 2 Tc √(1 − V²/c²) + (4c/a) arsinh( V / (c √(1 − V²/c²)) )

whereas the stationary clock K shows an elapsed time of

Δt = 2 Tc + 4 Ta

which is, for every possible value of a, Ta, Tc and V, larger than the reading of clock K':

Δt > Δτ
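These closed forms can be cross-checked by integrating the proper-time formula numerically. The sketch below (illustrative Python with c = 1; the values of a, Ta and Tc are arbitrary) builds v(t) piecewise from the phase definitions and compares the integral with the analytic expression:

    import math

    a, Ta, Tc = 1.0, 1.0, 2.0   # proper acceleration and clock-K durations (c = 1)
    V = a * Ta / math.sqrt(1 + (a * Ta) ** 2)

    def speed(t):
        # |v(t)| in clock-K time over the six phases
        phases = [(Ta, "acc"), (Tc, "coast"), (Ta, "dec"),
                  (Ta, "acc"), (Tc, "coast"), (Ta, "dec")]
        for dur, kind in phases:
            if t <= dur:
                if kind == "coast":
                    return V
                s = t if kind == "acc" else dur - t
                return a * s / math.sqrt(1 + (a * s) ** 2)
            t -= dur
        return 0.0

    total = 2 * Tc + 4 * Ta
    n = 200_000
    dt = total / n
    tau = sum(math.sqrt(1 - speed((i + 0.5) * dt) ** 2) for i in range(n)) * dt
    analytic = 2 * Tc * math.sqrt(1 - V * V) + (4 / a) * math.asinh(a * Ta)
    print(tau, analytic)  # the two values agree to high precision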
Difference in elapsed times: how to calculate it from the ship
In the standard proper time formula

Δτ = ∫₀^Δt √(1 − (v(t)/c)²) dt
Δτ represents the time of the non-inertial (travelling) observer K' as a function of the elapsed time Δt of the inertial (stay-at-home) observer K for whom observer K' has velocity v(t) at time t.
To calculate the elapsed time Δt of the inertial observer K as a function of the elapsed time Δτ of the non-inertial observer K', where only quantities measured by K' are accessible, the following formula can be used:

Δt = ∫₀^Δτ cosh( (1/c) ∫₀^τ a(τ′) dτ′ ) dτ

where a(τ) is the proper acceleration of the non-inertial observer K' as measured by himself (for instance with an accelerometer) during the whole round-trip. The Cauchy–Schwarz inequality can be used to show that the inequality Δt > Δτ follows from the previous expression; intuitively, since cosh x ≥ 1, the integrand never falls below 1, with equality only if the acceleration vanishes throughout.
Using the Dirac delta function to model the infinite acceleration phase in the standard case of the traveller having constant speed v during the outbound and the inbound trip, the formula produces the known result:

Δτ = Δt √(1 − v²/c²)
In the case where the accelerated observer K' departs from K with zero initial velocity, the general equation reduces to a simpler form, which, in the smooth version of the twin paradox where the traveller has constant proper acceleration phases, successively given by a, −a, −a, a, results in

Δt = (4/a) sinh(a Δτ / 4)

where the convention c = 1 is used, in accordance with the above expression with acceleration phases Ta = Δτ/4 and inertial (coasting) phases Tc = 0.
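As a quick consistency check (illustrative Python, c = 1): setting Tc = 0 and Ta = Δt/4 in the earlier phase formulas inverts this expression exactly.

    import math

    a, tau = 0.5, 3.0                           # proper acceleration, total proper time
    t = (4 / a) * math.sinh(a * tau / 4)        # coordinate time of the smooth trip
    tau_back = (4 / a) * math.asinh(a * t / 4)  # phase formula with Tc = 0
    print(t, tau_back)                          # tau_back recovers tau = 3.0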
A rotational version
Twins Bob and Alice inhabit a space station in circular orbit around a massive body in space. Bob suits up and exits the station. While Alice remains inside the station, continuing to orbit with it as before, Bob uses a rocket propulsion system to cease orbiting and hover where he was. When the station completes an orbit and returns to Bob, he rejoins Alice. Alice is now younger than Bob. In addition to rotational acceleration, Bob must decelerate to become stationary and then accelerate again to match the orbital speed of the space station.
No twin paradox in an absolute frame of reference
Einstein's conclusion of an actual difference in registered clock times (or aging) between reunited parties caused Paul Langevin to posit an actual, albeit experimentally indiscernible, absolute frame of reference:
In 1911, Langevin wrote: "A uniform translation in the aether has no experimental sense. But because of this it should not be concluded, as has sometimes happened prematurely, that the concept of aether must be abandoned, that the aether is non-existent and inaccessible to experiment. Only a uniform velocity relative to it cannot be detected, but any change of velocity ... has an absolute sense."
In 1913, Henri Poincaré's posthumous Last Essays were published and there he had restated his position: "Today some physicists want to adopt a new convention. It is not that they are constrained to do so; they consider this new convention more convenient; that is all. And those who are not of this opinion can legitimately retain the old one."
In the relativity of Poincaré and Hendrik Lorentz, which assumes an absolute (though experimentally indiscernible) frame of reference, no paradox arises due to the fact that clock slowing (along with length contraction and velocity) is regarded as an actuality, hence the actual time differential between the reunited clocks.
In that interpretation, a party at rest with the totality of the cosmos (at rest with the barycenter of the universe, or at rest with a possible ether) would have the maximum rate of time-keeping and have non-contracted length. All the effects of Einstein's special relativity (consistent light-speed measure, as well as symmetrically measured clock-slowing and length-contraction across inertial frames) fall into place.
That interpretation of relativity, which John A. Wheeler calls "ether theory B (length contraction plus time contraction)", did not gain as much traction as Einstein's, which simply disregarded any deeper reality behind the symmetrical measurements across inertial frames. There is no physical test which distinguishes one interpretation from the other.
In 2005, Robert B. Laughlin (Physics Nobel Laureate, Stanford University), wrote about the nature of space: "It is ironic that Einstein's most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed ... The word 'ether' has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum. ... Relativity actually says nothing about the existence or nonexistence of matter pervading the universe, only that any such matter must have relativistic symmetry (i.e., as measured)."
In Special Relativity (1968), A. P. French wrote: "Note, though, that we are appealing to the reality of A's acceleration, and to the observability of the inertial forces associated with it. Would such effects as the twin paradox (specifically -- the time keeping differential between reunited clocks) exist if the framework of fixed stars and distant galaxies were not there? Most physicists would say no. Our ultimate definition of an inertial frame may indeed be that it is a frame having zero acceleration with respect to the matter of the universe at large."
| Physical sciences | Theory of relativity | Physics |
31439 | https://en.wikipedia.org/wiki/Tokamak | Tokamak | A tokamak is a device which uses a powerful magnetic field generated by external magnets to confine plasma in the shape of an axially symmetrical torus. The tokamak is one of several types of magnetic confinement devices being developed to produce controlled thermonuclear fusion power. The tokamak concept is currently one of the leading candidates for a practical fusion reactor.
The proposal to use controlled thermonuclear fusion for industrial purposes and a specific scheme using thermal insulation of high-temperature plasma by an electric field was first formulated by the Soviet physicist Oleg Lavrentiev in a mid-1950 paper. In 1951, Andrei Sakharov and Igor Tamm modified the scheme by proposing a theoretical basis for a thermonuclear reactor, where the plasma would have the shape of a torus and be held by a magnetic field.
The first tokamak was built in 1954. In 1968, an electron plasma temperature of 1 keV was reached on the tokamak T-3, built at the I. V. Kurchatov Institute of Atomic Energy under the leadership of academician L. A. Artsimovich.
By the mid-1960s, the tokamak designs began to show greatly improved performance. The initial results were released in 1965, but were ignored; Lyman Spitzer dismissed them out of hand after noting potential problems in their system for measuring temperatures. A second set of results was published in 1968, this time claiming performance far in advance of any other machine. When these were also met skeptically, the Soviets invited British scientists from the Culham laboratory (now the Culham Centre for Fusion Energy; Nicol Peacock et al.) to the USSR with their equipment. Measurements on the T-3 confirmed the results, spurring a worldwide stampede of tokamak construction. It had been demonstrated that a stable plasma equilibrium requires magnetic field lines that wind around the torus in a helix. Devices like the z-pinch and stellarator had attempted this, but demonstrated serious instabilities. It was the development of the concept now known as the safety factor (labelled q in mathematical notation) that guided tokamak development; by arranging the reactor so this critical factor q was always greater than 1, the tokamaks strongly suppressed the instabilities which plagued earlier designs.
By the mid-1970s, dozens of tokamaks were in use around the world. By the late 1970s, these machines had reached all of the conditions needed for practical fusion, although not at the same time nor in a single reactor. With the goal of breakeven (a fusion energy gain factor equal to 1) now in sight, a new series of machines were designed that would run on a fusion fuel of deuterium and tritium. These machines, notably the Joint European Torus (JET) and Tokamak Fusion Test Reactor (TFTR), had the explicit goal of reaching breakeven.
Instead, these machines demonstrated new problems that limited their performance. Solving these would require a much larger and more expensive machine, beyond the abilities of any one country. After an initial agreement between Ronald Reagan and Mikhail Gorbachev in November 1985, the International Thermonuclear Experimental Reactor (ITER) effort emerged and remains the primary international effort to develop practical fusion power. Many smaller designs, and offshoots like the spherical tokamak, continue to be used to investigate performance parameters and other issues. JET remains the record holder for fusion output, with 69 MJ of energy output over a 5-second period.
Etymology
The word tokamak is a transliteration of the Russian word токамак, an acronym of either "тороидальная камера с магнитными катушками" (toroidal'naya kamera s magnitnymi katushkami, "toroidal chamber with magnetic coils") or "тороидальная камера с аксиальным магнитным полем" (toroidal'naya kamera s aksial'nym magnitnym polem, "toroidal chamber with axial magnetic field").
The term "tokamak" was coined in 1957 by Igor Golovin, a student of academician Igor Kurchatov. It originally sounded like "tokamag" ("токамаг") — an acronym of the words "toroidal chamber magnetic" ("тороидальная камера магнитная"), but Natan Yavlinsky, the author of the first toroidal system, proposed replacing "-mag" with "-mak" for euphony. Later, this name was borrowed by many languages.
History
First steps
In 1934, Mark Oliphant, Paul Harteck and Ernest Rutherford were the first to achieve fusion on Earth, using a particle accelerator to shoot deuterium nuclei into metal foil containing deuterium or other atoms. This allowed them to measure the nuclear cross section of various fusion reactions, and determined that the deuterium–deuterium reaction occurred at a lower energy than other reactions, peaking at about 100,000 electronvolts (100 keV).
Accelerator-based fusion is not practical because the reaction cross section is tiny; most of the particles in the accelerator will scatter off the fuel, not fuse with it. These scatterings cause the particles to lose energy to the point where they can no longer undergo fusion. The energy put into these particles is thus lost, and it is easy to demonstrate this is much more energy than the resulting fusion reactions can release.
To maintain fusion and produce net energy output, the bulk of the fuel must be raised to high temperatures so its atoms are constantly colliding at high speed; this gives rise to the name thermonuclear due to the high temperatures needed to bring it about. In 1944, Enrico Fermi calculated the reaction would be self-sustaining at about 50,000,000 K; at that temperature, the rate that energy is given off by the reactions is high enough that they heat the surrounding fuel rapidly enough to maintain the temperature against losses to the environment, continuing the reaction.
During the Manhattan Project, the first practical way to reach these temperatures was created, using an atomic bomb. In 1944, Fermi gave a talk on the physics of fusion in the context of a then-hypothetical hydrogen bomb. However, some thought had already been given to a controlled fusion device, and James L. Tuck and Stanislaw Ulam had attempted such using shaped charges driving a metal foil infused with deuterium, although without success.
The first attempts to build a practical fusion machine took place in the United Kingdom, where George Paget Thomson had selected the pinch effect as a promising technique in 1945. After several failed attempts to gain funding, he gave up and asked two graduate students, Stanley (Stan) W. Cousins and Alan Alfred Ware (1924–2010), to build a device out of surplus radar equipment. This was successfully operated in 1948, but showed no clear evidence of fusion and failed to gain the interest of the Atomic Energy Research Establishment.
Lavrentiev's letter
In 1950, Oleg Lavrentiev, then a Red Army sergeant stationed on Sakhalin, wrote a letter to the Central Committee of the Communist Party of the Soviet Union. The letter outlined the idea of using an atomic bomb to ignite a fusion fuel, and then went on to describe a system that used electrostatic fields to contain a hot plasma in a steady state for energy production.
The letter was sent to Andrei Sakharov for comment. Sakharov noted that "the author formulates a very important and not necessarily hopeless problem", but found fault with the arrangement: the plasma would hit the electrode wires, which would require "wide meshes and a thin current-carrying part which will have to reflect almost all incident nuclei back into the reactor. In all likelihood, this requirement is incompatible with the mechanical strength of the device."
Some indication of the importance given to Lavrentiev's letter can be seen in the speed with which it was processed; the letter was received by the Central Committee on 29 July, Sakharov sent his review in on 18 August, by October, Sakharov and Igor Tamm had completed the first detailed study of a fusion reactor, and they had asked for funding to build it in January 1951.
Magnetic confinement
When heated to fusion temperatures, the electrons in atoms dissociate, resulting in a fluid of nuclei and electrons known as plasma. Unlike electrically neutral atoms, a plasma is electrically conductive, and can, therefore, be manipulated by electrical or magnetic fields.
Sakharov's concern about the electrodes led him to consider using magnetic confinement instead of electrostatic. In the case of a magnetic field, the particles will circle around the lines of force. As the particles are moving at high speed, their resulting paths look like a helix. If one arranges a magnetic field so lines of force are parallel and close together, the particles orbiting adjacent lines may collide, and fuse.
Such a field can be created in a solenoid, a cylinder with magnets wrapped around the outside. The combined fields of the magnets create a set of parallel magnetic lines running down the length of the cylinder. This arrangement prevents the particles from moving sideways to the wall of the cylinder, but it does not prevent them from running out the end. The obvious solution to this problem is to bend the cylinder around into a donut shape, or torus, so that the lines form a series of continual rings. In this arrangement, the particles circle endlessly.
Sakharov discussed the concept with Igor Tamm, and by the end of October 1950 the two had written a proposal and sent it to Igor Kurchatov, the director of the atomic bomb project within the USSR, and his deputy, Igor Golovin. However, this initial proposal ignored a fundamental problem; when arranged along a straight solenoid, the external magnets are evenly spaced, but when bent around into a torus, they are closer together on the inside of the ring than the outside. This leads to uneven forces that cause the particles to drift away from their magnetic lines.
During visits to the Laboratory of Measuring Instruments of the USSR Academy of Sciences (LIPAN), the Soviet nuclear research centre, Sakharov suggested two possible solutions to this problem. One was to suspend a current-carrying ring in the centre of the torus. The current in the ring would produce a magnetic field that would mix with the one from the magnets on the outside. The resulting field would be twisted into a helix, so that any given particle would find itself repeatedly on the outside, then inside, of the torus. The drifts caused by the uneven fields are in opposite directions on the inside and outside, so over the course of multiple orbits around the long axis of the torus, the opposite drifts would cancel out. Alternately, he suggested using an external magnet to induce a current in the plasma itself, instead of a separate metal ring, which would have the same effect.
In January 1951, Kurchatov arranged a meeting at LIPAN to consider Sakharov's concepts. They found widespread interest and support, and in February a report on the topic was forwarded to Lavrentiy Beria, who oversaw the atomic efforts in the USSR. For a time, nothing was heard back.
Richter and the birth of fusion research
On 25 March 1951, Argentine President Juan Perón announced that a former German scientist, Ronald Richter, had succeeded in producing fusion at a laboratory scale as part of what is now known as the Huemul Project. Scientists around the world were excited by the announcement, but soon concluded it was not true; simple calculations showed that his experimental setup could not produce enough energy to heat the fusion fuel to the needed temperatures.
Although dismissed by nuclear researchers, the widespread news coverage meant politicians were suddenly aware of, and receptive to, fusion research. In the UK, Thomson was suddenly granted considerable funding. Over the next months, two projects based on the pinch system were up and running. In the US, Lyman Spitzer read the Huemul story, realized it was false, and set about designing a machine that would work. In May he was awarded $50,000 to begin research on his stellarator concept. Jim Tuck had returned to the UK briefly and saw Thomson's pinch machines. When he returned to Los Alamos he also received $50,000 directly from the Los Alamos budget.
Similar events occurred in the USSR. In mid-April, Dmitri Efremov of the Scientific Research Institute of Electrophysical Apparatus stormed into Kurchatov's study with a magazine containing a story about Richter's work, demanding to know why they were beaten by the Argentines. Kurchatov immediately contacted Beria with a proposal to set up a separate fusion research laboratory with Lev Artsimovich as director. Only days later, on 5 May, the proposal had been signed by Joseph Stalin.
New ideas
By October, Sakharov and Tamm had completed a much more detailed consideration of their original proposal, specifying the major radius (of the torus as a whole) and the minor radius (the interior of the cylinder) of the device. The proposal suggested the system could produce tritium daily, or breed U233 daily.
As the idea was further developed, it was realized that a current in the plasma could create a field that was strong enough to confine the plasma as well, removing the need for the external coils. At this point, the Soviet researchers had re-invented the pinch system being developed in the UK, although they had come to this design from a very different starting point.
Once the idea of using the pinch effect for confinement had been proposed, a much simpler solution became evident. Instead of a large toroid, one could simply induce the current into a linear tube, which could cause the plasma within to collapse down into a filament. This had a huge advantage: while the current in the plasma would heat it through normal resistive heating, that alone would not bring it to fusion temperatures; but as the plasma collapsed, the adiabatic process would raise the temperature dramatically, more than enough for fusion. With this development, only Golovin and Natan Yavlinsky continued considering the more static toroidal arrangement.
Instability
On 4 July 1952, Nikolai Filippov's group measured neutrons being released from a linear pinch machine. Lev Artsimovich demanded that they check everything before concluding fusion had occurred, and during these checks, they found that the neutrons were not from fusion at all. This same linear arrangement had also occurred to researchers in the UK and US, and their machines showed the same behaviour. But the great secrecy surrounding the type of research meant that none of the groups were aware that others were also working on it, let alone having the identical problem.
After much study, it was found that some of the released neutrons were produced by instabilities in the plasma. There were two common types of instability, the sausage that was seen primarily in linear machines, and the kink which was most common in the toroidal machines. Groups in all three countries began studying the formation of these instabilities and potential ways to address them. Important contributions to the field were made by Martin David Kruskal and Martin Schwarzschild in the US, and Shafranov in the USSR.
One idea that came from these studies became known as the "stabilized pinch". This concept added additional coils to the outside of the chamber, which created a magnetic field that would be present in the plasma before the pinch discharge. In most concepts, the externally induced field was relatively weak, and because a plasma is diamagnetic, it penetrated only the outer areas of the plasma. When the pinch discharge occurred and the plasma quickly contracted, this field became "frozen in" to the resulting filament, creating a strong field in its outer layers. In the US, this was known as "giving the plasma a backbone".
Sakharov revisited his original toroidal concepts and came to a slightly different conclusion about how to stabilize the plasma. The layout would be the same as the stabilized pinch concept, but the role of the two fields would be reversed. Instead of weak externally induced magnetic fields providing stabilization and a strong pinch current responsible for confinement, in the new layout, the external field would be much more powerful in order to provide the majority of confinement, while the current would be much smaller and responsible for the stabilizing effect.
Steps toward declassification
In 1955, with the linear approaches still subject to instability, the first toroidal device was built in the USSR. TMP was a classic pinch machine, similar to models in the UK and US of the same era. The vacuum chamber was made of ceramic, and the spectra of the discharges showed silica, meaning the plasma was not perfectly confined by the magnetic field and was hitting the walls of the chamber. Two smaller machines followed, using copper shells. The conductive shells were intended to help stabilize the plasma, but were not completely successful in any of the machines that tried it.
With progress apparently stalled, in 1955, Kurchatov called an All Union conference of Soviet researchers with the ultimate aim of opening up fusion research within the USSR. In April 1956, Kurchatov travelled to the UK as part of a widely publicized visit by Nikita Khrushchev and Nikolai Bulganin. He offered to give a talk at Atomic Energy Research Establishment, at the former RAF Harwell, where he shocked the hosts by presenting a detailed historical overview of the Soviet fusion efforts. He took time to note, in particular, the neutrons seen in early machines and warned that neutrons did not mean fusion.
Unknown to Kurchatov, the British ZETA stabilized pinch machine was being built at the far end of the former runway. ZETA was, by far, the largest and most powerful fusion machine to date. Supported by experiments on earlier designs that had been modified to include stabilization, ZETA intended to produce low levels of fusion reactions. This was apparently a great success, and in January 1958, they announced the fusion had been achieved in ZETA based on the release of neutrons and measurements of the plasma temperature.
Vitaly Shafranov and Stanislav Braginskii examined the news reports and attempted to figure out how it worked. One possibility they considered was the use of weak "frozen in" fields, but rejected this, believing the fields would not last long enough. They then concluded ZETA was essentially identical to the devices they had been studying, with strong external fields.
First tokamaks
By this time, Soviet researchers had decided to build a larger toroidal machine along the lines suggested by Sakharov. In particular, their design considered one important point found in Kruskal's and Shafranov's works; if the helical path of the particles made them circulate around the long axis of the torus more rapidly than around the plasma's circumference, the kink instability would be strongly suppressed.
(To be clear: electrical current in coils wrapping around the torus produces a toroidal magnetic field inside the torus; a pulsed magnetic field through the hole in the torus induces the axial current in the torus, which has a poloidal magnetic field surrounding it; there may also be rings of current above and below the torus that create additional poloidal magnetic field. The combined magnetic fields form a helical magnetic structure inside the torus.)
Today this basic concept is known as the safety factor. The ratio of the number of times the particle orbits the major axis compared to the minor axis is denoted q, and the Kruskal-Shafranov Limit stated that the kink will be suppressed as long as q > 1. This path is controlled by the relative strengths of the externally induced magnetic field compared to the field created by the internal current. To have q > 1, the external magnets must be much more powerful, or alternatively, the internal current has to be reduced.
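For a large-aspect-ratio machine with a roughly circular plasma, the safety factor can be approximated as q ≈ (r/R)(Bt/Bp): the toroidal field over the poloidal field, weighted by the geometry. An illustrative Python sketch of this textbook approximation, with made-up but plausible numbers (not a description of any particular machine):

    import math

    MU0 = 4e-7 * math.pi   # vacuum permeability (SI)

    def safety_factor(r, R, B_t, I_p):
        # Approximate edge q for a large-aspect-ratio circular plasma.
        # r, R: minor/major radius (m); B_t: toroidal field (T); I_p: plasma
        # current (A). The poloidal field at radius r follows from Ampere's law.
        B_p = MU0 * I_p / (2 * math.pi * r)
        return (r / R) * (B_t / B_p)

    # Illustrative values: r = 0.25 m, R = 1.0 m, 2.5 T, 100 kA
    print(safety_factor(0.25, 1.0, 2.5, 1e5))  # ~7.8, comfortably above 1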
Following this criterion, design began on a new reactor, T-1, which today is known as the first real tokamak. T-1 used both stronger external magnetic fields and a reduced plasma current compared to stabilized pinch machines like ZETA, and its success led to its recognition as the first working tokamak.
For his work on "powerful impulse discharges in a gas, to obtain unusually high temperatures needed for thermonuclear processes", Yavlinskii was awarded the Lenin Prize and the Stalin Prize in 1958. Yavlinskii was already preparing the design of an even larger model, later built as T-3. With the apparently successful ZETA announcement, Yavlinskii's concept was viewed very favourably.
Details of ZETA became public in a series of articles in Nature later in January. To Shafranov's surprise, the system did use the "frozen in" field concept. He remained sceptical, but a team at the Ioffe Institute in St. Petersburg began plans to build a similar machine known as Alpha. Only a few months later, in May, the ZETA team issued a release stating they had not achieved fusion, and that they had been misled by erroneous measurements of the plasma temperature.
T-1 began operation at the end of 1958. It demonstrated very high energy losses through radiation. This was traced to impurities in the plasma due to the vacuum system causing outgassing from the container materials. In order to explore solutions to this problem, another small device was constructed, T-2. This used an internal liner of corrugated metal that was baked to drive off trapped gases.
Atoms for Peace and the doldrums
As part of the second Atoms for Peace meeting in Geneva in September 1958, the Soviet delegation released many papers covering their fusion research. Among them was a set of initial results on their toroidal machines, which at that point had shown nothing of note.
The "star" of the show was a large model of Spitzer's stellarator, which immediately caught the attention of the Soviets. In contrast to their designs, the stellarator produced the required twisted paths in the plasma without driving a current through it, using a series of external coils (producing internal magnetic fields) that could operate in the steady state rather than the pulses of the induction system that produced the axial current. Kurchatov began asking Yavlinskii to change their T-3 design to a stellarator, but they convinced him that the current provided a useful second role in heating, something the stellarator lacked.
At the time of the show, the stellarator had suffered a long string of minor problems that were just being solved. Solving these revealed that the diffusion rate of the plasma was much faster than theory predicted. Similar problems were seen in all the contemporary designs, for one reason or another. The stellarator, various pinch concepts and the magnetic mirror machines in both the US and USSR all demonstrated problems that limited their confinement times.
From the first studies of controlled fusion, there was a problem lurking in the background. During the Manhattan Project, David Bohm had been part of the team working on isotopic separation of uranium. In the post-war era he continued working with plasmas in magnetic fields. Using basic theory, one would expect the plasma to diffuse across the lines of force at a rate inversely proportional to the square of the strength of the field, meaning that small increases in field strength would greatly improve confinement. But based on their experiments, Bohm developed an empirical formula, now known as Bohm diffusion, which suggested the rate varied inversely with the field strength itself, not with its square.
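The practical difference between the two scalings is easy to see numerically. The sketch below is illustrative only: it uses the standard Bohm expression D = kT/(16eB) and an arbitrary normalization for the classical 1/B² law, just to show how much less doubling the field helps under Bohm scaling:

    def bohm(T_eV, B):
        # Bohm diffusion, m^2/s: D = kT/(16 e B) = T_eV / (16 B) for T in eV
        return T_eV / (16.0 * B)

    def classical_like(B, D0=1.0):
        # Classical-style scaling D = D0 / B**2; D0 is an arbitrary
        # illustrative normalization, not a physical constant
        return D0 / B**2

    for B in (1.0, 2.0, 4.0):   # tesla
        # doubling B quarters the classical rate but only halves Bohm's
        print(B, bohm(100.0, B), classical_like(B))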
If Bohm's formula was correct, there was no hope one could build a fusion reactor based on magnetic confinement. To confine the plasma at the temperatures needed for fusion, the magnetic field would have to be orders of magnitude greater than any known magnet. Spitzer ascribed the difference between the Bohm and classical diffusion rates to turbulence in the plasma, and believed the steady fields of the stellarator would not suffer from this problem. Various experiments at that time suggested the Bohm rate did not apply, and that the classical formula was correct.
But by the early 1960s, with all of the various designs leaking plasma at a prodigious rate, Spitzer himself concluded that the Bohm scaling was an inherent quality of plasmas, and that magnetic confinement would not work. The entire field descended into what became known as "the doldrums", a period of intense pessimism.
Progress in the 1960s
In contrast to the other designs, the experimental tokamaks appeared to be progressing well, so well that a minor theoretical problem was now a real concern. In the presence of gravity, there is a small pressure gradient in the plasma, formerly small enough to ignore but now becoming something that had to be addressed. This led to the addition of yet another set of coils in 1962, which produced a vertical magnetic field that offset these effects. These were a success, and by the mid-1960s the machines began to show signs that they were beating the Bohm limit.
At the 1965 Second International Atomic Energy Agency Conference on fusion at the UK's newly opened Culham Centre for Fusion Energy, Artsimovich reported that their systems were surpassing the Bohm limit by 10 times. Spitzer, reviewing the presentations, suggested that the Bohm limit may still apply; the results were within the range of experimental error of results seen on the stellarators, and the temperature measurements, based on the magnetic fields, were simply not trustworthy.
The next major international fusion meeting was held in August 1968 in Novosibirsk. By this time two additional tokamak designs had been completed, TM-2 in 1965, and T-4 in 1968. Results from T-3 had continued to improve, and similar results were coming from early tests of the new reactors. At the meeting, the Soviet delegation announced that T-3 was producing electron temperatures of 1000 eV (equivalent to 10 million degrees Celsius) and that confinement time was at least 50 times the Bohm limit.
These results were at least 10 times that of any other machine. If correct, they represented an enormous leap for the fusion community. Spitzer remained skeptical, noting that the temperature measurements were still based on the indirect calculations from the magnetic properties of the plasma. Many concluded they were due to an effect known as runaway electrons, and that the Soviets were measuring only those extremely energetic electrons and not the bulk temperature. The Soviets countered with several arguments suggesting the temperature they were measuring was Maxwellian, and the debate raged.
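For readers tracking the units, the figures above rest on the standard conversion between electronvolts and kelvins, T[K] = T[eV]·e/k_B, or roughly 11,600 K per eV; a minimal sketch:

```python
# Standard conversion between plasma temperature expressed in
# electronvolts and in kelvins: T[K] = T[eV] * e / k_B.

E_CHARGE = 1.602e-19  # electron charge, J per eV
K_B = 1.381e-23       # Boltzmann constant, J per K

def ev_to_kelvin(t_ev):
    return t_ev * E_CHARGE / K_B

print(f"1000 eV ~ {ev_to_kelvin(1000):.3g} K")  # ~1.16e7 K, of order 10 million C
```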
Culham Five
In the aftermath of ZETA, the UK teams began the development of new plasma diagnostic tools to provide more accurate measurements. Among these was the use of a laser to directly measure the temperature of the bulk electrons using Thomson scattering. This technique was well known and respected in the fusion community; Artsimovich had publicly called it "brilliant". Artsimovich invited Bas Pease, the head of Culham, to use their devices on the Soviet reactors. At the height of the Cold War, in what is still considered a major political manoeuvre on Artsimovich's part, British physicists were allowed to visit the Kurchatov Institute, the heart of the Soviet nuclear bomb effort.
The British team, nicknamed "The Culham Five", arrived late in 1968. After a lengthy installation and calibration process, the team measured the temperatures over a period of many experimental runs. Initial results were available by August 1969: the Soviets were correct, and their results were accurate. The team phoned the results home to Culham, who then passed them along in a confidential phone call to Washington. The final results were published in Nature in November 1969. The announcement set off what has been described as a "veritable stampede" of tokamak construction around the world.
One serious problem remained. Because the electrical current in the plasma was much lower and produced much less compression than in a pinch machine, the temperature of the plasma was limited to the resistive heating rate of the current. First proposed in 1950, Spitzer resistivity states that the electrical resistance of a plasma falls as the temperature increases, meaning the heating rate of the plasma would slow as the devices improved and temperatures were pressed higher. Calculations demonstrated that the resulting maximum temperatures, while staying within q > 1, would be limited to the low millions of degrees. Artsimovich had been quick to point this out in Novosibirsk, stating that future progress would require new heating methods to be developed.
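A minimal sketch of why ohmic heating self-limits, assuming only the Spitzer scaling (resistivity falling as T^(-3/2)) described above; all constants are illustrative:

```python
# Why ohmic heating self-limits: Spitzer resistivity falls as the
# plasma gets hotter (eta ~ T^(-3/2)), so P = I^2 * R drops with it.
# The reference values are arbitrary; only the scaling matters.

def spitzer_resistivity(t_e_ev, eta_ref=1.0, t_ref_ev=100.0):
    """Resistivity relative to a reference temperature."""
    return eta_ref * (t_ref_ev / t_e_ev) ** 1.5

def ohmic_power(current, t_e_ev):
    """Ohmic heating power for a fixed plasma current."""
    return current**2 * spitzer_resistivity(t_e_ev)

for t in (100, 1000, 10000):
    print(f"T_e = {t:6d} eV: relative heating power = {ohmic_power(1.0, t):.4f}")
```

Each tenfold rise in temperature cuts the heating power by a factor of about 30, so the plasma converges on a temperature ceiling rather than running away to fusion conditions.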
US turmoil
One of the people attending the Novosibirsk meeting in 1968 was Amasa Stone Bishop, one of the leaders of the US fusion program. One of the few other devices to show clear evidence of beating the Bohm limit at that time was the multipole concept. Both Lawrence Livermore and the Princeton Plasma Physics Laboratory (PPPL), home of Spitzer's stellarator, were building variations on the multipole design. While moderately successful on their own, T-3 greatly outperformed either machine. Bishop was concerned that the multipoles were redundant and thought the US should consider a tokamak of its own.
When he raised the issue at a December 1968 meeting, directors of the labs refused to consider it. Melvin B. Gottlieb of Princeton was exasperated, asking "Do you think that this committee can out-think the scientists?" With the major labs demanding they control their own research, one lab found itself left out. Oak Ridge had originally entered the fusion field with studies for reactor fueling systems, but branched out into a mirror program of their own. By the mid-1960s, their DCX designs were running out of ideas, offering nothing that the similar program at the more prestigious and politically powerful Livermore did not. This made them highly receptive to new concepts.
After a considerable internal debate, Herman Postma formed a small group in early 1969 to consider the tokamak. They came up with a new design, later christened Ormak, that had several novel features. Primary among them was the way the external field was created in a single large copper block, fed power from a large transformer below the torus, in contrast to traditional designs that used magnet windings on the outside. They felt the single block would produce a much more uniform field. It would also have the advantage of allowing the torus to have a smaller major radius, since no cables needed to be routed through the donut hole, leading to a lower aspect ratio, which the Soviets had already suggested would produce better results.
Tokamak race in the US
In early 1969, Artsimovich visited MIT, where he was hounded by those interested in fusion. He finally agreed to give several lectures in April and then allowed lengthy question-and-answer sessions. As these went on, MIT itself grew interested in the tokamak, having previously stayed out of the fusion field for a variety of reasons. Bruno Coppi was at MIT at the time, and following the same concepts as Postma's team, came up with his own low-aspect-ratio concept, Alcator. Instead of Ormak's toroidal transformer, Alcator used traditional ring-shaped magnetic field coils but required them to be much smaller than existing designs. MIT's Francis Bitter Magnet Laboratory was the world leader in magnet design and they were confident they could build them.
During 1969, two additional groups entered the field. At General Atomics, Tihiro Ohkawa had been developing multipole reactors, and submitted a concept based on these ideas. This was a tokamak that would have a non-circular plasma cross-section; the same math that suggested a lower aspect ratio would improve performance also suggested that a C- or D-shaped plasma would do the same. He called the new design Doublet. Meanwhile, a group at the University of Texas at Austin was proposing a relatively simple tokamak to explore heating the plasma through deliberately induced turbulence, the Texas Turbulent Tokamak.
When the members of the Atomic Energy Commission's Fusion Steering Committee met again in June 1969, they had "tokamak proposals coming out of our ears". The only major lab working on a toroidal design that was not proposing a tokamak was Princeton, who refused to consider it in spite of their Model C stellarator being just about perfect for such a conversion. They continued to offer a long list of reasons why the Model C should not be converted. When these were questioned, a furious debate broke out about whether the Soviet results were reliable.
Watching the debate take place, Gottlieb had a change of heart. There was no point moving forward with the tokamak if the Soviet electron temperature measurements were not accurate, so he formulated a plan to either prove or disprove their results. While swimming in the pool during the lunch break, he told Harold Furth his plan, to which Furth replied: "well, maybe you're right." After lunch, the various teams presented their designs, at which point Gottlieb presented his idea for a "stellarator-tokamak" based on the Model C.
The Standing Committee noted that this system could be complete in six months, while Ormak would take a year. It was only a short time later that the confidential results from the Culham Five were released. When they met again in October, the Standing Committee released funding for all of these proposals. The Model C's new configuration, soon named the Symmetric Tokamak, was intended simply to verify the Soviet results, while the others would explore ways to go well beyond T-3.
Heating: US takes the lead
Experiments on the Symmetric Tokamak began in May 1970, and by early the next year they had confirmed the Soviet results and then surpassed them. The stellarator was abandoned, and PPPL turned its considerable expertise to the problem of heating the plasma. Two concepts seemed to hold promise. PPPL proposed using magnetic compression, a pinch-like technique to compress a warm plasma to raise its temperature, but providing that compression through magnets rather than current. Oak Ridge suggested neutral beam injection, small particle accelerators that would shoot fuel atoms through the surrounding magnetic field where they would collide with the plasma and heat it.
PPPL's Adiabatic Toroidal Compressor (ATC) began operation in May 1972, followed shortly thereafter by a neutral-beam equipped Ormak. Both demonstrated significant problems, but PPPL leapt past Oak Ridge by fitting beam injectors to ATC and provided clear evidence of successful heating in 1973. This success "scooped" Oak Ridge, who fell from favour within the Washington Steering Committee.
By this time a much larger design based on beam heating was under construction, the Princeton Large Torus, or PLT. PLT was designed specifically to "give a clear indication whether the tokamak concept plus auxiliary heating can form a basis for a future fusion reactor". PLT was an enormous success, continually raising its internal temperature until it hit 60 million Celsius (8,000 eV, eight times T-3's record) in 1978. This is a key point in the development of the tokamak; fusion reactions become self-sustaining at temperatures between 50 and 100 million Celsius, and PLT demonstrated that such temperatures were technically achievable.
These experiments, especially PLT, put the US far in the lead in tokamak research. This is due largely to budget; a tokamak cost about $500,000 and the US annual fusion budget was around $25 million at that time. They could afford to explore all of the promising methods of heating, ultimately discovering neutral beams to be among the most effective.
During this period, Robert Hirsch took over the Directorate of fusion development in the U.S. Atomic Energy Commission. Hirsch felt that the program could not be sustained at its current funding levels without demonstrating tangible results. He began to reformulate the entire program. What had once been a lab-led effort of mostly scientific exploration was now a Washington-led effort to build a working power-producing reactor. This was given a boost by the 1973 oil crisis, which led to greatly increased research into alternative energy systems.
1980s: great hope, great disappointment
By the late 1970s, tokamaks had reached all the conditions needed for a practical fusion reactor: in 1978 PLT had demonstrated ignition temperatures; the next year the Soviet T-7 successfully used superconducting magnets for the first time; and Doublet proved to be a success, leading almost all future designs to adopt this "shaped plasma" approach. It appeared that all that was needed to build a power-producing reactor was to put these design concepts into a single machine, one that would be capable of running with the radioactive tritium in its fuel mix.
During the 1970s, four major second-generation proposals were funded worldwide. The Soviets continued their development lineage with the T-15, while a pan-European effort was developing the Joint European Torus (JET) and Japan began the JT-60 effort (originally known as the "Breakeven Plasma Test Facility"). In the US, Hirsch began formulating plans for a similar design, skipping over proposals for another stepping-stone design directly to a tritium-burning one. This emerged as the Tokamak Fusion Test Reactor (TFTR), run directly from Washington and not linked to any specific lab. Originally favouring Oak Ridge as the host, Hirsch moved it to PPPL after others convinced him they would work the hardest on it because they had the most to lose.
The excitement was so widespread that several ventures to produce commercial tokamaks began around this time. In the best known of these, in 1978, Bob Guccione, publisher of Penthouse Magazine, met Robert Bussard and became the world's biggest and most committed private investor in fusion technology, ultimately putting $20 million of his own money into Bussard's Compact Tokamak. Funding by the Riggs Bank led to this effort being known as the Riggatron.
TFTR won the construction race and began operation in 1982, followed shortly by JET in 1983 and JT-60 in 1985. JET quickly took the lead in critical experiments, moving from test gases to deuterium and increasingly powerful "shots". But it soon became clear that none of the new systems were working as expected. A host of new instabilities appeared, along with a number of more practical problems that continued to interfere with their performance. On top of this, dangerous "excursions" of the plasma hitting the walls of the reactor were evident in both TFTR and JET. Even when working perfectly, plasma confinement at fusion temperatures, measured by the so-called "fusion triple product", continued to be far below what would be needed for a practical reactor design.
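The fusion triple product is the product n·T·τ_E of density, temperature, and energy confinement time. A toy comparison against the commonly quoted D-T ignition threshold of roughly 3×10²¹ keV·s/m³ (an outside figure, not from the text) looks like this, with illustrative inputs:

```python
# Toy evaluation of the fusion triple product n * T * tau_E against a
# commonly quoted D-T ignition threshold (~3e21 keV*s/m^3, an assumed
# outside figure). All inputs are illustrative round numbers.

IGNITION_THRESHOLD = 3e21  # keV * s / m^3

def triple_product(density_m3, temperature_kev, confinement_s):
    return density_m3 * temperature_kev * confinement_s

ntt = triple_product(1e20, 10.0, 1.0)
verdict = "above" if ntt >= IGNITION_THRESHOLD else "below"
print(f"n*T*tau = {ntt:.2g} keV s/m^3 ({verdict} the ignition threshold)")
```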
Through the mid-1980s the reasons for many of these problems became clear, and various solutions were offered. However, these would significantly increase the size and complexity of the machines. A follow-on design incorporating these changes would be both enormous and vastly more expensive than either JET or TFTR. A new period of pessimism descended on the fusion field.
ITER
At the same time these experiments were demonstrating problems, much of the impetus for the US's massive funding disappeared: funding for advanced energy sources had been slashed in the early 1980s, and in 1986 Ronald Reagan declared the 1970s energy crisis over.
Work toward an international reactor design had been under way since June 1973 under the name INTOR, for INternational TOkamak Reactor. This was originally started through an agreement between Richard Nixon and Leonid Brezhnev, but had been moving slowly since its first real meeting on 23 November 1978.
During the Geneva Summit in November 1985, Reagan raised the issue with Mikhail Gorbachev and proposed reforming the organization. "... The two leaders emphasized the potential importance of the work aimed at utilizing controlled thermonuclear fusion for peaceful purposes and, in this connection, advocated the widest practicable development of international cooperation in obtaining this source of energy, which is essentially inexhaustible, for the benefit of all mankind."
The next year, an agreement was signed between the US, Soviet Union, European Union and Japan, creating the International Thermonuclear Experimental Reactor organization.
Design work began in 1988, and since that time the ITER reactor has been the primary tokamak design effort worldwide.
High-field tokamaks
It has long been known that stronger field magnets would enable high energy gain in a much smaller tokamak, with concepts such as FIRE, IGNITOR, and the Compact Ignition Tokamak (CIT) proposed decades ago.
The commercial availability of high temperature superconductors (HTS) in the 2010s opened a promising pathway to building the higher-field magnets required to achieve ITER-like levels of energy gain in a compact device. To leverage this new technology, the MIT Plasma Science and Fusion Center (PSFC) and MIT spinout Commonwealth Fusion Systems (CFS) successfully built and tested the Toroidal Field Model Coil (TFMC) in 2021, demonstrating the 20 tesla magnetic field needed to build SPARC, a device designed to achieve a fusion gain similar to ITER's but with only about 1/40th of ITER's plasma volume.
British startup Tokamak Energy also plans to build a net-energy tokamak using HTS magnets, but based on the spherical tokamak variant.
The joint EU/Japan JT-60SA reactor achieved first plasma on October 23, 2023, after a two-year delay caused by an electrical short.
Design
Basic problem
Positively charged ions and negatively charged electrons in a fusion plasma are at very high temperatures, and have correspondingly large velocities. In order to maintain the fusion process, particles from the hot plasma must be confined in the central region, or the plasma will rapidly cool. Magnetic confinement fusion devices exploit the fact that charged particles in a magnetic field experience a Lorentz force and follow helical paths along the field lines.
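As a concrete sketch of this confinement, the radius of the helix (the gyroradius) is r = m·v⊥/(q·B); the field strength and temperature below are illustrative round numbers, not values from the text:

```python
import math

# Gyroradius r = m * v_perp / (q * B) of a deuteron spiralling along a
# field line. The 100-million-kelvin temperature and 5 T field are
# illustrative assumptions.

M_DEUTERON = 3.34e-27  # kg
Q_E = 1.602e-19        # C
K_B = 1.381e-23        # J/K

def gyroradius(mass, charge, t_kelvin, b_tesla):
    v_thermal = math.sqrt(2 * K_B * t_kelvin / mass)  # thermal speed
    return mass * v_thermal / (charge * b_tesla)

r = gyroradius(M_DEUTERON, Q_E, 1.0e8, 5.0)
print(f"gyroradius ~ {r * 1000:.1f} mm")  # a few millimetres
```

A fuel ion is thus tied to a tube a few millimetres wide around its field line, even though it is moving at hundreds of kilometres per second.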
The simplest magnetic confinement system is a solenoid. A plasma in a solenoid will spiral about the lines of field running down its center, preventing motion towards the sides. However, this does not prevent motion towards the ends. The obvious solution is to bend the solenoid around into a circle, forming a torus. However, it was demonstrated that such an arrangement is not uniform; for purely geometric reasons, the field on the outside edge of the torus is lower than on the inside edge. This asymmetry causes the electrons and ions to drift across the field, and eventually hit the walls of the torus.
The solution is to shape the lines so they do not simply run around the torus, but twist around like the stripes on a barber pole or candy cane. In such a field any single particle will find itself at the outside edge where it will drift one way, say up, and then as it follows its magnetic line around the torus it will find itself on the inside edge, where it will drift the other way. This cancellation is not perfect, but calculations showed it was enough to allow the fuel to remain in the reactor for a useful time.
Tokamak solution
The first two solutions to making a design with the required twist were the stellarator, which did so through a mechanical arrangement by twisting the entire torus, and the z-pinch design, which ran an electrical current through the plasma to create a second magnetic field to the same end. Both demonstrated improved confinement times compared to a simple torus, but both also demonstrated a variety of effects that caused the plasma to be lost from the reactors at rates that were not sustainable.
The tokamak is essentially identical to the z-pinch concept in its physical layout. Its key innovation was the realization that the instabilities that were causing the pinch to lose its plasma could be controlled. The issue was how "twisty" the fields were; fields that caused the particles to transit inside and out more than once per orbit around the long axis of the torus were much more stable than devices that had less twist. This ratio of twists to orbits became known as the safety factor, denoted q. Previous devices had operated at q around 1⁄3, while the tokamak operates at q much greater than 1. This increases stability by orders of magnitude.
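For a circular, large-aspect-ratio plasma the safety factor is commonly estimated as q ≈ r·B_toroidal/(R·B_poloidal) (a standard approximation, not stated in the text); a minimal sketch with illustrative values:

```python
# Large-aspect-ratio estimate of the safety factor,
# q ~ (r * B_toroidal) / (R * B_poloidal). Inputs are illustrative.

def safety_factor(r_minor, r_major, b_toroidal, b_poloidal):
    return (r_minor * b_toroidal) / (r_major * b_poloidal)

# e.g. minor radius 1 m, major radius 3 m, B_t = 5 T, B_p = 0.5 T:
q = safety_factor(1.0, 3.0, 5.0, 0.5)
print(f"q ~ {q:.2f}")  # comfortably above 1, in the tokamak regime
```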
When the problem is considered even more closely, the need for a vertical (parallel to the axis of rotation) component of the magnetic field arises. The Lorentz force of the toroidal plasma current in the vertical field provides the inward force that holds the plasma torus in equilibrium.
Other issues
While the tokamak addresses the issue of plasma stability in a gross sense, plasmas are also subject to a number of dynamic instabilities. One of these, the kink instability, is strongly suppressed by the tokamak layout, a side-effect of the high safety factors of tokamaks. The lack of kinks allowed the tokamak to operate at much higher temperatures than previous machines, and this allowed a host of new phenomena to appear.
One of these, the banana orbits, is caused by the wide range of particle energies in a tokamak – much of the fuel is hot, but a certain percentage is much cooler. Due to the high twist of the fields in the tokamak, particles following their lines of force rapidly move between the inner and outer edges. As they move inward they are subject to increasing magnetic fields due to the smaller radius concentrating the field. The low-energy particles in the fuel will reflect off this increasing field and begin to travel backwards through the fuel, colliding with the higher energy nuclei and scattering them out of the plasma. This process causes fuel to be lost from the reactor, although it is slow enough that a practical reactor is still well within reach.
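The reflection described here is the magnetic mirror effect: with the magnetic moment conserved, a particle reflects where the field has grown by the factor v²/v⊥², so only particles with enough parallel velocity pass through the strong-field inner side. A small sketch of that trapping condition, with illustrative field values:

```python
# Mirror-trapping condition behind banana orbits: a particle is trapped
# if its pitch v_perp^2 / v^2 exceeds B_min / B_max. Field values are
# illustrative.

def is_trapped(v_parallel, v_perp, b_min, b_max):
    pitch = v_perp**2 / (v_parallel**2 + v_perp**2)
    return pitch > b_min / b_max

# Fast along the field line -> passes; slow along it -> trapped:
print(is_trapped(v_parallel=1.0, v_perp=0.3, b_min=4.0, b_max=6.0))  # False
print(is_trapped(v_parallel=0.3, v_perp=1.0, b_min=4.0, b_max=6.0))  # True
```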
Another instability is the tearing instability. In 2024 researchers used reinforcement learning with a multimodal dynamic model to measure and forecast such instabilities based on signals from multiple diagnostics and actuators at 25-millisecond intervals. The forecast was used to reduce tearing instabilities in the DIII-D tokamak in the US. The reward function balanced the conflicting objectives of maximum plasma pressure and instability risk. In particular, the plasma actively tracked the stable path while maintaining H-mode performance.
Breakeven, Q, and ignition
One of the first goals for any controlled fusion device is to reach breakeven, the point where the energy being released by the fusion reactions is equal to the amount of energy being used to maintain the reaction. The ratio of output to input energy is denoted Q, and breakeven corresponds to a Q of 1. A Q of more than one is needed for the reactor to generate net energy, but for practical reasons, it is desirable for it to be much higher.
Once breakeven is reached, further improvements in confinement generally lead to a rapidly increasing Q. That is because some of the energy being given off by the fusion reactions of the most common fusion fuel, a 50-50 mix of deuterium and tritium, is in the form of alpha particles. These can collide with the fuel nuclei in the plasma and heat it, reducing the amount of external heat needed. At some point, known as ignition, this internal self-heating is enough to keep the reaction going without any external heating, corresponding to an infinite Q.
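A toy steady-state power balance makes the arithmetic concrete. In D-T fusion the alphas carry about one fifth of the fusion power (3.5 MeV of the 17.6 MeV released per reaction); the 100 MW loss power below is an arbitrary illustrative figure:

```python
# Toy power balance: external heating must supply whatever heat loss
# the absorbed alpha power cannot. Q = P_fusion / P_external; when
# alpha heating alone covers the losses, Q diverges (ignition).

ALPHA_FRACTION = 3.5 / 17.6  # alpha share of D-T fusion power
P_LOSS = 100.0               # MW of heat leaving the plasma (illustrative)

for p_fusion in (0.0, 100.0, 300.0, 500.0, 510.0):
    p_external = P_LOSS - ALPHA_FRACTION * p_fusion
    if p_external <= 0:
        print(f"P_fus = {p_fusion:5.0f} MW: ignition (Q -> infinity)")
    else:
        print(f"P_fus = {p_fusion:5.0f} MW: Q = {p_fusion / p_external:.1f}")
```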
In the case of the tokamak, this self-heating process is maximized if the alpha particles remain in the fuel long enough to guarantee they will collide with the fuel. As the alphas are electrically charged, they are subject to the same fields that are confining the fuel plasma. The amount of time they spend in the fuel can be maximized by ensuring their orbit in the field remains within the plasma. It can be demonstrated that this occurs when the electrical current in the plasma is about 3 MA.
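One way to see why this is plausible is to compare the orbit size of a fusion alpha with the plasma itself. A D-T alpha is born at about 3.5 MeV (a standard figure, not from the text); in an illustrative 5 T field its gyroradius is only a few centimetres:

```python
import math

# Gyroradius of a freshly born 3.5 MeV fusion alpha (charge +2e) in an
# illustrative 5 T field: r = p / (q * B), small compared with the
# metre-scale plasma, so well-confined orbits are geometrically possible.

M_ALPHA = 6.64e-27           # kg
Q_ALPHA = 2 * 1.602e-19      # C
E_ALPHA = 3.5e6 * 1.602e-19  # J

momentum = math.sqrt(2 * M_ALPHA * E_ALPHA)
r = momentum / (Q_ALPHA * 5.0)
print(f"alpha gyroradius ~ {r * 100:.1f} cm")
```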
Advanced tokamaks
In the early 1970s, studies at Princeton into the use of high-power superconducting magnets in future tokamak designs examined the layout of the magnets. They noticed that the arrangement of the main toroidal coils meant that there was significantly more tension between the magnets on the inside of the curvature where they were closer together. Considering this, they noted that the tensional forces within the magnets would be evened out if they were shaped like a D, rather than an O. This became known as the "Princeton D-coil".
This was not the first time this sort of arrangement had been considered, although for entirely different reasons. The safety factor varies across the axis of the machine; for purely geometrical reasons, it is always smaller at the inside edge of the plasma closest to the machine's center because the long axis is shorter there. That means that a machine with an average q = 2 might still have q less than 1 in certain areas. In the 1970s, it was suggested that one way to counteract this and produce a design with a higher average q would be to shape the magnetic fields so that the plasma only filled the outer half of the torus, shaped like a D or C when viewed end-on, instead of the normal circular cross-section.
One of the first machines to incorporate a D-shaped plasma was the JET, which began its design work in 1973. This decision was made for both theoretical and practical reasons: because the force is larger on the inside edge of the torus, there is a large net force pressing inward on the entire reactor, and the D-shape reduced that net force while also making the supported inside edge flatter and easier to support. Computer code exploring the general layout showed that a non-circular plasma would slowly drift vertically, which led to the addition of an active feedback system to hold it in the center. Once JET had selected this layout, the General Atomics Doublet III team redesigned that machine into the DIII-D with a D-shaped cross-section, and it was selected for the Japanese JT-60 design as well. This layout has been largely universal since then.
One problem seen in all fusion reactors is that the presence of heavier elements causes energy to be lost at an increased rate, cooling the plasma. During the very earliest development of fusion power, a solution to this problem was found, the divertor, essentially a large mass spectrometer that would cause the heavier elements to be flung out of the reactor. This was initially part of the stellarator designs, where it is easy to integrate into the magnetic windings. However, designing a divertor for a tokamak proved to be a very difficult design problem.
Another problem seen in all fusion designs is the heat load that the plasma places on the wall of the confinement vessel. There are materials that can handle this load, but they are generally undesirable and expensive heavy metals. When such materials are sputtered in collisions with hot ions, their atoms mix with the fuel and rapidly cool it. A solution used on most tokamak designs is the limiter, a small ring of light metal that projects into the chamber so that the plasma hits it before hitting the walls. This erodes the limiter and causes its atoms to mix with the fuel, but these lighter materials cause less disruption than the wall materials.
When reactors moved to the D-shaped plasmas it was quickly noted that the escaping particle flux of the plasma could be shaped as well. Over time, this led to the idea of using the fields to create an internal divertor that flings the heavier elements out of the fuel, typically towards the bottom of the reactor. There, a pool of liquid lithium metal is used as a sort of limiter; the particles hit it and are rapidly cooled, remaining in the lithium. This internal pool is much easier to cool, due to its location, and although some lithium atoms are released into the plasma, its very low mass makes it a much smaller problem than even the lightest metals used previously.
As machines began to explore this newly shaped plasma, they noticed that certain arrangements of the fields and plasma parameters would sometimes enter what is now known as the high-confinement mode, or H-mode, which operated stably at higher temperatures and pressures. Operating in the H-mode, which can also be seen in stellarators, is now a major design goal of the tokamak design.
Finally, it was noted that when the plasma had a non-uniform density it would give rise to internal electrical currents. This is known as the bootstrap current. This allows a properly designed reactor to generate some of the internal current needed to twist the magnetic field lines without having to supply it from an external source. This has a number of advantages, and modern designs all attempt to generate as much of their total current through the bootstrap process as possible.
By the early 1990s, the combination of these features and others collectively gave rise to the "advanced tokamak" concept. This forms the basis of modern research, including ITER.
Plasma disruptions
Tokamaks are subject to events known as "disruptions" that cause confinement to be lost in milliseconds. There are two primary mechanisms. In one, the "vertical displacement event" (VDE), the entire plasma moves vertically until it touches the upper or lower section of the vacuum chamber. In the other, the "major disruption", long wavelength, non-axisymmetric magnetohydrodynamical instabilities cause the plasma to be forced into non-symmetrical shapes, often squeezed into the top and bottom of the chamber.
When the plasma touches the vessel walls it undergoes rapid cooling, or "thermal quenching". In the major disruption case, this is normally accompanied by a brief increase in plasma current as the plasma concentrates. Quenching ultimately causes the plasma confinement to break up. In the case of the major disruption the current drops again, the "current quench". In a VDE the initial increase in current is not seen, and the thermal and current quenches occur at the same time. In both cases, the thermal and electrical loads of the plasma are rapidly deposited on the reactor vessel, which has to be able to handle them. ITER is designed to handle 2600 of these events over its lifetime.
For modern high-energy devices, where plasma currents are on the order of 15 megaamperes in ITER, it is possible the brief increase in current during a major disruption will cross a critical threshold. This occurs when the current produces a force on the electrons that is higher than the frictional forces of the collisions between particles in the plasma. In this event, electrons can be rapidly accelerated to relativistic velocities, creating so-called "runaway electrons" in the relativistic runaway electron avalanche. These retain their energy even as the current quench is occurring on the bulk of the plasma.
When confinement finally breaks down, these runaway electrons follow the path of least resistance and impact the side of the reactor. These can reach 12 megaamps of current deposited in a small area, well beyond the capabilities of any mechanical solution. In one famous case, the Tokamak de Fontenay aux Roses had a major disruption where the runaway electrons burned a hole through the vacuum chamber.
The occurrence of major disruptions in running tokamaks has always been rather high, of the order of a few percent of the total number of shots. In currently operated tokamaks, the damage is often large but rarely dramatic. In the ITER tokamak, it is expected that the occurrence of a limited number of major disruptions would definitively damage the chamber, with no possibility of restoring the device. The development of systems to counter the effects of runaway electrons is considered a must-have piece of technology for ITER to reach its operational level.
A large amplitude of the central current density can also result in internal disruptions, or sawteeth, which do not generally result in termination of the discharge.
Densities over the Greenwald limit, a bound depending on the plasma current and the minor radius, typically lead to disruptions. The limit has been exceeded by factors of up to 10, but it remains an important concept describing the phenomenology of the transition of the plasma flow, which still needs to be understood.
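The Greenwald limit itself is the simple expression n_G = I_p/(π a²), with the plasma current in megaamperes, the minor radius in metres, and the density in units of 10²⁰ m⁻³. A minimal sketch, using the ITER-like round numbers of a 15 MA current (quoted earlier in this section) and an assumed 2 m minor radius:

```python
import math

# Greenwald density limit: n_G = I_p / (pi * a^2), with I_p in MA,
# a in metres, and n_G in units of 1e20 m^-3. The 2 m minor radius is
# an assumed ITER-like value.

def greenwald_density(i_p_ma, a_minor_m):
    return i_p_ma / (math.pi * a_minor_m**2)

print(f"n_G ~ {greenwald_density(15.0, 2.0):.2f} x 10^20 m^-3")
```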
Plasma heating
In an operating fusion reactor, part of the energy generated will serve to maintain the plasma temperature as fresh deuterium and tritium are introduced. However, in the startup of a reactor, either initially or after a temporary shutdown, the plasma will have to be heated to its operating temperature of greater than 10 keV (over 100 million degrees Celsius). In current tokamak (and other) magnetic fusion experiments, insufficient fusion energy is produced to maintain the plasma temperature, and constant external heating must be supplied. Chinese researchers set up the Experimental Advanced Superconducting Tokamak (EAST) in 2006, which, according to a November 2018 test, can reportedly sustain a plasma temperature of 100 million degrees Celsius, sufficient for initiating fusion between hydrogen atoms.
Ohmic heating ~ inductive mode
Since the plasma is an electrical conductor, it is possible to heat the plasma by inducing a current through it; the induced current that provides most of the poloidal field is also a major source of initial heating.
The heating caused by the induced current is called ohmic (or resistive) heating; it is the same kind of heating that occurs in an electric light bulb or in an electric heater. The heat generated depends on the resistance of the plasma and the amount of electric current running through it. But as the temperature of heated plasma rises, the resistance decreases and ohmic heating becomes less effective. It appears that the maximum plasma temperature attainable by ohmic heating in a tokamak is 20–30 million degrees Celsius. To obtain still higher temperatures, additional heating methods must be used.
The current is induced by continually increasing the current through an electromagnetic winding linked with the plasma torus: the plasma can be viewed as the secondary winding of a transformer. This is inherently a pulsed process because there is a limit to the current through the primary (there are also other limitations on long pulses). Tokamaks must therefore either operate for short periods or rely on other means of heating and current drive.
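The pulse-length limit follows directly from the transformer analogy: the loop voltage around the plasma equals the rate of change of flux in the primary, V = dΦ/dt, so a finite flux swing buys a finite pulse. A sketch with illustrative numbers (not machine parameters from the text):

```python
# Inductive pulse length from the transformer action: a primary that
# can swing its flux by a fixed number of volt-seconds can sustain a
# given loop voltage only for a limited time. Numbers are illustrative.

flux_swing_vs = 100.0  # volt-seconds available from the primary
loop_voltage = 1.0     # volts needed to sustain the plasma current

pulse_length = flux_swing_vs / loop_voltage
print(f"maximum inductive pulse ~ {pulse_length:.0f} s")
```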
Magnetic compression
A gas can be heated by sudden compression. In the same way, the temperature of a plasma is increased if it is compressed rapidly by increasing the confining magnetic field. In a tokamak, this compression is achieved simply by moving the plasma into a region of higher magnetic field (i.e., radially inward). Since plasma compression brings the ions closer together, the process has the additional benefit of facilitating attainment of the required density for a fusion reactor.
Magnetic compression was an area of research in the early "tokamak stampede", and was the purpose of one major design, the ATC. The concept has not been widely used since then, although a somewhat similar concept is part of the General Fusion design.
Neutral-beam injection
Neutral-beam injection involves the introduction of high energy (rapidly moving) atoms or molecules into an ohmically heated, magnetically confined plasma within the tokamak.
The high energy atoms originate as ions in an arc chamber before being extracted through a high voltage grid set. The term "ion source" is used to generally mean the assembly consisting of a set of electron emitting filaments, an arc chamber volume, and a set of extraction grids. A second device, similar in concept, is used to separately accelerate electrons to the same energy. The much lighter mass of the electrons makes this device much smaller than its ion counterpart. The two beams then intersect, where the ions and electrons recombine into neutral atoms, allowing them to travel through the magnetic fields.
Once the neutral beam enters the tokamak, interactions with the main plasma ions occur. This has two effects. One is that the injected atoms re-ionize and become charged, thereby becoming trapped inside the reactor and adding to the fuel mass. The other is that the process of being ionized occurs through impacts with the rest of the fuel, and these impacts deposit energy in that fuel, heating it.
This form of heating has no inherent energy (temperature) limitation, in contrast to the ohmic method, but its rate is limited by the current in the injectors. Ion source extraction voltages are typically on the order of 50–100 kV, and high-voltage, negative ion sources (-1 MV) are being developed for ITER. The ITER Neutral Beam Test Facility in Padova will be the first ITER facility to start operation.
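The injected power is simply current times voltage for the accelerated beam. A back-of-envelope sketch using the voltages quoted above and an assumed 40 A of beam current:

```python
# Neutral-beam power P = I * V. The 40 A beam current is an assumption;
# the 100 kV and 1 MV figures are the extraction voltages quoted above.

def beam_power_mw(current_a, voltage_kv):
    return current_a * voltage_kv * 1e3 / 1e6

print(f"40 A at  100 kV: {beam_power_mw(40, 100):.0f} MW")
print(f"40 A at 1000 kV: {beam_power_mw(40, 1000):.0f} MW")
```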
While neutral beam injection is used primarily for plasma heating, it can also be used as a diagnostic tool and in feedback control by making a pulsed beam consisting of a string of brief 2–10 ms beam blips. Deuterium is a primary fuel for neutral beam heating systems and hydrogen and helium are sometimes used for selected experiments.
Radio-frequency heating
High-frequency electromagnetic waves are generated by oscillators (often by gyrotrons or klystrons) outside the torus. If the waves have the correct frequency (or wavelength) and polarization, their energy can be transferred to the charged particles in the plasma, which in turn collide with other plasma particles, thus increasing the temperature of the bulk plasma. Various techniques exist including electron cyclotron resonance heating (ECRH) and ion cyclotron resonance heating. This energy is usually transferred by microwaves.
Particle inventory
Plasma discharges within the tokamak's vacuum chamber consist of energized ions and atoms. The energy from these particles eventually reaches the inner wall of the chamber through radiation, collisions, or lack of confinement. The heat from the particles is removed via conduction through the chamber's inner wall to a water-cooling system, where the heated water proceeds to an external cooling system through convection.
Turbomolecular or diffusion pumps allow for particles to be evacuated from the bulk volume and cryogenic pumps, consisting of a liquid helium-cooled surface, serve to effectively control the density throughout the discharge by providing an energy sink for condensation to occur. When done correctly, the fusion reactions produce large amounts of high energy neutrons. Being electrically neutral and relatively tiny, the neutrons are not affected by the magnetic fields nor are they stopped much by the surrounding vacuum chamber.
The neutron flux is reduced significantly at a purpose-built neutron shield boundary that surrounds the tokamak in all directions. Shield materials vary, but are generally materials made of atoms whose mass is close to that of the neutron, because these work best to absorb the neutron and its energy. Good candidate materials include those with much hydrogen, such as water and plastics. Boron atoms are also good absorbers of neutrons. Thus, concrete and polyethylene doped with boron make inexpensive neutron shielding materials.
Once freed, the neutron has a relatively short half-life of about 10 minutes before it decays into a proton, an electron, and an antineutrino, with the emission of energy. When the time comes to actually try to make electricity from a tokamak-based reactor, some of the neutrons produced in the fusion process would be absorbed by a liquid metal blanket and their kinetic energy would be used in heat-transfer processes to ultimately turn a generator.
Experimental tokamaks
Currently in operation
(in chronological order of start of operations)
1960s: TM1-MH (since 1977 as Castor; since 2007 as Golem) in Prague, Czech Republic. In operation at the Kurchatov Institute since the early 1960s; renamed Castor in 1977 and moved to IPP CAS, Prague. In 2007 it was moved to FNSPE, Czech Technical University in Prague, and renamed Golem.
1975: T-10, in Kurchatov Institute, Moscow, Russia (formerly Soviet Union); 2 MW
1986: DIII-D, in San Diego, United States; operated by General Atomics since the late 1980s
1987: STOR-M, University of Saskatchewan, Canada; its predecessor, STOR1-M built in 1983, was used for the first demonstration of alternating current in a tokamak.
1988: Tore Supra, but renamed to WEST in 2016, at the CEA, Cadarache, France
1989: Aditya, at Institute for Plasma Research (IPR) in Gujarat, India
1989: COMPASS, in Prague, Czech Republic; in operation since 2008, previously operated from 1989 to 1999 in Culham, United Kingdom
1990: FTU, in Frascati, Italy
1991: ISTTOK, at the Instituto de Plasmas e Fusão Nuclear, Lisbon, Portugal
1991: ASDEX Upgrade, in Garching, Germany
1992: H-1NF (H-1 National Plasma Fusion Research Facility) based on the H-1 Heliac device built by the Australian National University's plasma physics group and in operation since 1992
1992: Tokamak à configuration variable (TCV), at the Swiss Plasma Center, EPFL, Switzerland
1993: HBT-EP Tokamak, at Columbia University in New York City
1994: TCABR, at the University of São Paulo, São Paulo, Brazil; this tokamak was transferred from CRPP (now Swiss Plasma Center) in Switzerland
1996: Pegasus Toroidal Experiment at the University of Wisconsin–Madison; in operation since the late 1990s
1999: NSTX in Princeton, New Jersey
1999: Globus-M in Ioffe Institute, Saint Petersburg, Russia
2000: ETE at the National Institute for Space Research, São Paulo, Brazil
2002: HL-2A, in Chengdu, China
2006: EAST (HT-7U), in Hefei, at The Hefei Institutes of Physical Science, China (ITER member)
2007: QUEST, in Fukuoka, Japan
2008: KSTAR, in Daejeon, South Korea (ITER member)
2010: JT-60SA, in Naka, Japan (ITER member); upgraded from the JT-60.
2012: Medusa CR, in Cartago, at the Costa Rica Institute of Technology, Costa Rica
2012: SST-1, in Gandhinagar, at the Institute for Plasma Research, India (ITER member)
2012: IR-T1, Islamic Azad University, Science and Research Branch, Tehran, Iran
2015: ST25-HTS at Tokamak Energy Ltd in Culham, United Kingdom
2017: KTM – this is an experimental thermonuclear facility for research and testing of materials under energy load conditions close to ITER and future energy fusion reactors, Kazakhstan
2018: ST40 at Tokamak Energy Ltd in Oxford, United Kingdom
2020: HL-2M China National Nuclear Corporation and the Southwestern Institute of Physics, China
2020: MAST Upgrade, in Culham, United Kingdom
Previously operated
1960s: T-3 and T-4, in Kurchatov Institute, Moscow, Russia (formerly Soviet Union); T-4 in operation in 1968.
1963: LT-1, the Australian National University's plasma physics group built a device to explore toroidal configurations, independently discovering the tokamak layout
1970: Stellarator C reopens as the Symmetric Tokamak in May at PPPL
1971–1980: Texas Turbulent Tokamak, University of Texas at Austin, US
1972: The Adiabatic Toroidal Compressor begins operation at PPPL
1973–1976: Tokamak de Fontenay aux Roses (TFR), near Paris, France
1973–1979: Alcator A, MIT, US
1975: Princeton Large Torus begins operation at PPPL
1978–1987: Alcator C, MIT, US
1978–2013: TEXTOR, in Jülich, Germany
1979–1998: MT-1 Tokamak, Budapest, Hungary (Built at the Kurchatov Institute, Russia, transported to Hungary in 1979, rebuilt as MT-1M in 1991)
1980–1990: Tokoloshe Tokamak, Atomic Energy Board, South Africa
1980–2004: TEXT/TEXT-U, University of Texas at Austin, US
1982–1997: TFTR, Princeton University, US
1983–2023: Joint European Torus (JET), in Culham, United Kingdom
1983–2000: Novillo Tokamak, at the Instituto Nacional de Investigaciones Nucleares, in Mexico City, Mexico
1984–1992: HL-1 Tokamak, in Chengdu, China
1985–2010: JT-60, in Naka, Ibaraki Prefecture, Japan (upgraded 2015–2018 to the Super Advanced model, JT-60SA)
1987–1999: Tokamak de Varennes; Varennes, Canada; operated by Hydro-Québec and used by researchers from Institut de recherche en électricité du Québec (IREQ) and the Institut national de la recherche scientifique (INRS)
1988–2005: T-15, in Kurchatov Institute, Moscow, Russia (formerly Soviet Union); 10 MW
1991–1998: START, in Culham, United Kingdom
1990s–2001: COMPASS, in Culham, United Kingdom
1994–2001: HL-1M Tokamak, in Chengdu, China
1999–2006: UCLA Electric Tokamak, in Los Angeles, US
1999–2014: MAST, in Culham, United Kingdom
1992–2016: Alcator C-Mod, MIT, Cambridge, US
1995–2013: HT-7, at the Institute of Plasma Physics, Hefei, China
Planned
ITER, international project in Cadarache, France; 500 MW; construction began in 2010, first plasma expected in 2025. Expected fully operational by 2035.
DEMO; 2000 MW, continuous operation, connected to power grid. Planned successor to ITER; construction to begin in 2040 according to EUROfusion 2018 timetable.
CFETR, also known as the "China Fusion Engineering Test Reactor"; 200 MW; a next-generation Chinese tokamak device.
K-DEMO in South Korea; 2200–3000 MW, a net electric generation on the order of 500 MW is planned; construction is targeted by 2037.
Spherical Tokamak for Energy Production (STEP), a UK project planning to produce a burning plasma by 2035.
SPARC, a development of Commonwealth Fusion Systems (CFS) in collaboration with the Massachusetts Institute of Technology (MIT) Plasma Science and Fusion Center (PSFC), in Devens, Massachusetts. Expected to achieve energy gain in 2026 at a fraction of ITER's size by utilizing high magnetic fields.
| Technology | Power generation | null |
31456 | https://en.wikipedia.org/wiki/Truck | Truck | A truck or lorry is a motor vehicle designed to transport freight, carry specialized payloads, or perform other utilitarian work. Trucks vary greatly in size, power, and configuration, but the vast majority feature body-on-frame construction, with a cabin that is independent of the payload portion of the vehicle. Smaller varieties may be mechanically similar to some automobiles. Commercial trucks can be very large and powerful and may be configured to be mounted with specialized equipment, such as in the case of refuse trucks, fire trucks, concrete mixers, and suction excavators. In American English, a commercial vehicle without a trailer or other articulation is formally a "straight truck" while one designed specifically to pull a trailer is not a truck but a "tractor".
The majority of trucks currently in use are powered by diesel engines, although small- to medium-size trucks with gasoline engines exist in North America. Electrically powered trucks are more popular in China and Europe than elsewhere. In the European Union, vehicles with a gross combination mass of up to 3.5 tonnes are defined as light commercial vehicles, and those over 3.5 tonnes as large goods vehicles.
History
Steam wagons
Trucks and cars have a common ancestor: the steam-powered fardier Nicolas-Joseph Cugnot built in 1769. However, steam wagons were not common until the mid-19th century. The roads of the time, built for horses and carriages, limited these vehicles to very short hauls, usually from a factory to the nearest railway station. The first semi-trailer appeared in 1881, towed by a steam tractor manufactured by De Dion-Bouton. Steam-powered wagons were sold in France and the United States until the eve of World War I, and in the United Kingdom until 1935, when a change in road tax rules made them uneconomic against the new diesel lorries.
Internal combustion
In 1895, Karl Benz designed and built the first internal combustion truck. Later that year some of Benz's trucks were modified by Netphener to become buses. A year later, in 1896, another internal combustion engine truck was built by Gottlieb Daimler, the Daimler Motor Lastwagen. Other companies, such as Peugeot, Renault and Büssing, also built their own versions. The first truck in the United States was built by Autocar in 1899 and was available with a choice of engines. Another early American truck was built by George Eldridge of Des Moines, Iowa, in 1903. It was powered by an engine with two opposed cylinders and had a chain drive. A 1903 Eldridge truck is displayed at the Iowa 80 Trucking Museum, Walcott, Iowa. Trucks of the era mostly used two-cylinder engines and had modest carrying capacities. After World War I, several advances were made: electric starters, and 4-, 6-, and 8-cylinder engines.
Diesel engines
Although it had been invented in 1897, the diesel engine did not appear in production trucks until Benz introduced it in 1923. The diesel engine was not common in trucks in Europe until the 1930s. In the United States, Autocar introduced diesel engines for heavy applications in the mid-1930s. Demand was high enough that Autocar launched the "DC" model (diesel conventional) in 1939. However, it took much longer for diesel engines to be broadly accepted in the US: gasoline engines were still in use on heavy trucks in the 1970s.
Electric motors
Electrically powered trucks predate internal combustion ones and have been continuously available since the mid-19th century. In the 1920s Autocar Trucks was the first of the major truck manufacturers to offer a range of electric trucks for sale. Electric trucks were successful for urban delivery roles and as specialized work vehicles like forklifts and pushback tugs. The higher energy density of liquid fuels soon led to the decline of electric trucks in favor of, first, gasoline and then diesel and CNG-fueled engines, until battery technology advanced in the 2000s, when new chemistries and higher-volume production broadened the applicability of electric propulsion to trucks in many more roles. Today, manufacturers are electrifying all types of trucks ahead of national regulatory requirements, with long-range over-the-road trucks being the most challenging.
Etymology
Truck is used in American English; the British English equivalent is lorry.
The first known usage of "truck" was in 1611, when it referred to the small strong wheels on ships' cannon carriages; it comes from the Greek trokhos, "wheel". In its extended usage, it came to refer to carts for carrying heavy loads, a meaning known since 1771. Its expanded application to "motor-powered load carrier" has been in usage since 1930, shortened from "motor truck", which dates back to 1901.
"Lorry" has a more uncertain origin, but probably has its roots in the rail transport industry, where the word is known to have been used in 1838 to refer to a type of truck (a goods wagon as in British usage, not a bogie as in the American), specifically a large flat wagon. It might derive from the verb lurry (to carry or drag along, or to lug) which was in use as early as 1664, but that association is not definitive. The expanded meaning of lorry, "self-propelled vehicle for carrying goods", has been in usage since 1911.
International variance
In the United States, Canada, and the Philippines, "truck" is usually reserved for commercial vehicles larger than regular passenger cars, but includes large SUVs, pickups, and other vehicles with an open load bed.
In Australia, New Zealand and South Africa, the word "truck" is mostly reserved for larger vehicles. In Australia and New Zealand, a pickup truck is frequently called a ute (short for "utility" vehicle), while in South Africa it is called a bakkie (Afrikaans: "small open container").
In the United Kingdom, India, Malaysia, Singapore, Ireland, and Hong Kong lorry is used instead of truck, but only for the medium and heavy types, while truck is used almost exclusively to refer to pickups.
Types by size
Ultra light
Often produced as variations of golf cars, with internal combustion or battery electric drive, these are used typically for off-highway use on estates, golf courses, and parks. While not suitable for highway use, some variations may be licensed as slow speed vehicles for operation on streets, generally as a body variation of a neighborhood electric vehicle. A few manufacturers produce specialized chassis for this type of vehicle, while Zap Motors markets a version of their Xebra electric tricycle (licensable in the U.S. as a motorcycle).
Very light
Popular in Europe and Asia, many mini-trucks are factory redesigns of light automobiles, usually with monocoque bodies. Specialized designs with substantial frames, such as Italian Piaggio models based upon Japanese designs (in this case by Daihatsu), are popular for use in "old town" sections of European cities that often have very narrow alleyways.
Regardless of name, these small trucks serve a wide range of uses. In Japan, they are regulated under the Kei car laws, which allow vehicle owners a break in taxes for buying a smaller and less-powerful vehicle (currently, the engine is limited to 660 cc displacement). These vehicles are used as on-road utility vehicles in Japan. These Japanese-made mini-trucks that were manufactured for on-road use are competing with off-road ATVs in the United States, and import regulations require that these mini-trucks have a speed governor as they are classified as low-speed vehicles. These vehicles have found uses in construction, large campuses (government, university, and industrial), agriculture, cattle ranches, amusement parks, and replacements for golf carts.
Major mini-truck manufacturers and their brands include: Daihatsu Hijet, Honda Acty, Tata Ace, Mazda Scrum, Mitsubishi Minicab, Subaru Sambar, and Suzuki Carry.
Light
Light trucks are car-sized and are used by individuals and businesses alike. In the EU they may not weigh more than 3.5 tonnes and are allowed to be driven with a driving licence for cars.
Pickup trucks, called utes in Australia and New Zealand, are common in North America and some regions of Latin America, Asia, and Africa, but not so in Europe, where this size of commercial vehicle is most often made as vans.
Medium
Medium trucks are larger than light but smaller than heavy trucks. In the US, they are defined as weighing between 14,000 and 26,000 pounds. For the UK and the EU the weight is between 3.5 and 7.5 tonnes. Local delivery and public service vehicles (dump trucks, garbage trucks and fire-fighting trucks) are normally around this size.
Heavy
Heavy trucks are the largest on-road trucks, Class 8. These include vocational applications such as heavy dump trucks, concrete pump trucks, and refuse hauling, as well as ubiquitous long-haul 4×2 and 6×4 tractor units.
Road damage and wear increase very rapidly with the axle weight. The number of steering axles and the suspension type also influence the amount of road wear. In many countries with good roads a six-axle truck may have a maximum weight of 44 tonnes or more.
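A commonly cited rule of thumb, the generalized fourth-power law from the AASHO road tests (an outside figure, not stated in the text), holds that pavement damage grows roughly with the fourth power of axle load; a minimal sketch:

```python
# Fourth-power rule of thumb for road wear: damage ~ (axle load)^4,
# here relative to a 10-tonne reference axle. The law itself is an
# assumption from the AASHO road tests, not from the text.

def relative_damage(axle_load_t, reference_t=10.0):
    return (axle_load_t / reference_t) ** 4

for load in (5, 10, 13):
    print(f"{load:2d} t axle: {relative_damage(load):5.2f}x the wear of a 10 t axle")
```

On this rule, a 13-tonne axle does nearly three times the damage of a 10-tonne one, which is why axle count and spacing are regulated as tightly as gross weight.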
Off-road
Off-road trucks include standard, extra heavy-duty highway-legal trucks, typically outfitted with off-road features such as a front driving axle and special tires for applications such as logging and construction, and purpose-built off-road vehicles unconstrained by weight limits, such as the Liebherr T 282B mining truck.
Maximum sizes by country
Australia has complex regulations over weight and length, including axle spacing, type of axle/axle group, rear overhang, kingpin to rear of trailer, drawbar length, ground clearance, as well as height and width laws. These limits are some of the highest in the world: B-doubles and the road trains used in the outback are among the heaviest and longest road vehicles anywhere.
The European Union also has complex regulations. The number and spacing of axles, steering, single or dual tires, and suspension type all affect maximum weights. The length of a truck, of a trailer, from axle to hitch point, kingpin to rear of trailer, and the turning radius are all regulated. In addition, there are special rules for carrying containers, and countries can set their own rules for local traffic.
The United States Federal Bridge Law deals with the relation between the gross weight of the truck, the number of axles, the weight on and the spacing between the axles that the truck can have on the Interstate highway system. Each State determines the maximum permissible vehicle, combination, and axle weight on state and local roads.
Uniquely, the State of Michigan has a gross vehicle weight limit of 164,000 pounds, roughly twice the U.S. federal limit. A measure to change the law was defeated in the Michigan Senate in 2019.
Design
Almost all trucks share a common construction: they are made of a chassis, a cab, an area for placing cargo or equipment, axles, suspension and roadwheels, an engine and a drivetrain. Pneumatic, hydraulic, water, and electrical systems may also be present. Many also tow one or more trailers or semi-trailers.
Cab
The "cab", or "cabin" is an enclosed space where the driver is seated. A "sleeper" is a compartment attached to or integral with the cab where the driver can rest while not driving, sometimes seen in semi-trailer trucks.
There are several cab configurations:
"Cab over engine" (COE) or "flat nose"; where the driver is seated above the front axle and the engine. This design is almost ubiquitous in Europe, where overall truck lengths are strictly regulated, and is widely used in the rest of the world. They were common in North American heavy-duty trucks but lost prominence when permitted length was extended in the early 1980s. Nevertheless, this design is still popular in North America among medium- and light-duty trucks. To reach the engine, the whole cab tilts forward, earning this design the name of "tilt-cab". This type of cab is especially suited to the delivery conditions in Europe where many roads require the short turning radius afforded by the shorter wheelbase of the cab over engine layout.
"Cab-under" is where the driver is positioned at the front at the lowest point possible as means for maximum cargo space as possible. Examples were made by Hunslet, Leyland, Bussing, Strick and Steinwinter.
"Conventional" cabs seated the driver behind the engine, as in most passenger cars or pickup trucks. Many new cabs are very streamlined, with a sloped hood (bonnet) and other features to lower drag. Conventional cabs are the most common in North America, Australia, and China, and are known in the UK as "American cabs" and in the Netherlands as "torpedo cabs".
"Cab beside engine" designs are used for terminal tractors at shipping yards and for other specialist vehicles carrying long loads such as pipes. This type is often made by replacing the passenger side of a cab-over truck with an extended section of the load bed.
A further step from this is the side loading forklift that can be described as a specially fabricated vehicle with the same properties as a truck of this type, in addition to the ability to pick up its own load.
Engines and motors
Most small trucks such as sport utility vehicles (SUVs), vans or pickups, and even light medium-duty trucks in North America, China, and Russia use gasoline engines (petrol engines), but many diesel-engined models are now being produced. Most of the heavier trucks use four-stroke diesel engines with a turbocharger and intercooler. Huge off-highway trucks use locomotive-type engines such as a V12 Detroit Diesel two-stroke engine. A large proportion of refuse trucks in the United States employ CNG (compressed natural gas) engines for their low fuel cost and reduced carbon emissions.
A significant proportion of North American manufactured trucks use an engine built by the last remaining major independent engine manufacturer (Cummins) but most global OEMs such as Volvo Trucks and Daimler AG promote their own "captive" engines.
In the European Union, all new truck engines must comply with Euro VI emission regulations, and Euro 7 from the late 2020s has stricter exhaust limits and also limits air pollution from brakes and tires.
Several alternative technologies are competing to displace the use of diesel engines in heavy trucks. CNG engines are widely used in the US refuse industry and in concrete mixers, among other short-range vocations, but range limitations have prevented their broader uptake in freight hauling applications. Heavy electric trucks and hydrogen-powered trucks are new to the market in 2021, but major freight haulers are interested. Although cars will be phased out first, the phase-out of fossil fuel vehicles also includes trucks. According to The Economist magazine, "Electric lorries will probably run on hydrogen, not batteries, which are too expensive." Other researchers say that once faster chargers are available, batteries will become competitive against diesel for all but perhaps the heaviest trucks.
Drivetrain
Small trucks use the same type of transmissions as almost all cars, having either an automatic transmission or a manual transmission with synchromesh (synchronizers). Bigger trucks often use manual transmissions without synchronizers, saving bulk and weight, although synchromesh transmissions are used in larger trucks as well. Transmissions without synchronizers, known as "crash boxes", require double-clutching for each shift (which can lead to repetitive motion injuries) or a technique known colloquially as "floating", a method of changing gears that does not use the clutch except for starts and stops. Floating is used to avoid the physical effort of double-clutching, especially with non-power-assisted clutches, and gives faster shifts and less clutch wear.
Double-clutching allows the driver to control the engine and transmission revolutions to synchronize so that a smooth shift can be made; for example, when upshifting, the accelerator pedal is released and the clutch pedal is depressed while the gear lever is moved into neutral, the clutch pedal is then released and quickly pushed down again while the gear lever is moved to the next higher gear. Finally, the clutch pedal is released and the accelerator pedal pushed down to obtain the required engine speed. Although this is a relatively fast movement, perhaps a second or so while the transmission is in neutral, it allows the engine speed to drop and synchronize engine and transmission revolutions relative to the road speed. Downshifting is performed in a similar fashion, except the engine speed is now required to increase (while the transmission is in neutral) just the right amount in order to achieve the synchronization for a smooth, clash-free gear change. "Skip changing" is also widely used; in principle, the operation is the same as double-clutching, but it requires that neutral be held slightly longer than in a single-gear change.
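The upshift sequence above lends itself to a simple illustration. The following minimal Python sketch encodes the described steps as an ordered list of actions; all names here are invented for the example and do not come from any driving standard.

    # Illustrative encoding of the double-clutch upshift described above.
    UPSHIFT_STEPS = [
        "release accelerator",
        "depress clutch and move gear lever to neutral",
        "release clutch (engine speed drops while in neutral)",
        "depress clutch and move gear lever to the next higher gear",
        "release clutch and apply accelerator to the required engine speed",
    ]

    def perform_upshift(act=print):
        # Execute each step in order; `act` is any callback taking a string.
        for step in UPSHIFT_STEPS:
            act(step)

    perform_upshift()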
Common North American setups include 9, 10, 13, 15, and 18 speeds. Automatic and automated manual transmissions for heavy trucks are becoming more and more common, due to advances both in transmission and engine power. In Europe, 8, 10, 12, and 16 gears are common on larger trucks with a manual transmission, while conventional automatic or automated manual transmissions would have anything from 5 to 12 gears. Almost all heavy truck transmissions are of the "range and split" (double H shift pattern) type, where the range change and so-called half gears or splits are air-operated and always preselected before the main gear selection.
Frame
A truck frame consists of two parallel boxed (tubular) or C-shaped rails, or beams, held together by crossmembers. These frames are referred to as ladder frames due to their resemblance to a ladder if tipped on end. The rails consist of a tall vertical section (two if boxed) and two shorter horizontal flanges. The height of the vertical section provides opposition to vertical flex when weight is applied to the top of the frame (beam resistance). Though typically flat the whole length on heavy-duty trucks, the rails may sometimes be tapered or arched for clearance around the engine or over the axles. The holes in rails are used for mounting vehicle components and running wires and hoses, or for measuring and adjusting the orientation of the rails at the factory or repair shop.
The frame is usually made of steel, but can be made (whole or in part) of aluminum for a lighter weight. A tow bar may be found attached at one or both ends, but heavy tractors almost always make use of a fifth wheel hitch.
Body types
Box trucks have walls and a roof, making an enclosed load space. The rear has doors for unloading; a side door is sometimes fitted.
Chassis cab trucks have a fully enclosed cab at the front, with bare chassis frame-rails behind, suitable for subsequent permanent attachment of a specialized payload, like a fire-truck or ambulance body.
Concrete mixers have a rotating drum on an inclined axis, rotating in one direction to mix, and in the other to discharge the concrete down chutes. Because of the weight and power requirements of the drum body and rough construction sites, mixers have to be very heavy duty.
Dual drive/Steer trucks are vehicles used to steer the rear of trailers.
Dump trucks ("tippers" in the UK) transport loose material such as sand, gravel, or dirt for construction. A typical dump truck has an open-box bed, which is hinged at the rear and lifts at the front, allowing the material in the bed to be unloaded ("dumped") on the ground behind the truck.
Flatbed trucks have an entirely flat, level platform body. This allows for quick and easy loading but has no protection for the load. Hanging or removable sides are sometimes fitted, often in the form of a stakebody.
Refrigerator trucks have insulated panels as walls and a roof and floor, used for transporting fresh and frozen cargo such as ice cream, food, vegetables, and prescription drugs. They are mostly equipped with double-wing rear doors, but a side door is sometimes fitted.
Refuse trucks have a specialized body for collecting and, often, compacting trash collected from municipal, commercial, and industrial sites. This application has the widest use of the cab-over configuration in North America, to provide better maneuverability in tight situations. They are also among the most severe-duty and highest GVWR trucks on public roads.
Semi-tractors ("artics" in the UK) have a fifth wheel for towing a semi-trailer instead of a body.
Tank trucks ("tankers" in the UK) are designed to carry liquids or gases. They usually have a cylindrical tank lying horizontally on the chassis. Many variants exist due to the wide variety of liquids and gases that can be transported.
Wreckers ("recovery lorries" in the UK) are used to recover and/or tow disabled vehicles. They are normally equipped with a boom with a cable; wheel/chassis lifts are becoming common on newer trucks.
Sales and sales issues
Manufacturers
Truck market worldwide
Driving
In many countries, driving a truck requires a special driving license. The requirements and limitations vary with each different jurisdiction.
Australia
In Australia, a truck driver's license is required for any motor vehicle with a Gross Vehicle Mass (GVM) exceeding . The motor vehicle classes are further divided as follows:
Combination
HC: Heavy Combination, a typical prime mover plus semi-trailer combination.
MC: Multi Combination, e.g., B Doubles/road trains
Rigid
LR: Light rigid: a rigid vehicle with a GVM of more than but not more than . Any towed trailer must not weigh more than GVM.
MR: Medium rigid: a rigid vehicle with 2 axles and a GVM of more than . Any towed trailer must not weigh more than GVM. Also includes vehicles in class LR.
HR: Heavy Rigid: a rigid vehicle with three or more axles and a GVM of more than . Any towed trailer must not weigh more than GVM. Also includes articulated buses and vehicles in class MR.
Heavy vehicle transmission
There is also a heavy vehicle transmission condition: if the test for a class HC, HR, or MC license is passed in a vehicle fitted with an automatic or synchromesh transmission, the driver's license will be restricted to vehicles of that class fitted with a synchromesh or automatic transmission. To have the condition removed, a person needs to pass a practical driving test in a vehicle with a non-synchromesh transmission (constant mesh or crash box).
Europe
Driving licensing has been harmonized throughout the European Union and the EEA (and practically all European non-member states), so that common rules apply within Europe (see European driving licence). As an overview, to drive a vehicle weighing more than for commercial purposes requires a specialist license (the type varies depending on the use of the vehicle and number of seats). For licenses first acquired after 1997, that weight was reduced to , not including trailers.
Since 2013, the C1 license category allows driving vehicles over 3.5 tonnes and up to 7.5 tonnes. The C license category allows driving vehicles over 3.5 tonnes with a trailer up to 750 kg, and the CE category allows driving category C vehicles with a trailer over 750 kg.
South Africa
To drive any vehicle with a GVM exceeding , a code C1 drivers license is required. Furthermore, if the vehicle exceeds , a code C license becomes necessary.
To drive any vehicle in South Africa towing a trailer with a GVM more than , further restrictions apply and the driver must possess a license suitable for the GVM of the total combination as well as an articulated endorsement. This is indicated with the letter "E" prefixing the license code.
In addition, any vehicle designed to carry goods or passengers may only be driven by a driver possessing a Public Driver's Permit, (or PrDP) of the applicable type. This is an additional license that is added to the DL card of the operator and subject to annual renewal unlike the five-year renewal period of a normal license.
The requirements for obtaining the different classes are below.
"G": Required for the transport of general goods, requires a criminal record check and a fee on issuing and renewal.
"P": Required for the transport of paying passengers, requires a more stringent criminal record check, additionally the driver must be over the age of 21 at time of issue. A G class PrDP will be issued at the same time.
"D": Required for the transport of dangerous materials, requires all of the same checks as class P., and in addition the driver must be over 25 at time of issue.
United States
In the United States, a commercial driver's license is required to drive any type of commercial vehicle weighing or more. The federal government regulates how many hours a driver may be on the clock and how much rest and sleep time is required (e.g., 11 hours driving within a 14-hour on-duty window, followed by 10 hours off, with a maximum of 70 hours/8 days or 60 hours/7 days and a 34-hour restart). Violations are often subject to significant penalties. Instruments to track each driver's hours must sometimes be fitted.
In 2006, the US trucking industry employed 1.8 million drivers of heavy trucks.
There is a shortage of willing trained long-distance truck drivers. Part of the reason for this is the economic fallout from deregulation of the trucking industry. Michael H. Belzer, associate professor in the economics department at Wayne State University and co-author of Sweatshops on Wheels: Winners and Losers in Trucking Deregulation, argues that low pay, bad working conditions and unsafe conditions have been a direct result of deregulation. The book cites poor working conditions and an unfair pay system as responsible for high annual employee turnover in the industry.
In 2018, in the US, 5,096 large trucks and buses were involved in fatal crashes:
4,862 large trucks were involved in fatal crashes,
112,000 large trucks were involved in injury crashes,
414,000 large trucks were involved in property-damage-only crashes.
Environmental effects
Like cars, trucks contribute to air, noise, and water pollution. Unlike cars, most trucks run on diesel, and diesel exhaust is especially dangerous for health. Some countries outside the EU have different vehicle emission standards for trucks and cars.
NOx and particulates emitted by trucks are very dangerous to health, causing thousands of early deaths annually in the US alone. As older trucks are usually the worst polluters, many cities have banned 20th-century trucks. Air pollution also threatens the health of professional truck drivers.
Over a quarter of global transport emissions are from road freight (in 2021, over 1,700 million tonnes from medium and heavy trucks), so many countries are further restricting truck emissions to help limit climate change. Many environmental organizations favor laws and incentives to encourage the switch from road to rail, especially in Europe. Several countries have pledged that 30% of sales of trucks and buses will be zero emission by 2030.
With respect to noise pollution, trucks emit considerably higher sound levels at all speeds compared to typical cars; this contrast is particularly strong with heavy-duty trucks. There are several aspects of truck operations that contribute to the overall sound that is emitted. Continuous sounds are those from tires rolling on the roadway and the constant hum of their diesel engines at highway speeds. Less frequent noises, but perhaps more noticeable, are things like the repeated sharp-pitched whistle of a turbocharger on acceleration, or the abrupt blare of an exhaust brake retarder when traversing a downgrade. There has been noise regulation put in place to help control where and when the use of engine braking retarders are allowed.
Operator health and safety
A truck cab is a hazard control that protects the truck operator from hazardous airborne pollutants. As an enclosure, it is an example of an engineering control. Enclosed operator cabs have been used on agriculture, mining, and construction vehicles for several decades. Most modern-day enclosed cabs have heating, ventilation, and air conditioning (HVAC) systems for primarily maintaining a comfortable temperature and providing breathable air for their occupants. Various levels of filtration can be incorporated into the HVAC system to remove airborne pollutants such as dusts, diesel particulate matter (DPM), and other aerosols.
Two key elements of an effective environmental enclosure are a good filtration system and an enclosure with good integrity (sealed isolation from the outside environment). It is recommended that a filtration system filter out at least 95% or greater of airborne respirable aerosols from the intake airflow, with an additional recirculation filtering component for the inside air. Good enclosure integrity is also needed to achieve positive pressure to prevent wind-driven aerosol penetration into the enclosure, as well as to minimize air leakage around the filtration system. Test methods and mathematical modeling of environmental enclosures are also beneficial for quantifying and optimizing filtration system designs, as well as maintaining optimum protection factor performance for enclosure occupants.
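A rough sense of how such modeling works can be given with a steady-state mass balance. The sketch below is a simplified illustration, assuming perfectly mixed cab air, a single filtered intake with a parallel unfiltered leakage path, and no recirculation filter; the function and parameter names are hypothetical.

    # Simplified steady-state enclosure model: filtered intake plus unfiltered
    # leakage, perfectly mixed cab air, no recirculation filtering (assumptions).
    def protection_factor(q_filtered, q_leak, efficiency):
        """Ratio of outside to inside aerosol concentration at steady state."""
        penetrating = q_filtered * (1.0 - efficiency) + q_leak
        return (q_filtered + q_leak) / penetrating

    # Example: a 95%-efficient intake filter with 10% of airflow entering as
    # leakage gives a protection factor of only about 7, which is why enclosure
    # integrity matters as much as filter efficiency.
    print(protection_factor(q_filtered=0.9, q_leak=0.1, efficiency=0.95))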
Operations issues
Taxes
Commercial trucks in the US pay higher road use taxes on a state level than other road vehicles and are subject to extensive regulation. A few reasons commercial trucks pay higher road use taxes: they are bigger and heavier than most other vehicles, and cause more wear and tear per hour on roadways; and trucks and their drivers are on the road for more hours per day. Rules on use taxes differ among jurisdictions.
Damage to pavement
The life of a pavement is measured by the number of passes of a vehicle axle. It may be evaluated using the Load Equivalency Factor, which states that the damage done by the pass of a vehicle axle is proportional to the 4th power of the axle weight, so a ten-ton axle consumes 10,000 times as much pavement life as a one-ton axle. For that reason, a loaded truck causes as much pavement wear as thousands of cars, and trucks are subject to higher taxes and highway tolls.
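In symbols, the rule of thumb (an empirical relationship derived from road-test data, not an exact physical law) is

$$\text{LEF} = \left(\frac{W}{W_{\text{ref}}}\right)^{4}, \qquad \left(\frac{10\ \text{t}}{1\ \text{t}}\right)^{4} = 10{,}000.$$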
Safety
Trucking accidents
In 2002 and 2004, there were over 5,000 fatalities related to trucking accidents in the United States. The trucking industry has since made significant efforts to increase safety regulation. By 2008, the industry had lowered the annual toll to just over 4,000 deaths, but trucking accidents remain an issue that causes thousands of deaths and injuries each year. Approximately 6,000 trucking accident fatalities occur annually in the United States. Fatalities are not the only issue caused by trucking accidents. Here are some of the environmental issues that arise with trucking accidents:
14.4% of trucking accidents cause cargo to spill
6.5% cause open flames
Following increased pressure from The Times "Cities Fit For Cycling" campaign and from other media in Spring 2012, warning signs are now displayed on the backs of many heavy goods vehicles (HGV). These signs are directed against a common type of accident that occurs when the large vehicle turns left at a junction: a cyclist trying to pass on the nearside can be crushed against the HGV's wheels, especially if the driver cannot see the cyclist. The signs, such as the winning design of the InTANDEM road safety competition launched in March 2012, advocate extra care when passing a large vehicle on the nearside.
HGV safety in the EU
In-vehicle speed limiters, set to 90 km/h, are required on commercial vehicles over 3.5 tonnes.
Front, side, and rear underrun protection is required on commercial vehicles over 3.5 tonnes.
Trucks must be fitted with blind-spot mirrors that give drivers a wider field of vision than conventional mirrors.
| Technology | Road transport | null |
31474 | https://en.wikipedia.org/wiki/Transcription%20factor | Transcription factor | In molecular biology, a transcription factor (TF) (or sequence-specific DNA-binding factor) is a protein that controls the rate of transcription of genetic information from DNA to messenger RNA, by binding to a specific DNA sequence. The function of TFs is to regulate—turn on and off—genes in order to make sure that they are expressed in the desired cells at the right time and in the right amount throughout the life of the cell and the organism. Groups of TFs function in a coordinated fashion to direct cell division, cell growth, and cell death throughout life; cell migration and organization (body plan) during embryonic development; and intermittently in response to signals from outside the cell, such as a hormone. There are approximately 1600 TFs in the human genome. Transcription factors are members of the proteome as well as regulome.
TFs work alone or with other proteins in a complex, by promoting (as an activator), or blocking (as a repressor) the recruitment of RNA polymerase (the enzyme that performs the transcription of genetic information from DNA to RNA) to specific genes.
A defining feature of TFs is that they contain at least one DNA-binding domain (DBD), which attaches to a specific sequence of DNA adjacent to the genes that they regulate. TFs are grouped into classes based on their DBDs. Other proteins such as coactivators, chromatin remodelers, histone acetyltransferases, histone deacetylases, kinases, and methylases are also essential to gene regulation, but lack DNA-binding domains, and therefore are not TFs.
TFs are of interest in medicine because TF mutations can cause specific diseases, and medications can be potentially targeted toward them.
Number
Transcription factors are essential for the regulation of gene expression and are, as a consequence, found in all living organisms. The number of transcription factors found within an organism increases with genome size, and larger genomes tend to have more transcription factors per gene.
There are approximately 2800 proteins in the human genome that contain DNA-binding domains, and 1600 of these are presumed to function as transcription factors, though other studies indicate it to be a smaller number. Therefore, approximately 10% of genes in the genome code for transcription factors, which makes this family the single largest family of human proteins. Furthermore, genes are often flanked by several binding sites for distinct transcription factors, and efficient expression of each of these genes requires the cooperative action of several different transcription factors (see, for example, hepatocyte nuclear factors). Hence, the combinatorial use of a subset of the approximately 2000 human transcription factors easily accounts for the unique regulation of each gene in the human genome during development.
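As a back-of-envelope illustration of this combinatorial capacity (a counting argument, not a biological measurement), the number of distinct pairs or triples that can be formed from roughly 2000 factors already far exceeds the roughly 20,000 protein-coding genes to be regulated:

$$\binom{2000}{2} \approx 2.0 \times 10^{6}, \qquad \binom{2000}{3} \approx 1.3 \times 10^{9}.$$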
Mechanism
Transcription factors bind to either enhancer or promoter regions of DNA adjacent to the genes that they regulate based on recognizing specific DNA motifs. Depending on the transcription factor, the transcription of the adjacent gene is either up- or down-regulated. Transcription factors use a variety of mechanisms for the regulation of gene expression. These mechanisms include:
stabilize or block the binding of RNA polymerase to DNA
catalyze the acetylation or deacetylation of histone proteins. The transcription factor can either do this directly or recruit other proteins with this catalytic activity. Many transcription factors use one or the other of two opposing mechanisms to regulate transcription:
histone acetyltransferase (HAT) activity – acetylates histone proteins, which weakens the association of DNA with histones, making the DNA more accessible to transcription, thereby up-regulating transcription
histone deacetylase (HDAC) activity – deacetylates histone proteins, which strengthens the association of DNA with histones, making the DNA less accessible to transcription, thereby down-regulating transcription
recruit coactivator or corepressor proteins to the transcription factor DNA complex
Function
Transcription factors are one of the groups of proteins that read and interpret the genetic "blueprint" in the DNA. They bind to the DNA and help initiate a program of increased or decreased gene transcription. As such, they are vital for many important cellular processes. Below are some of the important functions and biological roles transcription factors are involved in:
Basal transcriptional regulation
In eukaryotes, an important class of transcription factors called general transcription factors (GTFs) are necessary for transcription to occur. Many of these GTFs do not actually bind DNA, but rather are part of the large transcription preinitiation complex that interacts with RNA polymerase directly. The most common GTFs are TFIIA, TFIIB, TFIID (see also TATA binding protein), TFIIE, TFIIF, and TFIIH. The preinitiation complex binds to promoter regions of DNA upstream to the gene that they regulate.
Differential enhancement of transcription
Other transcription factors differentially regulate the expression of various genes by binding to enhancer regions of DNA adjacent to regulated genes. These transcription factors are critical to making sure that genes are expressed in the right cell at the right time and in the right amount, depending on the changing requirements of the organism.
Development
Many transcription factors in multicellular organisms are involved in development. Responding to stimuli, these transcription factors turn on/off the transcription of the appropriate genes, which, in turn, allows for changes in cell morphology or activities needed for cell fate determination and cellular differentiation. The Hox transcription factor family, for example, is important for proper body pattern formation in organisms as diverse as fruit flies and humans. Another example is the transcription factor encoded by the sex-determining region Y (SRY) gene, which plays a major role in determining sex in humans.
Response to intercellular signals
Cells can communicate with each other by releasing molecules that produce signaling cascades within another receptive cell. If the signal requires upregulation or downregulation of genes in the recipient cell, often transcription factors will be downstream in the signaling cascade. Estrogen signaling is an example of a fairly short signaling cascade that involves the estrogen receptor transcription factor: Estrogen is secreted by tissues such as the ovaries and placenta, crosses the cell membrane of the recipient cell, and is bound by the estrogen receptor in the cell's cytoplasm. The estrogen receptor then goes to the cell's nucleus and binds to its DNA-binding sites, changing the transcriptional regulation of the associated genes.
Response to environment
Not only do transcription factors act downstream of signaling cascades related to biological stimuli but they can also be downstream of signaling cascades involved in environmental stimuli. Examples include heat shock factor (HSF), which upregulates genes necessary for survival at higher temperatures, hypoxia inducible factor (HIF), which upregulates genes necessary for cell survival in low-oxygen environments, and sterol regulatory element binding protein (SREBP), which helps maintain proper lipid levels in the cell.
Cell cycle control
Many transcription factors, especially some that are proto-oncogenes or tumor suppressors, help regulate the cell cycle and as such determine how large a cell will get and when it can divide into two daughter cells. One example is the Myc oncogene, which has important roles in cell growth and apoptosis.
Pathogenesis
Transcription factors can also be used to alter gene expression in a host cell to promote pathogenesis. A well-studied example of this is the transcription activator-like effectors (TAL effectors) secreted by Xanthomonas bacteria. When injected into plants, these proteins can enter the nucleus of the plant cell, bind plant promoter sequences, and activate transcription of plant genes that aid in bacterial infection. TAL effectors contain a central repeat region in which there is a simple relationship between the identity of two critical residues in sequential repeats and sequential DNA bases in the TAL effector's target site. This property likely makes it easier for these proteins to evolve in order to better compete with the defense mechanisms of the host cell.
Regulation
It is common in biology for important processes to have multiple layers of regulation and control. This is also true with transcription factors: Not only do transcription factors control the rates of transcription to regulate the amounts of gene products (RNA and protein) available to the cell but transcription factors themselves are regulated (often by other transcription factors). Below is a brief synopsis of some of the ways that the activity of transcription factors can be regulated:
Synthesis
Transcription factors (like all proteins) are transcribed from a gene on a chromosome into RNA, and then the RNA is translated into protein. Any of these steps can be regulated to affect the production (and thus activity) of a transcription factor. An implication of this is that transcription factors can regulate themselves. For example, in a negative feedback loop, the transcription factor acts as its own repressor: If the transcription factor protein binds the DNA of its own gene, it down-regulates the production of more of itself. This is one mechanism to maintain low levels of a transcription factor in a cell.
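A minimal sketch of such a negative feedback loop, assuming Hill-type repression of synthesis and first-order decay, is shown below; the rate constants are arbitrary toy values chosen only to make the behavior visible, not measured parameters.

    # Toy model of a transcription factor repressing its own gene:
    #   dx/dt = k / (1 + (x/K)**n) - d*x
    # (Hill-type repression of synthesis, first-order decay; toy parameters.)
    def simulate(k=1.0, K=0.5, n=2, d=0.1, x=0.0, dt=0.01, steps=5000):
        for _ in range(steps):
            x += dt * (k / (1.0 + (x / K) ** n) - d * x)
        return x

    # The level settles near 1.3 with these toy values, well below the
    # k/d = 10 it would reach without self-repression.
    print(round(simulate(), 2))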
Nuclear localization
In eukaryotes, transcription factors (like most proteins) are transcribed in the nucleus but are then translated in the cell's cytoplasm. Many proteins that are active in the nucleus contain nuclear localization signals that direct them to the nucleus. But, for many transcription factors, this is a key point in their regulation. Important classes of transcription factors such as some nuclear receptors must first bind a ligand while in the cytoplasm before they can relocate to the nucleus.
Activation
Transcription factors may be activated (or deactivated) through their signal-sensing domain by a number of mechanisms including:
ligand binding – Not only is ligand binding able to influence where a transcription factor is located within a cell but ligand binding can also affect whether the transcription factor is in an active state and capable of binding DNA or other cofactors (see, for example, nuclear receptors).
phosphorylation – Many transcription factors such as STAT proteins must be phosphorylated before they can bind DNA.
interaction with other transcription factors (e.g., homo- or hetero-dimerization) or coregulatory proteins
Accessibility of DNA-binding site
In eukaryotes, DNA is organized with the help of histones into compact particles called nucleosomes, where sequences of about 147 DNA base pairs make ~1.65 turns around histone protein octamers. DNA within nucleosomes is inaccessible to many transcription factors. Some transcription factors, so-called pioneer factors, are still able to bind their DNA binding sites on the nucleosomal DNA. For most other transcription factors, the nucleosome must be actively unwound by molecular motors such as chromatin remodelers. Alternatively, the nucleosome can be partially unwrapped by thermal fluctuations, allowing temporary access to the transcription factor binding site. In many cases, a transcription factor must compete for binding to its DNA binding site with other transcription factors and with histone or non-histone chromatin proteins. Pairs of transcription factors and other proteins can play antagonistic roles (activator versus repressor) in the regulation of the same gene.
Availability of other cofactors/transcription factors
Most transcription factors do not work alone. Many large TF families form complex homotypic or heterotypic interactions through dimerization. For gene transcription to occur, a number of transcription factors must bind to DNA regulatory sequences. This collection of transcription factors, in turn, recruit intermediary proteins such as cofactors that allow efficient recruitment of the preinitiation complex and RNA polymerase. Thus, for a single transcription factor to initiate transcription, all of these other proteins must also be present, and the transcription factor must be in a state where it can bind to them if necessary.
Cofactors are proteins that modulate the effects of transcription factors. Cofactors are interchangeable between specific gene promoters; the protein complex that occupies the promoter DNA and the amino acid sequence of the cofactor determine its spatial conformation. For example, certain steroid receptors can exchange cofactors with NF-κB, which is a switch between inflammation and cellular differentiation; thereby steroids can affect the inflammatory response and function of certain tissues.
Interaction with methylated cytosine
Transcription factors and methylated cytosines in DNA both have major roles in regulating gene expression. (Methylation of cytosine in DNA primarily occurs where cytosine is followed by guanine in the 5' to 3' DNA sequence, a CpG site.) Methylation of CpG sites in a promoter region of a gene usually represses gene transcription, while methylation of CpGs in the body of a gene increases expression. TET enzymes play a central role in demethylation of methylated cytosines. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene.
In one study, the DNA binding sites of 519 transcription factors were evaluated. Of these, 169 transcription factors (33%) did not have CpG dinucleotides in their binding sites, and 33 transcription factors (6%) could bind to a CpG-containing motif but did not display a preference for a binding site with either a methylated or unmethylated CpG. There were 117 transcription factors (23%) that were inhibited from binding to their binding sequence if it contained a methylated CpG site, 175 transcription factors (34%) that had enhanced binding if their binding sequence had a methylated CpG site, and 25 transcription factors (5%) were either inhibited or had enhanced binding depending on where in the binding sequence the methylated CpG was located.
TET enzymes do not specifically bind to methylcytosine except when recruited (see DNA demethylation). Multiple transcription factors important in cell differentiation and lineage specification, including NANOG, SALL4A, WT1, EBF1, PU.1, and E2A, have been shown to recruit TET enzymes to specific genomic loci (primarily enhancers) to act on methylcytosine (mC) and convert it to hydroxymethylcytosine hmC (and in most cases marking them for subsequent complete demethylation to cytosine). TET-mediated conversion of mC to hmC appears to disrupt the binding of 5mC-binding proteins including MECP2 and MBD (Methyl-CpG-binding domain) proteins, facilitating nucleosome remodeling and the binding of transcription factors, thereby activating transcription of those genes. EGR1 is an important transcription factor in memory formation. It has an essential role in brain neuron epigenetic reprogramming. The transcription factor EGR1 recruits the TET1 protein that initiates a pathway of DNA demethylation. EGR1, together with TET1, is employed in programming the distribution of methylation sites on brain DNA during brain development and in learning (see Epigenetics in learning and memory).
Structure
Transcription factors are modular in structure and contain the following domains:
DNA-binding domain (DBD), which attaches to specific sequences of DNA (enhancer or promoter sequences) adjacent to regulated genes. DNA sequences that bind transcription factors are often referred to as response elements.
Activation domain (AD), which contains binding sites for other proteins such as transcription coregulators. These binding sites are frequently referred to as activation functions (AFs) or transactivation domains (TADs), not to be confused with topologically associating domains (also abbreviated TAD).
An optional signal-sensing domain (SSD) (e.g., a ligand-binding domain), which senses external signals and, in response, transmits these signals to the rest of the transcription complex, resulting in up- or down-regulation of gene expression. Also, the DBD and signal-sensing domains may reside on separate proteins that associate within the transcription complex to regulate gene expression.
DNA-binding domain
The portion (domain) of the transcription factor that binds DNA is called its DNA-binding domain. Below is a partial list of some of the major families of DNA-binding domains/transcription factors:
Response elements
The DNA sequence that a transcription factor binds to is called a transcription factor-binding site or response element.
Transcription factors interact with their binding sites using a combination of electrostatic (of which hydrogen bonds are a special case) and Van der Waals forces. Due to the nature of these chemical interactions, most transcription factors bind DNA in a sequence specific manner. However, not all bases in the transcription factor-binding site may actually interact with the transcription factor. In addition, some of these interactions may be weaker than others. Thus, transcription factors do not bind just one sequence but are capable of binding a subset of closely related sequences, each with a different strength of interaction.
For example, although the consensus binding site for the TATA-binding protein (TBP) is TATAAAA, the TBP transcription factor can also bind similar sequences such as TATATAT or TATATAA.
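A minimal sketch of this kind of degenerate matching, assuming a simple mismatch count against the consensus (real binding preferences are quantitative and position-dependent, so a fixed mismatch count is only an illustration):

    # Count mismatches between candidate sites and the TBP consensus TATAAAA.
    CONSENSUS = "TATAAAA"

    def mismatches(site, consensus=CONSENSUS):
        return sum(a != b for a, b in zip(site, consensus))

    for site in ["TATAAAA", "TATATAA", "TATATAT", "GCGCGCG"]:
        print(site, mismatches(site))
    # Prints 0, 1, 2, and 7 mismatches respectively: the first three are
    # plausible TBP sites with graded similarity, the last is not.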
Because transcription factors can bind a set of related sequences and these sequences tend to be short, potential transcription factor binding sites can occur by chance if the DNA sequence is long enough. It is unlikely, however, that a transcription factor will bind all compatible sequences in the genome of the cell. Other constraints, such as DNA accessibility in the cell or availability of cofactors may also help dictate where a transcription factor will actually bind. Thus, given the genome sequence, it is still difficult to predict where a transcription factor will actually bind in a living cell.
Additional recognition specificity, however, may be obtained through the use of more than one DNA-binding domain (for example tandem DBDs in the same transcription factor or through dimerization of two transcription factors) that bind to two or more adjacent sequences of DNA.
Clinical significance
Transcription factors are of clinical significance for at least two reasons: (1) mutations can be associated with specific diseases, and (2) they can be targets of medications.
Disorders
Due to their important roles in development, intercellular signaling, and cell cycle, some human diseases have been associated with mutations in transcription factors.
Many transcription factors are either tumor suppressors or oncogenes, and, thus, mutations or aberrant regulation of them is associated with cancer. Three groups of transcription factors are known to be important in human cancer: (1) the NF-kappaB and AP-1 families, (2) the STAT family and (3) the steroid receptors.
Below are a few of the better-studied examples:
Potential drug targets
Approximately 10% of currently prescribed drugs directly target the nuclear receptor class of transcription factors. Examples include tamoxifen and bicalutamide for the treatment of breast and prostate cancer, respectively, and various types of anti-inflammatory and anabolic steroids. In addition, transcription factors are often indirectly modulated by drugs through signaling cascades. It might be possible to directly target other less-explored transcription factors such as NF-κB with drugs. Transcription factors outside the nuclear receptor family are thought to be more difficult to target with small molecule therapeutics since it is not clear that they are "drugable" but progress has been made on Pax2 and the notch pathway.
Role in evolution
Gene duplications have played a crucial role in the evolution of species. This applies particularly to transcription factors. Once a transcription factor gene occurs as a duplicate, mutations can accumulate in one copy without negatively affecting the regulation of downstream targets. However, changes in the DNA binding specificity of the single-copy Leafy transcription factor, which occurs in most land plants, have recently been elucidated. In that respect, a single-copy transcription factor can undergo a change of specificity through a promiscuous intermediate without losing function. Similar mechanisms have been proposed in the context of alternative phylogenetic hypotheses and the role of transcription factors in the evolution of species.
Role in biocontrol activity
Transcription factors also play a role in the stress-resistance traits that underpin successful biocontrol activity. In the biocontrol yeast Papiliotrema terrestris strain LS28, resistance to oxidative stress and alkaline pH sensing are mediated by the transcription factors Yap1 and Rim101; molecular studies of these factors have clarified the genetic mechanisms underlying biocontrol activity, supporting disease management programs based on biological and integrated control.
Analysis
There are different technologies available to analyze transcription factors. On the genomic level, DNA-sequencing and database research are commonly used. The protein version of the transcription factor is detectable by using specific antibodies. The sample is detected on a western blot. By using electrophoretic mobility shift assay (EMSA), the activation profile of transcription factors can be detected. A multiplex approach for activation profiling is a TF chip system where several different transcription factors can be detected in parallel.
The most commonly used method for identifying transcription factor binding sites is chromatin immunoprecipitation (ChIP). This technique relies on chemical fixation of chromatin with formaldehyde, followed by co-precipitation of DNA and the transcription factor of interest using an antibody that specifically targets that protein. The DNA sequences can then be identified by microarray or high-throughput sequencing (ChIP-seq) to determine transcription factor binding sites. If no antibody is available for the protein of interest, DamID may be a convenient alternative.
Classes
As described in more detail below, transcription factors may be classified by their (1) mechanism of action, (2) regulatory function, or (3) sequence homology (and hence structural similarity) in their DNA-binding domains. They are also classified by 3D structure of their DBD and the way it contacts DNA.
Mechanistic
There are two mechanistic classes of transcription factors:
General transcription factors are involved in the formation of a preinitiation complex. The most common are abbreviated as TFIIA, TFIIB, TFIID, TFIIE, TFIIF, and TFIIH. They are ubiquitous and interact with the core promoter region surrounding the transcription start site(s) of all class II genes.
Upstream transcription factors are proteins that bind somewhere upstream of the initiation site to stimulate or repress transcription. These are roughly synonymous with specific transcription factors, because they vary considerably depending on what recognition sequences are present in the proximity of the gene.
Functional
Transcription factors have been classified according to their regulatory function:
I. Constitutive – present in all cells at all times, constantly active, all being activators. Very likely playing an important facilitating role in the transcription of many chromosomal genes, possibly in genes that seem to be always transcribed (e.g., structural proteins like tubulin and actin, and ubiquitous metabolic enzymes such as glyceraldehyde phosphate dehydrogenase (GAPDH)). E.g.: general transcription factors, Sp1, NF1, CCAAT
II. Regulatory (conditionally active) – require activation.
II.A Developmental (cell-type specific) – beginning in a fertilized egg. Once expressed, require no additional activation. E.g.:GATA, HNF, PIT-1, MyoD, Myf5, Hox, Winged Helix
II.B Signal-dependent – may be either developmentally restricted in their expression or present in most or all cells, but all are inactive (or minimally active) until cells containing such proteins are exposed to the appropriate intra- or extracellular signal.
II.B.1 Extracellular ligand (endocrine or paracrine)-dependent – nuclear receptors.
II.B.2 Intracellular ligand (autocrine)-dependent – activated by small intracellular molecules. E.g.: SREBP, p53, orphan nuclear receptors.
II.B.3 Cell surface receptor-ligand interaction-dependent – activated by second messenger signaling cascades.
II.B.3.a Constitutive nuclear factors activated by serine phosphorylation – residing within the nucleus. The serine phosphorylation enzymes can be activated by two main routes:
G protein-coupled receptors upon ligand binding increase intracellular levels of second messengers (cAMP, IP3, DAG, calcium) which, in turn, activate protein serine-threonine kinase enzymes (such as PKA, PKC).
Receptor tyrosine kinases upon ligand binding trigger other pathways that finally terminate in serine phosphorylation of the abundant resident nuclear transcription factors.
Examples include: CREB, AP-1, Mef2
II.B.3.b Latent cytoplasmic factors – residing in the cytoplasm when inactive. Structurally and chemically very diverse group, and so are their activation pathways. E.g.: STAT, R-SMAD, NF-κB, Notch, TUBBY, NFAT
Structural
Transcription factors are often classified based on the sequence similarity, and hence the tertiary structure, of their DNA-binding domains. The following classification is based on the 3D structure of the DBD and the way it contacts DNA. It was first developed for human TFs and later extended to rodents and to plants.
1 Superclass: Basic Domains
1.1 Class: Leucine zipper factors (bZIP)
1.1.1 Family: AP-1(-like) components; includes (c-Fos/c-Jun)
1.1.2 Family: CREB
1.1.3 Family: C/EBP-like factors
1.1.4 Family: bZIP / PAR
1.1.5 Family: Plant G-box binding factors
1.1.6 Family: ZIP only
1.2 Class: Helix-loop-helix factors (bHLH)
1.2.1 Family: Ubiquitous (class A) factors
1.2.2 Family: Myogenic transcription factors (MyoD)
1.2.3 Family: Achaete-Scute
1.2.4 Family: Tal/Twist/Atonal/Hen
1.3 Class: Helix-loop-helix / leucine zipper factors (bHLH-ZIP)
1.3.1 Family: Ubiquitous bHLH-ZIP factors; includes USF (USF1, USF2); SREBP (SREBP)
1.3.2 Family: Cell-cycle controlling factors; includes c-Myc
1.4 Class: NF-1
1.4.1 Family: NF-1 (A, B, C, X)
1.5 Class: RF-X
1.5.1 Family: RF-X (1, 2, 3, 4, 5, ANK)
1.6 Class: bHSH
2 Superclass: Zinc-coordinating DNA-binding domains
2.1 Class: Cys4 zinc finger of nuclear receptor type
2.1.1 Family: Steroid hormone receptors
2.1.2 Family: Thyroid hormone receptor-like factors
2.2 Class: diverse Cys4 zinc fingers
2.2.1 Family: GATA-Factors
2.3 Class: Cys2His2 zinc finger domain
2.3.1 Family: Ubiquitous factors, includes TFIIIA, Sp1
2.3.2 Family: Developmental / cell cycle regulators; includes Krüppel
2.3.4 Family: Large factors with NF-κB-like binding properties
2.4 Class: Cys6 cysteine-zinc cluster
2.5 Class: Zinc fingers of alternating composition
3 Superclass: Helix-turn-helix
3.1 Class: Homeo domain
3.1.1 Family: Homeo domain only; includes Ubx
3.1.2 Family: POU domain factors; includes Oct
3.1.3 Family: Homeo domain with LIM region
3.1.4 Family: homeo domain plus zinc finger motifs
3.2 Class: Paired box
3.2.1 Family: Paired plus homeo domain
3.2.2 Family: Paired domain only
3.3 Class: Fork head / winged helix
3.3.1 Family: Developmental regulators; includes forkhead
3.3.2 Family: Tissue-specific regulators
3.3.3 Family: Cell-cycle controlling factors
3.3.0 Family: Other regulators
3.4 Class: Heat Shock Factors
3.4.1 Family: HSF
3.5 Class: Tryptophan clusters
3.5.1 Family: Myb
3.5.2 Family: Ets-type
3.5.3 Family: Interferon regulatory factors
3.6 Class: TEA ( transcriptional enhancer factor) domain
3.6.1 Family: TEA (TEAD1, TEAD2, TEAD3, TEAD4)
4 Superclass: beta-Scaffold Factors with Minor Groove Contacts
4.1 Class: RHR (Rel homology region)
4.1.1 Family: Rel/ankyrin; NF-kappaB
4.1.2 Family: ankyrin only
4.1.3 Family: NFAT (Nuclear Factor of Activated T-cells) (NFATC1, NFATC2, NFATC3)
4.2 Class: STAT
4.2.1 Family: STAT
4.3 Class: p53
4.3.1 Family: p53
4.4 Class: MADS box
4.4.1 Family: Regulators of differentiation; includes (Mef2)
4.4.2 Family: Responders to external signals; includes SRF (serum response factor)
4.4.3 Family: Metabolic regulators (ARG80)
4.5 Class: beta-Barrel alpha-helix transcription factors
4.6 Class: TATA binding proteins
4.6.1 Family: TBP
4.7 Class: HMG-box
4.7.1 Family: SOX genes, SRY
4.7.2 Family: TCF-1 (TCF1)
4.7.3 Family: HMG2-related, SSRP1
4.7.4 Family: UBF
4.7.5 Family: MATA
4.8 Class: Heteromeric CCAAT factors
4.8.1 Family: Heteromeric CCAAT factors
4.9 Class: Grainyhead
4.9.1 Family: Grainyhead
4.10 Class: Cold-shock domain factors
4.10.1 Family: csd
4.11 Class: Runt
4.11.1 Family: Runt
0 Superclass: Other Transcription Factors
0.1 Class: Copper fist proteins
0.2 Class: HMGI(Y) (HMGA1)
0.2.1 Family: HMGI(Y)
0.3 Class: Pocket domain
0.4 Class: E1A-like factors
0.5 Class: AP2/EREBP-related factors
0.5.1 Family: AP2
0.5.2 Family: EREBP
0.5.3 Superfamily: AP2/B3
0.5.3.1 Family: ARF
0.5.3.2 Family: ABI
0.5.3.3 Family: RAV
Transcription factor databases
There are numerous databases cataloging information about transcription factors, but their scope and utility vary dramatically. Some may contain only information about the actual proteins, some about their binding sites, or about their target genes. Examples include the following:
footprintDB: a metadatabase of multiple databases, including JASPAR and others
JASPAR: database of transcription factor binding sites for eukaryotes
PlantTFD: Plant transcription factor database
TcoF-DB: Database of transcription co-factors and transcription factor interactions
TFcheckpoint: database of human, mouse and rat TF candidates
transcriptionfactor.org (now commercial, selling reagents)
MethMotif.org: An integrative cell-specific database of transcription factor binding motifs coupled with DNA methylation profiles.
| Biology and health sciences | Molecular biology | Biology |
31482 | https://en.wikipedia.org/wiki/Tangent | Tangent | In geometry, the tangent line (or simply tangent) to a plane curve at a given point is, intuitively, the straight line that "just touches" the curve at that point. Leibniz defined it as the line through a pair of infinitely close points on the curve. More precisely, a straight line is tangent to the curve $y = f(x)$ at a point $x = c$ if the line passes through the point $(c, f(c))$ on the curve and has slope $f'(c)$, where $f'$ is the derivative of $f$. A similar definition applies to space curves and curves in n-dimensional Euclidean space.
The point where the tangent line and the curve meet or intersect is called the point of tangency. The tangent line is said to be "going in the same direction" as the curve, and is thus the best straight-line approximation to the curve at that point.
The tangent line to a point on a differentiable curve can also be thought of as a tangent line approximation, the graph of the affine function that best approximates the original function at the given point.
Similarly, the tangent plane to a surface at a given point is the plane that "just touches" the surface at that point. The concept of a tangent is one of the most fundamental notions in differential geometry and has been extensively generalized; .
The word "tangent" comes from the Latin , "to touch".
History
Euclid makes several references to the tangent (ephaptoménē) to a circle in book III of the Elements (c. 300 BC). In Apollonius' work Conics (c. 225 BC) he defines a tangent as being a line such that no other straight line could fall between it and the curve.
Archimedes (c. 287 – c. 212 BC) found the tangent to an Archimedean spiral by considering the path of a point moving along the curve.
In the 1630s Fermat developed the technique of adequality to calculate tangents and other problems in analysis and used this to calculate tangents to the parabola. The technique of adequality is similar to taking the difference between $f(x+h)$ and $f(x)$ and dividing by a power of $h$. Independently Descartes used his method of normals based on the observation that the radius of a circle is always normal to the circle itself.
These methods led to the development of differential calculus in the 17th century. Many people contributed. Roberval discovered a general method of drawing tangents, by considering a curve as described by a moving point whose motion is the resultant of several simpler motions.
René-François de Sluse and Johannes Hudde found algebraic algorithms for finding tangents. Further developments included those of John Wallis and Isaac Barrow, leading to the theory of Isaac Newton and Gottfried Leibniz.
An 1828 definition of a tangent was "a right line which touches a curve, but which when produced, does not cut it". This old definition prevents inflection points from having any tangent. It has been dismissed, and the modern definitions are equivalent to those of Leibniz, who defined the tangent line as the line through a pair of infinitely close points on the curve; in modern terminology, this is expressed as: the tangent to a curve at a point $p$ on the curve is the limit of the line passing through two points of the curve as these two points tend to $p$.
Tangent line to a plane curve
The intuitive notion that a tangent line "touches" a curve can be made more explicit by considering the sequence of straight lines (secant lines) passing through two points, A and B, that lie on the curve. The tangent at A is the limit as point B approaches A. The existence and uniqueness of the tangent line depends on a certain type of mathematical smoothness, known as "differentiability". For example, if two circular arcs meet at a sharp point (a vertex) then there is no uniquely defined tangent at the vertex, because the limit of the progression of secant lines depends on the direction in which "point B" approaches the vertex.
At most points, the tangent touches the curve without crossing it (though it may, when continued, cross the curve at other places away from the point of tangency). A point where the tangent (at this point) crosses the curve is called an inflection point. Circles, parabolas, hyperbolas and ellipses do not have any inflection point, but more complicated curves do, such as the graph of a cubic function, which has exactly one inflection point, or a sinusoid, which has two inflection points per period of the sine.
Conversely, it may happen that the curve lies entirely on one side of a straight line passing through a point on it, and yet this straight line is not a tangent line. This is the case, for example, for a line passing through the vertex of a triangle and not intersecting it otherwise—where the tangent line does not exist for the reasons explained above. In convex geometry, such lines are called supporting lines.
Analytical approach
The geometrical idea of the tangent line as the limit of secant lines serves as the motivation for analytical methods that are used to find tangent lines explicitly. The question of finding the tangent line to a graph, or the tangent line problem, was one of the central questions leading to the development of calculus in the 17th century. In the second book of his Geometry, René Descartes said of the problem of constructing the tangent to a curve, "And I dare say that this is not only the most useful and most general problem in geometry that I know, but even that I have ever desired to know".
Intuitive description
Suppose that a curve is given as the graph of a function, $y = f(x)$. To find the tangent line at the point $p = (a, f(a))$, consider another nearby point $q = (a + h, f(a + h))$ on the curve. The slope of the secant line passing through $p$ and $q$ is equal to the difference quotient
$$\frac{f(a+h) - f(a)}{h}.$$
As the point $q$ approaches $p$, which corresponds to making $h$ smaller and smaller, the difference quotient should approach a certain limiting value $k$, which is the slope of the tangent line at the point $p$. If $k$ is known, the equation of the tangent line can be found in the point-slope form:
$$y - f(a) = k(x - a).$$
More rigorous description
To make the preceding reasoning rigorous, one has to explain what is meant by the difference quotient approaching a certain limiting value k. The precise mathematical formulation was given by Cauchy in the 19th century and is based on the notion of limit. Suppose that the graph does not have a break or a sharp edge at p and it is neither plumb nor too wiggly near p. Then there is a unique value of k such that, as h approaches 0, the difference quotient gets closer and closer to k, and the distance between them becomes negligible compared with the size of h, if h is small enough. This leads to the definition of the slope of the tangent line to the graph as the limit of the difference quotients for the function f. This limit is the derivative of the function f at x = a, denoted f ′(a). Using derivatives, the equation of the tangent line can be stated as follows:
$$y = f(a) + f'(a)(x - a).$$
Calculus provides rules for computing the derivatives of functions that are given by formulas, such as the power function, trigonometric functions, exponential function, logarithm, and their various combinations. Thus, equations of the tangents to graphs of all these functions, as well as many others, can be found by the methods of calculus.
How the method can fail
Calculus also demonstrates that there are functions and points on their graphs for which the limit determining the slope of the tangent line does not exist. For these points the function f is non-differentiable. There are two possible reasons for the method of finding the tangents based on the limits and derivatives to fail: either the geometric tangent exists, but it is a vertical line, which cannot be given in the point-slope form since it does not have a slope, or the graph exhibits one of three behaviors that precludes a geometric tangent.
The graph $y = x^{1/3}$ illustrates the first possibility: here the difference quotient at a = 0 is equal to $h^{1/3}/h = h^{-2/3}$, which becomes very large as h approaches 0. This curve has a tangent line at the origin that is vertical.
The graph $y = x^{2/3}$ illustrates another possibility: this graph has a cusp at the origin. This means that, when h approaches 0, the difference quotient at a = 0 approaches plus or minus infinity depending on the sign of h. Thus both branches of the curve are near to the half of the vertical line for which y ≥ 0, but neither is near to the negative part of this line. Essentially, there is no tangent at the origin in this case, but in some contexts one may consider this line as a tangent, and even, in algebraic geometry, as a double tangent.
The graph y = |x| of the absolute value function consists of two straight lines with different slopes joined at the origin. As a point q approaches the origin from the right, the secant line always has slope 1. As a point q approaches the origin from the left, the secant line always has slope −1. Therefore, there is no unique tangent to the graph at the origin. Having two different (but finite) slopes is called a corner.
Finally, since differentiability implies continuity, the contrapositive states that discontinuity implies non-differentiability. Any such jump or point discontinuity will have no tangent line. This includes cases where one slope approaches positive infinity while the other approaches negative infinity, leading to an infinite jump discontinuity.
Equations
When the curve is given by y = f(x) then the slope of the tangent is

$\frac{dy}{dx} = f'(x),$

so by the point–slope formula the equation of the tangent line at (X, Y) is

$y - Y = f'(X)\,(x - X),$

where (x, y) are the coordinates of any point on the tangent line, and where the derivative is evaluated at x = X.
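Such a computation can also be carried out with a computer algebra system. The following minimal sketch uses the SymPy Python library; the particular curve and point of tangency are illustrative choices only:

import sympy as sp

x = sp.symbols('x')
f = x**3 - 2*x                       # example curve y = f(x)
X = 1                                # x-coordinate of the point of tangency

slope = sp.diff(f, x).subs(x, X)     # f'(X)
Y = f.subs(x, X)                     # f(X)
tangent = Y + slope * (x - X)        # point-slope form, solved for y
print(sp.expand(tangent))            # prints x - 2

Here the tangent line to y = x^3 − 2x at (1, −1) is y = x − 2, in agreement with the point–slope formula above.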
When the curve is given by y = f(x), the tangent line's equation can also be found by using polynomial division to divide f(x) by (x − X)^2; if the remainder is denoted by g(x), then the equation of the tangent line is given by

$y = g(x).$
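For instance, dividing f(x) = x^3 by (x − 1)^2 gives

$x^3 = (x - 1)^2 (x + 2) + (3x - 2),$

so the remainder is g(x) = 3x − 2, and y = 3x − 2 is the tangent line to y = x^3 at (1, 1).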
When the equation of the curve is given in the form f(x, y) = 0 then the value of the slope can be found by implicit differentiation, giving

$\frac{dy}{dx} = -\frac{\partial f/\partial x}{\partial f/\partial y}.$

The equation of the tangent line at a point (X, Y) such that f(X, Y) = 0 is then

$\frac{\partial f}{\partial x}(X, Y)\,(x - X) + \frac{\partial f}{\partial y}(X, Y)\,(y - Y) = 0.$
This equation remains true if

$\frac{\partial f}{\partial y}(X, Y) = 0 \quad \text{but} \quad \frac{\partial f}{\partial x}(X, Y) \neq 0,$

in which case the slope of the tangent is infinite. If, however,

$\frac{\partial f}{\partial y}(X, Y) = 0 \quad \text{and} \quad \frac{\partial f}{\partial x}(X, Y) = 0,$

the tangent line is not defined and the point (X, Y) is said to be singular.
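For example, for the ellipse f(x, y) = x^2/a^2 + y^2/b^2 − 1 = 0, the partial derivatives are ∂f/∂x = 2x/a^2 and ∂f/∂y = 2y/b^2, so the tangent line at a point (X, Y) on the ellipse is

$\frac{2X}{a^2}(x - X) + \frac{2Y}{b^2}(y - Y) = 0,$

which, using X^2/a^2 + Y^2/b^2 = 1, simplifies to $\frac{Xx}{a^2} + \frac{Yy}{b^2} = 1$.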
For algebraic curves, computations may be simplified somewhat by converting to homogeneous coordinates. Specifically, let the homogeneous equation of the curve be g(x, y, z) = 0 where g is a homogeneous function of degree n. Then, if (X, Y, Z) lies on the curve, Euler's theorem implies

$X\,\frac{\partial g}{\partial x} + Y\,\frac{\partial g}{\partial y} + Z\,\frac{\partial g}{\partial z} = n \cdot g(X, Y, Z) = 0.$

It follows that the homogeneous equation of the tangent line is

$\frac{\partial g}{\partial x}(X, Y, Z) \cdot x + \frac{\partial g}{\partial y}(X, Y, Z) \cdot y + \frac{\partial g}{\partial z}(X, Y, Z) \cdot z = 0.$
The equation of the tangent line in Cartesian coordinates can be found by setting z=1 in this equation.
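For example, the circle x^2 + y^2 − r^2 = 0 has homogeneous equation g(x, y, z) = x^2 + y^2 − r^2 z^2 = 0, so the homogeneous tangent line at (X, Y, Z) is

$2X\,x + 2Y\,y - 2r^2 Z\,z = 0,$

and setting z = 1 and Z = 1 recovers the familiar tangent equation $Xx + Yy = r^2$.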
To apply this to algebraic curves, write f(x, y) as

$f = u_n + u_{n-1} + \dots + u_1 + u_0$

where each u_r is the sum of all terms of degree r. The homogeneous equation of the curve is then

$g = u_n + u_{n-1}\,z + \dots + u_1\,z^{n-1} + u_0\,z^n.$

Applying the equation above and setting z = 1 produces

$\frac{\partial f}{\partial x}(X, Y) \cdot x + \frac{\partial f}{\partial y}(X, Y) \cdot y + \frac{\partial g}{\partial z}(X, Y, 1) = 0$
as the equation of the tangent line. The equation in this form is often simpler to use in practice since no further simplification is needed after it is applied.
If the curve is given parametrically by

$x = x(t), \quad y = y(t),$

then the slope of the tangent is

$\frac{dy}{dx} = \frac{y'(t)}{x'(t)},$

giving the equation for the tangent line at t = T, where X = x(T) and Y = y(T), as

$x'(T)\,(y - Y) = y'(T)\,(x - X).$
If

$x'(T) = y'(T) = 0,$

the tangent line is not defined. However, it may occur that the tangent line exists and may be computed from an implicit equation of the curve.
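For example, for the unit circle parametrized by x = cos t, y = sin t, one has x′(T) = −sin T and y′(T) = cos T, so the tangent line at t = T is

$-\sin T\,(y - \sin T) = \cos T\,(x - \cos T),$

which simplifies to $x\cos T + y\sin T = 1$.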
Normal line to a curve
The line perpendicular to the tangent line to a curve at the point of tangency is called the normal line to the curve at that point. The slopes of perpendicular lines have product −1, so if the equation of the curve is y = f(x) then the slope of the normal line is

$-\frac{1}{f'(x)},$

and it follows that the equation of the normal line at (X, Y) is

$x - X + f'(X)\,(y - Y) = 0.$

Similarly, if the equation of the curve has the form f(x, y) = 0 then the equation of the normal line is given by

$\frac{\partial f}{\partial y}(X, Y)\,(x - X) - \frac{\partial f}{\partial x}(X, Y)\,(y - Y) = 0.$
If the curve is given parametrically by

$x = x(t), \quad y = y(t),$

then the equation of the normal line is

$x'(T)\,(x - X) + y'(T)\,(y - Y) = 0.$
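For example, the normal line to the parabola y = x^2 at the point (1, 1) satisfies

$x - 1 + 2\,(y - 1) = 0,$

that is, x + 2y = 3; its slope, −1/2, is the negative reciprocal of the tangent slope f′(1) = 2.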
Angle between curves
The angle between two curves at a point where they intersect is defined as the angle between their tangent lines at that point. More specifically, two curves are said to be tangent at a point if they have the same tangent line at that point, and orthogonal if their tangent lines are orthogonal.
Multiple tangents at a point
The formulas above fail when the point is a singular point. In this case there may be two or more branches of the curve that pass through the point, each branch having its own tangent line. When the point is the origin, the equations of these lines can be found for algebraic curves by factoring the equation formed by eliminating all but the lowest degree terms from the original equation. Since any point can be made the origin by a change of variables (or by translating the curve) this gives a method for finding the tangent lines at any singular point.
For example, the equation of the limaçon trisectrix shown to the right is

$(x^2 + y^2 - 2ax)^2 = a^2(x^2 + y^2).$

Expanding this and eliminating all but terms of degree 2 gives

$a^2(3x^2 - y^2) = 0,$

which, when factored, becomes

$y = \pm\sqrt{3}\,x.$
So these are the equations of the two tangent lines through the origin.
When the curve is not self-crossing, the tangent at a reference point may still not be uniquely defined because the curve is not differentiable at that point although it is differentiable elsewhere. In this case the left and right derivatives are defined as the limits of the derivative as the point at which it is evaluated approaches the reference point from respectively the left (lower values) or the right (higher values). For example, the curve y = |x| is not differentiable at x = 0: its left and right derivatives have respective slopes −1 and 1; the tangents at that point with those slopes are called the left and right tangents.
Sometimes the slopes of the left and right tangent lines are equal, so the tangent lines coincide. This is true, for example, for the curve $y = x^{2/3}$, for which both the left and right derivatives at x = 0 are infinite; both the left and right tangent lines have equation x = 0.
Tangent line to a space curve
Tangent circles
Two distinct circles lying in the same plane are said to be tangent to each other if they meet at exactly one point.
If points in the plane are described using Cartesian coordinates, then two circles with radii $r_1$ and $r_2$ and centers $(x_1, y_1)$ and $(x_2, y_2)$ are tangent to each other whenever

$(x_1 - x_2)^2 + (y_1 - y_2)^2 = (r_1 \pm r_2)^2.$
The two circles are called externally tangent if the distance between their centres is equal to the sum of their radii,

$\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} = r_1 + r_2,$

or internally tangent if the distance between their centres is equal to the difference between their radii:

$\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} = \left|r_1 - r_2\right|.$
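For example, the circle of radius 3 centred at (0, 0) and the circle of radius 2 centred at (5, 0) are externally tangent, since the distance between their centres, 5, equals 3 + 2; if the second centre is instead placed at (1, 0), the circles are internally tangent, since 1 = 3 − 2. In both cases the single common point is (3, 0).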
Tangent plane to a surface
The tangent plane to a surface at a given point p is defined in an analogous way to the tangent line in the case of curves. It is the best approximation of the surface by a plane at p, and can be obtained as the limiting position of the planes passing through 3 distinct points on the surface close to p as these points converge to p. Mathematically, if the surface is given by a function z = f(x, y), the equation of the tangent plane at the point (x_0, y_0, f(x_0, y_0)) can be expressed as:

$z = f(x_0, y_0) + f_x(x_0, y_0)\,(x - x_0) + f_y(x_0, y_0)\,(y - y_0).$

Here, f_x and f_y are the partial derivatives of the function with respect to x and y respectively, evaluated at the point (x_0, y_0). In essence, the tangent plane captures the local behavior of the surface at the specific point p. It is a fundamental concept used in calculus and differential geometry, crucial for understanding how functions change locally on surfaces.
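For example, for the paraboloid z = x^2 + y^2 at the point (1, 1, 2), the partial derivatives are f_x(1, 1) = 2 and f_y(1, 1) = 2, so the tangent plane is

$z = 2 + 2(x - 1) + 2(y - 1) = 2x + 2y - 2.$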
Higher-dimensional manifolds
More generally, there is a k-dimensional tangent space at each point of a k-dimensional manifold in the n-dimensional Euclidean space.
| Mathematics | Two-dimensional space | null |
31491 | https://en.wikipedia.org/wiki/Theoretical%20chemistry | Theoretical chemistry | Theoretical chemistry is the branch of chemistry which develops theoretical generalizations that are part of the theoretical arsenal of modern chemistry: for example, the concepts of chemical bonding, chemical reaction, valence, the surface of potential energy, molecular orbitals, orbital interactions, and molecule activation.
Overview
Theoretical chemistry unites principles and concepts common to all branches of chemistry. Within the framework of theoretical chemistry, chemical laws, principles and rules are systematized, refined and detailed, and a hierarchy among them is constructed. The central place in theoretical chemistry is occupied by the doctrine of the interconnection of the structure and properties of molecular systems. It uses mathematical and physical methods to explain the structures and dynamics of chemical systems and to correlate, understand, and predict their thermodynamic and kinetic properties. In the most general sense, it is the explanation of chemical phenomena by the methods of theoretical physics. In contrast to theoretical physics, owing to the high complexity of chemical systems, theoretical chemistry often uses semi-empirical and empirical methods in addition to approximate mathematical methods.
In recent years, it has consisted primarily of quantum chemistry, i.e., the application of quantum mechanics to problems in chemistry. Other major components include molecular dynamics, statistical thermodynamics and theories of electrolyte solutions, reaction networks, polymerization, catalysis, molecular magnetism and spectroscopy.
Modern theoretical chemistry may be roughly divided into the study of chemical structure and the study of chemical dynamics. The former includes studies of: electronic structure, potential energy surfaces, and force fields; vibrational-rotational motion; equilibrium properties of condensed-phase systems and macro-molecules. Chemical dynamics includes: bimolecular kinetics and the collision theory of reactions and energy transfer; unimolecular rate theory and metastable states; condensed-phase and macromolecular aspects of dynamics.
Branches of theoretical chemistry
Quantum chemistry The application of quantum mechanics or fundamental interactions to chemical and physico-chemical problems. Spectroscopic and magnetic properties are among the most frequently modelled.
Computational chemistry The application of scientific computing to chemistry, involving approximation schemes such as Hartree–Fock, post-Hartree–Fock, density functional theory, semiempirical methods (such as PM3) or force field methods. Molecular shape is the most frequently predicted property. Computers can not only predict vibrational spectra and vibronic coupling, but also acquire and Fourier-transform infrared data into frequency information. The comparison with predicted vibrations supports the predicted shape.
Molecular modelling Methods for modelling molecular structures without necessarily referring to quantum mechanics. Examples are molecular docking, protein-protein docking, drug design, combinatorial chemistry. The fitting of shape and electric potential are the driving factor in this graphical approach.
Molecular dynamics Application of classical mechanics for simulating the movement of the nuclei of an assembly of atoms and molecules. The rearrangement of molecules within an ensemble is controlled by van der Waals forces and promoted by temperature; a schematic sketch of such a propagation scheme follows this list.
Molecular mechanics Modeling of the intra- and inter-molecular interaction potential energy surfaces via potentials. The latter are usually parameterized from ab initio calculations.
Mathematical chemistry Discussion and prediction of the molecular structure using mathematical methods without necessarily referring to quantum mechanics. Topology is a branch of mathematics that allows researchers to predict properties of flexible finite-size bodies such as clusters.
Chemical kinetics Theoretical study of the dynamical systems associated to reactive chemicals, the activated complex and their corresponding differential equations.
Cheminformatics (also known as chemoinformatics) The use of computer and informational techniques to gather and process chemical information in order to solve problems in the field of chemistry.
Chemical engineering The application of chemistry to industrial processes to conduct research and development. This allows for development and improvement of new and existing products and manufacturing processes.
Chemical thermodynamics The study of the relationship between heat, work, and energy in chemical reactions and processes, with focus on entropy, enthalpy, and Gibbs free energy to understand reaction spontaneity and equilibrium.
Statistical mechanics The application of statistical mechanics to predict and explain thermodynamic properties of chemical systems, connecting molecular behavior with macroscopic properties.
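To illustrate the kind of classical propagation a molecular dynamics simulation performs, the following is a minimal sketch of a velocity Verlet integrator with a Lennard-Jones pair potential, written in Python with NumPy; the reduced units, parameter values and function names are illustrative assumptions, not the interface of any particular simulation package:

import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    # Pairwise Lennard-Jones forces on each particle (reduced units).
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = np.dot(r, r)
            sr6 = (sigma**2 / d2) ** 3
            coef = 24.0 * eps * (2.0 * sr6**2 - sr6) / d2
            forces[i] += coef * r
            forces[j] -= coef * r
    return forces

def velocity_verlet(pos, vel, mass, dt, steps):
    # Propagate Newton's equations of motion for the nuclei.
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass   # first half-step velocity update
        pos += dt * vel              # full-step position update
        f = lj_forces(pos)           # forces at the new positions
        vel += 0.5 * dt * f / mass   # second half-step velocity update
    return pos, vel

Production molecular dynamics codes add further machinery, such as periodic boundary conditions, neighbour lists, cutoffs and thermostats; the sketch shows only the core integration loop.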
Closely related disciplines
Historically, the major field of application of theoretical chemistry has been in the following fields of research:
Atomic physics: The discipline dealing with electrons and atomic nuclei.
Molecular physics: The discipline of the electrons surrounding the molecular nuclei and of movement of the nuclei. This term usually refers to the study of molecules made of a few atoms in the gas phase. But some consider that molecular physics is also the study of bulk properties of chemicals in terms of molecules.
Physical chemistry and chemical physics: Chemistry investigated via physical methods like laser techniques, scanning tunneling microscope, etc. The formal distinction between both fields is that physical chemistry is a branch of chemistry while chemical physics is a branch of physics. In practice this distinction is quite vague.
Many-body theory: The discipline studying the effects which appear in systems with large number of constituents. It is based on quantum physics – mostly second quantization formalism – and quantum electrodynamics.
Hence, theoretical chemistry has emerged as a branch of research. With the rise of the density functional theory and other methods like molecular mechanics, the range of application has been extended to chemical systems which are relevant to other fields of chemistry and physics, including biochemistry, condensed matter physics, nanotechnology or molecular biology.
| Physical sciences | Subdisciplines | Chemistry |
31537 | https://en.wikipedia.org/wiki/Transuranium%20element | Transuranium element | The transuranium (or transuranic) elements are the chemical elements with atomic number greater than 92, which is the atomic number of uranium. All of them are radioactively unstable and decay into other elements. Except for neptunium and plutonium, which have been found in trace amounts in nature, none occur naturally on Earth and they are synthetic.
Overview
Of the elements with atomic numbers 1 to 92, most can be found in nature, having stable isotopes (such as oxygen) or very long-lived radioisotopes (such as uranium), or existing as common decay products of the decay of uranium and thorium (such as radon). The exceptions are technetium, promethium, astatine, and francium; all four occur in nature, but only in very minor branches of the uranium and thorium decay chains, and thus all save francium were first discovered by synthesis in the laboratory rather than in nature.
All elements with higher atomic numbers have been first discovered in the laboratory, with neptunium and plutonium later discovered in nature. They are all radioactive, with a half-life much shorter than the age of the Earth, so any primordial (i.e. present at the Earth's formation) atoms of these elements have long since decayed. Trace amounts of neptunium and plutonium form in some uranium-rich rock, and small amounts are produced during atmospheric tests of nuclear weapons. These two elements are generated by neutron capture in uranium ore with subsequent beta decays (e.g. 238U + n → 239U → 239Np → 239Pu).
All elements beyond plutonium are entirely synthetic; they are created in nuclear reactors or particle accelerators. The half-lives of these elements show a general trend of decreasing as atomic numbers increase. There are exceptions, however, including several isotopes of curium and dubnium. Some heavier elements in this series, around atomic numbers 110–114, are thought to break the trend and demonstrate increased nuclear stability, comprising the theoretical island of stability.
Transuranic elements are difficult and expensive to produce, and their prices increase rapidly with atomic number. As of 2008, the cost of weapons-grade plutonium was around $4,000/gram, and californium exceeded $60,000,000/gram. Einsteinium is the heaviest element that has been produced in macroscopic quantities.
Transuranic elements that have not been discovered, or have been discovered but are not yet officially named, use IUPAC's systematic element names. The naming of transuranic elements may be a source of controversy.
Discoveries
So far, essentially all transuranium elements have been discovered at four laboratories: Lawrence Berkeley National Laboratory (LBNL) in the United States (elements 93–101, 106, and joint credit for 103–105), the Joint Institute for Nuclear Research (JINR) in Russia (elements 102 and 114–118, and joint credit for 103–105), the GSI Helmholtz Centre for Heavy Ion Research in Germany (elements 107–112), and RIKEN in Japan (element 113).
The Radiation Laboratory (now LBNL) at University of California, Berkeley, led principally by Edwin McMillan, Glenn Seaborg, and Albert Ghiorso, during 1945-1974:
93. neptunium, Np, named after the planet Neptune, as it follows uranium and Neptune follows Uranus in the planetary sequence (1940).
94. plutonium, Pu, named after Pluto, following the same naming rule as it follows neptunium and Pluto follows Neptune in the Solar System (1940).
95. americium, Am, named, as an analog to europium, after the continent where it was first produced (1944).
96. curium, Cm, named after Pierre and Marie Curie, scientists who separated out the first radioactive elements (1944), as its lighter analog gadolinium was named after Johan Gadolin.
97. berkelium, Bk, named after Berkeley, where the University of California, Berkeley is located (1949).
98. californium, Cf, named after California, where the university is located (1950).
99. einsteinium, Es, named after Albert Einstein (1952).
100. fermium, Fm, named after Enrico Fermi, the physicist who produced the first controlled chain reaction (1952).
101. mendelevium, Md, named after Russian chemist Dmitri Mendeleev, credited for being the primary creator of the periodic table of the chemical elements (1955).
102. nobelium, No, named after Alfred Nobel (1958). The element was originally claimed by a team at the Nobel Institute in Sweden (1957) – though it later became apparent that the Swedish team had not discovered the element, the LBNL team decided to adopt their name nobelium. This discovery was also claimed by JINR, which doubted the LBNL claim, and named the element joliotium (Jl) after Frédéric Joliot-Curie (1965). IUPAC concluded that the JINR had been the first to convincingly synthesize the element (1965), but retained the name nobelium as deeply entrenched in the literature.
103. lawrencium, Lr, named after Ernest Lawrence, a physicist best known for development of the cyclotron, and the person for whom Lawrence Livermore National Laboratory and LBNL (which hosted the creation of these transuranium elements) are named (1961). This discovery was also claimed by the JINR (1965), which doubted the LBNL claim and proposed the name rutherfordium (Rf) after Ernest Rutherford. IUPAC concluded that credit should be shared, retaining the name lawrencium as entrenched in the literature.
104. rutherfordium, Rf, named after Ernest Rutherford, who was responsible for the concept of the atomic nucleus (1969). This discovery was also claimed by JINR, led principally by Georgy Flyorov: they named the element kurchatovium (Ku), after Igor Kurchatov. IUPAC concluded that credit should be shared, and adopted the LBNL name rutherfordium.
105. dubnium, Db, an element that is named after Dubna, where JINR is located. Originally named hahnium (Ha) in honor of Otto Hahn by the Berkeley group (1970). This discovery was also claimed by JINR, which named it nielsbohrium (Ns) after Niels Bohr. IUPAC concluded that credit should be shared, and renamed the element dubnium to honour the JINR team.
106. seaborgium, Sg, named after Glenn T. Seaborg. This name caused controversy because Seaborg was still alive, but it eventually became accepted by international chemists (1974). This discovery was also claimed by JINR. IUPAC concluded that the Berkeley team had been the first to convincingly synthesize the element.
The Gesellschaft für Schwerionenforschung (Society for Heavy Ion Research) in Darmstadt, Hessen, Germany, led principally by Gottfried Münzenberg, Peter Armbruster, and Sigurd Hofmann, during 1980-2000:
107. bohrium, Bh, named after Danish physicist Niels Bohr, important in the elucidation of the structure of the atom (1981). This discovery was also claimed by JINR. IUPAC concluded that the GSI had been the first to convincingly synthesise the element. The GSI team had originally proposed nielsbohrium (Ns) to resolve the naming dispute on element 105, but this was changed by IUPAC as there was no precedent for using a scientist's first name in an element name.
108. hassium, Hs, named after the Latin form of the name of Hessen, the German Bundesland where this work was performed (1984). This discovery was also claimed by JINR. IUPAC concluded that the GSI had been the first to convincingly synthesize the element, while acknowledging the pioneering work at JINR.
109. meitnerium, Mt, named after Lise Meitner, an Austrian physicist who was one of the earliest scientists to study nuclear fission (1982).
110. darmstadtium, Ds, named after Darmstadt, Germany, the city in which this work was performed (1994). This discovery was also claimed by JINR, which proposed the name becquerelium after Henri Becquerel, and by LBNL, which proposed the name hahnium to resolve the dispute on element 105 (despite having protested the reusing of established names for different elements). IUPAC concluded that GSI had been the first to convincingly synthesize the element.
111. roentgenium, Rg, named after Wilhelm Röntgen, discoverer of X-rays (1994).
112. copernicium, Cn, named after astronomer Nicolaus Copernicus (1996).
RIKEN in Wakō, Saitama, Japan, led principally by Kōsuke Morita:
113. nihonium, Nh, named after Japan (Nihon in Japanese) where the element was discovered (2004). This discovery was also claimed by JINR. IUPAC concluded that RIKEN had been the first to convincingly synthesize the element.
JINR in Dubna, Russia, led principally by Yuri Oganessian, in collaboration with several other labs including Lawrence Livermore National Laboratory (LLNL), since 2000:
114. flerovium, Fl, named after Soviet physicist Georgy Flyorov, founder of JINR (1999).
115. moscovium, Mc, named after Moscow Oblast, where the element was discovered (2004).
116. livermorium, Lv, named after Lawrence Livermore National Laboratory, a collaborator with JINR in the discovery (2000).
117. tennessine, Ts, after Tennessee, where the berkelium target needed for the synthesis of the element was manufactured (2010).
118. oganesson, Og, after Yuri Oganessian, who led the JINR team in its discovery of elements 114 to 118 (2002).
Superheavy elements
Superheavy elements (also known as superheavies or superheavy atoms, commonly abbreviated SHE) usually refer to the transactinide elements beginning with rutherfordium (atomic number 104). (Lawrencium, the first 6d element, is sometimes but not always included as well.) They have only been made artificially and currently serve no practical purpose because their short half-lives cause them to decay after a very short time, ranging from a few hours to just milliseconds, which also makes them extremely hard to study.
Superheavies have all been created since the latter half of the 20th century and are continually being created during the 21st century as technology advances. They are created through the bombardment of elements in a particle accelerator, in quantities on the atomic scale, and no method of mass creation has been found.
Applications
Transuranic elements may be used to synthesize superheavy elements. Elements of the island of stability have potentially important military applications, including the development of compact nuclear weapons. The potential everyday applications are vast; americium is used in devices such as smoke detectors and spectrometers.
| Physical sciences | Periods | Chemistry |
31596 | https://en.wikipedia.org/wiki/Typhoid%20fever | Typhoid fever | Typhoid fever, also known simply as typhoid, is a disease caused by Salmonella enterica serotype Typhi bacteria, also called Salmonella Typhi. Symptoms vary from mild to severe, and usually begin six to 30 days after exposure. Often there is a gradual onset of a high fever over several days. This is commonly accompanied by weakness, abdominal pain, constipation, headaches, and mild vomiting. Some people develop a skin rash with rose colored spots. In severe cases, people may experience confusion. Without treatment, symptoms may last weeks or months. Diarrhea may be severe, but is uncommon. Other people may carry it without being affected, but are still contagious. Typhoid fever is a type of enteric fever, along with paratyphoid fever. Salmonella enterica Typhi is believed to infect and replicate only within humans.
Typhoid is caused by the bacterium Salmonella enterica subsp. enterica serovar Typhi growing in the intestines, Peyer's patches, mesenteric lymph nodes, spleen, liver, gallbladder, bone marrow and blood. Typhoid is spread by eating or drinking food or water contaminated with the feces of an infected person. Risk factors include limited access to clean drinking water and poor sanitation. Those who have not yet been exposed to it and ingest contaminated drinking water or food are most at risk for developing symptoms. Only humans can be infected; there are no known animal reservoirs. Salmonella Typhi which causes typhoid fever is different than the other Salmonella bacteria that usually cause salmonellosis, a common type of food poisoning.
Diagnosis is performed by culturing and identifying S. Typhi from patient samples or detecting an immune response to the pathogen from blood samples. Recently, new advances in large-scale data collection and analysis have allowed researchers to develop better diagnostics, such as detecting changing abundances of small molecules in the blood that may specifically indicate typhoid fever. Diagnostic tools in regions where typhoid is most prevalent are quite limited in their accuracy and specificity, and the time required for a proper diagnosis, the increasing spread of antibiotic resistance, and the cost of testing are also hardships for under-resourced healthcare systems.
A typhoid vaccine can prevent about 40–90% of cases during the first two years. The vaccine may have some effect for up to seven years. For those at high risk or people traveling to areas where it is common, vaccination is recommended. Other efforts to prevent it include providing clean drinking water, good sanitation, and handwashing. Until an infection is confirmed as cleared, the infected person should not prepare food for others. Typhoid is treated with antibiotics such as azithromycin, fluoroquinolones, or third-generation cephalosporins. Resistance to these antibiotics has been developing, which has made treatment more difficult.
In 2015, 12.5 million new typhoid cases were reported. The disease is most common in India. Children are most commonly affected. Typhoid decreased in the developed world in the 1940s as a result of improved sanitation and the use of antibiotics. Every year about 400 cases are reported in the U.S. and an estimated 6,000 people have typhoid. In 2015, it resulted in about 149,000 deaths worldwide – down from 181,000 in 1990. Without treatment, the risk of death may be as high as 20%. With treatment, it is between 1% and 4%.
Typhus is a different disease, caused by unrelated species of bacteria. Owing to their similar symptoms, they were not recognized as distinct diseases until the 1800s. "Typhoid" means "resembling typhus".
Signs and symptoms
Classically, the progression of untreated typhoid fever has three distinct stages, each lasting about a week. Over the course of these stages, the patient becomes exhausted and emaciated.
In the first week, the body temperature rises slowly, and fever fluctuations are seen with relative bradycardia (Faget sign), malaise, headache, and cough. A bloody nose (epistaxis) is seen in a quarter of cases, and abdominal pain is also possible. A decrease in the number of circulating white blood cells (leukopenia) occurs with eosinopenia and relative lymphocytosis; blood cultures are positive for S. enterica subsp. enterica serovar Typhi. The Widal test is usually negative.
In the second week, the person is often too tired to get up, with high fever in plateau around 40 °C (104 °F) and bradycardia (sphygmothermic dissociation or Faget sign), classically with a dicrotic pulse wave. Delirium can occur, where the patient is often calm, but sometimes becomes agitated. This delirium has given typhoid the nickname "nervous fever". Rose spots appear on the lower chest and abdomen in around a third of patients. Rhonchi (rattling breathing sounds) are heard in the base of the lungs. The abdomen is distended and painful in the right lower quadrant, where a rumbling sound can be heard. Diarrhea can occur in this stage, but constipation is also common. The spleen and liver are enlarged (hepatosplenomegaly) and tender, and liver transaminases are elevated. The Widal test is strongly positive, with anti-O and anti-H antibodies. Blood cultures are sometimes still positive.
In the third week of typhoid fever, possible complications include:
The fever is still very high and oscillates very little over 24 hours. Dehydration ensues along with malnutrition, and the patient is delirious. A third of affected people develop a macular rash on the trunk.
Intestinal haemorrhage due to bleeding in congested Peyer's patches occurs; this can be very serious but is usually not fatal.
Intestinal perforation in the distal ileum is a critical complication and often fatal. It may occur without alarming symptoms until sepsis or diffuse peritonitis sets in.
Respiratory diseases such as pneumonia and acute bronchitis
Encephalitis
Neuropsychiatric symptoms (described as "muttering delirium" or "coma vigil"), with picking at bedclothes or imaginary objects.
Metastatic abscesses, cholecystitis, endocarditis, and osteitis.
Low platelet count (thrombocytopenia) is sometimes seen.
Causes
Bacteria
The Gram-negative bacterium that causes typhoid fever is Salmonella enterica subsp. enterica serovar Typhi. Based on the MLST subtyping scheme, the two main sequence types of the S. Typhi are ST1 and ST2, which are widespread globally. Global phylogeographical analysis showed dominance of a haplotype 58 (H58), which probably originated in India during the late 1980s and is now spreading through the world with multi-drug resistance. A more detailed genotyping scheme was reported in 2016 and is now widely used. This scheme reclassified the nomenclature of H58 to genotype 4.3.1.
Transmission
Unlike other strains of Salmonella, humans are the only known typhoid carriers. S. enterica subsp. enterica serovar Typhi is spread by the fecal-oral route from people who are infected and from asymptomatic carriers of the bacterium. An asymptomatic human carrier is someone who is still excreting typhoid bacteria in stool a year after the acute stage of the infection.
Diagnosis
Diagnosis is made by any blood, bone marrow, or stool cultures and with the Widal test (demonstration of antibodies against Salmonella antigens O-somatic and H-flagellar). In epidemics and less wealthy countries, after excluding malaria, dysentery, or pneumonia, a therapeutic trial time with chloramphenicol is generally undertaken while awaiting the results of the Widal test and blood and stool cultures.
Widal test
The Widal test is used to identify specific antibodies in the serum of people with typhoid by using antigen-antibody interactions.
In this test, the serum is mixed with a dead bacterial suspension of Salmonella with specific antigens. If the patient's serum contains antibodies against those antigens, they get attached to them, forming clumps. If clumping does not occur, the test is negative. The Widal test is time-consuming and prone to significant false positives. It may also be falsely negative in recently infected people. But unlike the Typhidot test, the Widal test quantifies the specimen with titres.
Rapid diagnostic tests
Rapid diagnostic tests such as Tubex, Typhidot, and Test-It have shown moderate diagnostic accuracy.
Typhidot
Typhidot is based on the presence of specific IgM and IgG antibodies to a specific 50 kDa OMP antigen. This test is carried out on a cellulose nitrate membrane where a specific S. Typhi outer membrane protein is attached as fixed test lines. It separately identifies IgM and IgG antibodies. IgM shows recent infection; IgG signifies remote infection.
The sample pad of this kit contains colloidal gold-anti-human IgG or gold-anti-human IgM. If the sample contains IgG and IgM antibodies against those antigens, they will react and turn red. The typhidot test becomes positive within 2–3 days of infection.
Two colored bands indicate a positive test. A single control band indicates a negative test. A single first fixed line or no band at all indicates an invalid test. Typhidot's biggest limitation is that it is not quantitative, just positive or negative.
Tubex test
The Tubex test contains two types of particles: brown magnetic particles coated with antigen and blue indicator particles coated with O9 antibody. During the test, if antibodies are present in the serum, they will attach to the brown magnetic particles and settle at the base, while the blue indicator particles remain in the solution, producing a blue color, which means the test is positive.
If the serum does not have an antibody in it, the blue particles attach to the brown particles and settle at the bottom, producing a colorless solution, which means the test is negative.
Prevention
Sanitation and hygiene are important to prevent typhoid. It can spread only in environments where human feces can come into contact with food or drinking water. Careful food preparation and washing of hands are crucial to prevent typhoid. Industrialization contributed greatly to the elimination of typhoid fever, as it eliminated the public health hazards associated with having horse manure in public streets, which led to a large number of flies, which are vectors of many pathogens, including Salmonella spp. According to statistics from the U.S. Centers for Disease Control and Prevention, the chlorination of drinking water has led to dramatic decreases in the transmission of typhoid fever.
Vaccination
Two typhoid vaccines are licensed for use in the prevention of typhoid: the live, oral Ty21a vaccine (sold as Vivotif by Crucell Switzerland AG) and the injectable typhoid polysaccharide vaccine (sold as Typhim Vi by Sanofi Pasteur and Typherix by GlaxoSmithKline). Both are efficacious and recommended for travelers to areas where typhoid is endemic. Boosters are recommended every five years for the oral vaccine and every two years for the injectable form. An older, killed whole-cell vaccine is still used in countries where the newer preparations are not available, but this vaccine is no longer recommended for use because it has more side effects (mainly pain and inflammation at the site of the injection).
To help decrease rates of typhoid fever in developing nations, the World Health Organization (WHO) endorsed the use of a vaccination program starting in 1999. Vaccination has proven effective at controlling outbreaks in high-incidence areas and is also very cost-effective: prices are normally less than US$1 per dose. Because the price is low, poverty-stricken communities are more willing to take advantage of the vaccinations. Although vaccination programs for typhoid have proven effective, they alone cannot eliminate typhoid fever. Combining vaccines with public health efforts is the only proven way to control this disease.
Since the 1990s, the WHO has recommended two typhoid fever vaccines. The ViPS vaccine is given by injection and the Ty21a by capsules. Only people over age two are recommended to be vaccinated with the ViPS vaccine, and it requires a revaccination after 2–3 years, with a 55–72% efficacy. The Ty21a vaccine is recommended for people five and older, lasting 5–7 years with 51–67% efficacy. The two vaccines have proved safe and effective for epidemic disease control in multiple regions.
A version of the vaccine combined with a hepatitis A vaccine is also available.
Results of a phase 3 trial of typhoid conjugate vaccine (TCV) in December 2019 reported 81% fewer cases among children.
Treatment
Oral rehydration therapy
The rediscovery of oral rehydration therapy in the 1960s provided a simple way to prevent many of the deaths from diarrheal diseases in general.
Antibiotics
Where resistance is uncommon, the treatment of choice is a fluoroquinolone such as ciprofloxacin. Otherwise, a third-generation cephalosporin such as ceftriaxone or cefotaxime is the first choice. Cefixime is a suitable oral alternative.
Properly treated, typhoid fever is not fatal in most cases. Antibiotics such as ampicillin, chloramphenicol, trimethoprim-sulfamethoxazole, amoxicillin, and ciprofloxacin have been commonly used to treat it. Treatment with antibiotics reduces the case-fatality rate to about 1%.
Without treatment, some patients develop sustained fever, bradycardia, hepatosplenomegaly, abdominal symptoms, and occasionally pneumonia. In white-skinned patients, pink spots, which fade on pressure, appear on the skin of the trunk in up to 20% of cases. In the third week, untreated cases may develop gastrointestinal and cerebral complications, which may prove fatal in 10–20% of cases. The highest case fatality rates are reported in children under 4. Around 2–5% of those who contract typhoid fever become chronic carriers, as bacteria persist in the biliary tract after symptoms have resolved.
Surgery
Surgery is usually indicated if intestinal perforation occurs. One study found a 30-day mortality rate of 9% (8/88), and surgical site infections at 67% (59/88), with the disease burden borne predominantly by low-resource countries.
For surgical treatment, most surgeons prefer simple closure of the perforation with drainage of the peritoneum. Small bowel resection is indicated for patients with multiple perforations. If antibiotic treatment fails to eradicate the hepatobiliary carriage, the gallbladder should be resected. Cholecystectomy is sometimes successful, especially in patients with gallstones, but is not always successful in eradicating the carrier state because of persisting hepatic infection.
Resistance
As resistance to ampicillin, chloramphenicol, trimethoprim-sulfamethoxazole, and streptomycin is now common, these agents are no longer used as first-line treatment of typhoid fever. Typhoid resistant to these agents is known as multidrug-resistant typhoid.
Ciprofloxacin resistance is an increasing problem, especially in the Indian subcontinent and Southeast Asia. Many centres are shifting from ciprofloxacin to ceftriaxone as the first line for treating suspected typhoid originating in South America, India, Pakistan, Bangladesh, Thailand, or Vietnam. Also, it has been suggested that azithromycin is better at treating resistant typhoid than both fluoroquinolone drugs and ceftriaxone. Azithromycin can be taken by mouth and is less expensive than ceftriaxone, which is given by injection.
A separate problem exists with laboratory testing for reduced susceptibility to ciprofloxacin; current recommendations are that isolates should be tested simultaneously against ciprofloxacin (CIP) and against nalidixic acid (NAL), that isolates sensitive to both CIP and NAL should be reported as "sensitive to ciprofloxacin", and that isolates sensitive to CIP but not to NAL should be reported as "reduced sensitivity to ciprofloxacin". But an analysis of 271 isolates found that around 18% of isolates with reduced susceptibility to fluoroquinolones, the class to which CIP belongs (MIC 0.125–1.0 mg/L), would not be detected by this method.
Epidemiology
In 2000, typhoid fever caused an estimated 21.7 million illnesses and 217,000 deaths. It occurs most often in children and young adults between 5 and 19 years old. In 2013, it resulted in about 161,000 deaths – down from 181,000 in 1990. Infants, children, and adolescents in south-central and Southeast Asia have the highest rates of typhoid. Outbreaks are also often reported in sub-Saharan Africa and Southeast Asia. In 2000, more than 90% of morbidity and mortality due to typhoid fever occurred in Asia. In the U.S., about 400 cases occur each year, 75% of which are acquired while traveling internationally.
Before the antibiotic era, the case fatality rate of typhoid fever was 10–20%. Today, with prompt treatment, it is less than 1%, but 3–5% of people who are infected develop a chronic infection in the gall bladder. Since S. enterica subsp. enterica serovar Typhi is human-restricted, these chronic carriers become the crucial reservoir, which can persist for decades for further spread of the disease, further complicating its identification and treatment. Lately, genome-level study of S. enterica subsp. enterica serovar Typhi isolates associated with a large outbreak and with a carrier has provided new insight into the pathogenesis of the pathogen.
In industrialized nations, water sanitation and food handling improvements have reduced the number of typhoid cases. Third world nations have the highest rates. These areas lack access to clean water, proper sanitation systems, and proper healthcare facilities. In these areas, such access to basic public-health needs is not expected in the near future.
In 2004–2005 an outbreak in the Democratic Republic of Congo resulted in more than 42,000 cases and 214 deaths. Since November 2016, Pakistan has had an outbreak of extensively drug-resistant (XDR) typhoid fever.
In Europe, a report based on data for 2017 retrieved from The European Surveillance System (TESSy) on the distribution of confirmed typhoid and paratyphoid fever cases found that 22 EU/EEA countries reported a total of 1,098 cases, 90.9% of which were travel-related, mainly acquired during travel to South Asia.
Outbreaks
Plague of Athens (suspected)
Cocoliztli epidemics (suspected)
"Burning Fever" outbreak among indigenous Americans. Between 1607 and 1624, 85% of the population at the James River died from a typhoid epidemic. The World Health Organization estimates the death toll was over 6,000 during this time.
Maidstone, Kent outbreak in 1897–1898: 1,847 patients were recorded to have typhoid fever. This outbreak is notable because it was the first time a typhoid vaccine was deployed during a civilian outbreak. Almroth Edward Wright's vaccine was offered to 200 healthcare providers, and of the 84 individuals who received the vaccine, none developed typhoid, whereas 4 who had not been vaccinated became ill.
American army in the Spanish-American war: government records estimate over 21,000 troops had typhoid, resulting in 2,200 deaths.
In 1902 guests at mayoral banquets in Southampton and Winchester, England became ill and four died, including the Dean of Winchester, after consuming oysters. The infection was due to oysters sourced from Emsworth, where the oyster beds had been contaminated with raw sewage.
Jamaica Plain neighborhood, Boston in 1908 – linked to milk delivery. See the history section, "carriers" for further details.
Outbreak in upper-class New Yorkers who employed Mary Mallon – 51 cases and 3 deaths from 1907 to 1915.
Aberdeen, Scotland, in summer 1964 – traced back to contaminated canned beef sourced from Argentina sold in markets. More than 500 patients were quarantined in the hospital for a minimum of four weeks, and the outbreak was contained without any deaths.
Dushanbe, Tajikistan, in 1996–1997: 10,677 cases reported, 108 deaths.
Kinshasa, Democratic Republic of the Congo, in 2004: 43,000 cases and over 200 deaths. A prospective study of specimens collected in the same region between 2007 and 2011 revealed about one-third of samples obtained from patient samples were resistant to multiple antibiotics.
Kampala, Uganda in 2015: 10,230 cases reported.
History
Early descriptions
The plague of Athens, during the Peloponnesian War, was most likely an outbreak of typhoid fever. During the war, Athenians retreated into a walled-in city to escape attack from the Spartans. This massive influx of humans into a concentrated space overwhelmed the water supply and waste infrastructure, likely leading to unsanitary conditions as fresh water became harder to obtain and waste became more difficult to collect and remove beyond the city walls. In 2006, an examination of remains from a mass burial site in Athens dating to around the time of the plague (~430 BC) detected DNA fragments similar to those of modern-day S. Typhi, whereas Yersinia pestis (plague), Rickettsia prowazekii (typhus), Mycobacterium tuberculosis, cowpox virus, and Bartonella henselae were not detected in any of the remains tested.
It is possible that the Roman emperor Augustus Caesar had either a liver abscess or typhoid fever, and survived by using ice baths and cold compresses as a means of treatment for his fever. There is a statue of the Greek physician Antonius Musa, who treated his fever.
Definition and evidence of transmission
The French doctors Pierre-Fidèle Bretonneau and Pierre-Charles-Alexandre Louis are credited with describing typhoid fever as a specific disease, unique from typhus. Both doctors performed autopsies on individuals who died in Paris due to fever, and indicated that many had lesions on the Peyer's patches which correlated with distinct symptoms before death. British medics were skeptical of the differentiation between typhoid and typhus because both were endemic to Britain at that time. However, in France, only typhoid was circulating in the population. Pierre-Charles-Alexandre Louis also performed case studies and statistical analysis to demonstrate that typhoid was contagious, and that persons who already had the disease seemed to be protected. Afterward, several American doctors confirmed these findings, and Sir William Jenner convinced any remaining skeptics that typhoid is a specific disease recognizable by lesions in the Peyer's patches. Jenner examined sixty-six autopsies from fever patients and concluded that the symptoms of headaches, diarrhea, rash spots, and abdominal pain were present only in patients who were found to have intestinal lesions after death; these observations solidified the association of the disease with the intestinal tract and gave the first clue to the route of transmission.
In 1847, William Budd learned of an epidemic of typhoid fever in Clifton and identified that all 13 of the 34 residents who had contracted the disease drew their drinking water from the same well. Notably, this observation came two years before John Snow first published an early version of his theory that contaminated water was the central conduit for transmitting cholera. Budd later became health officer of Bristol, ensured a clean water supply, and documented further evidence of typhoid as a water-borne illness throughout his career.
Cause
Polish scientist Tadeusz Browicz described a short bacillus in the organs and feces of typhoid victims in 1874. Browicz was able to isolate and grow the bacilli but did not go as far as to insinuate or prove that they caused the disease.
In April 1880, three months before Eberth's publication, Edwin Klebs described short and filamentous bacilli in the Peyer's patches in typhoid victims. The bacterium's role in disease was speculated but not confirmed.
In 1880, Karl Joseph Eberth described a bacillus that he suspected was the cause of typhoid. Eberth is given credit for discovering the bacterium definitively by successfully isolating the same bacterium from 18 of 40 typhoid victims and failing to discover the bacterium present in any "control" victims of other diseases. In 1884, pathologist Georg Theodor August Gaffky (1850–1918) confirmed Eberth's findings. Gaffky isolated the same bacterium as Eberth from the spleen of a typhoid victim, and was able to grow the bacterium on solid media. The organism was given names such as Eberth's bacillus, Eberthella Typhi, and Gaffky-Eberth bacillus. Today, the bacillus that causes typhoid fever goes by the scientific name Salmonella enterica serovar Typhi.
Chlorination of water
Most developed countries had declining rates of typhoid fever throughout the first half of the 20th century due to vaccinations and advances in public sanitation and hygiene. In 1893 attempts were made to chlorinate the water supply in Hamburg, Germany, and in 1897 Maidstone, England, was the first town to have its entire water supply chlorinated. In 1905, following an outbreak of typhoid fever, the City of Lincoln, England, instituted permanent water chlorination. The first permanent disinfection of drinking water in the US was made in 1908 to the Jersey City, New Jersey, water supply. Credit for the decision to build the chlorination system has been given to John L. Leal. The chlorination facility was designed by George W. Fuller.
Outbreaks in traveling military groups led to the creation of the Lyster bag in 1915: a bag with a faucet that can be hung from a tree or pole, filled with water, and comes with a chlorination tablet to drop into the water. The Lyster bag was essential for the survival of American soldiers in the Vietnam War.
Direct transmission and carriers
There were several occurrences of milk delivery men spreading typhoid fever throughout the communities they served. Although typhoid is not spread through milk itself, there were several examples of milk distributors in many locations watering their milk down with contaminated water, or cleaning the glass bottles the milk was placed in with contaminated water. Boston had two such cases around the turn of the 20th century. In 1899 there were 24 cases of typhoid traced to a single milkman, whose wife had died of typhoid fever a week before the outbreak. In 1908, J.J. Fallon, who was also a milkman, died of typhoid fever. Following his death and confirmation of the typhoid fever diagnosis, the city conducted an investigation of typhoid symptoms and cases along his route and found evidence of a significant outbreak. A month after the outbreak was first reported, the Boston Globe published a short statement declaring the outbreak over, stating "[a]t Jamaica Plain there is a slight increase, the total being 272 cases. Throughout the city, there is a total of 348 cases." There was at least one death reported during this outbreak: Mrs. Sophia S. Engstrom, aged 46. Typhoid continued to ravage the Jamaica Plain neighborhood in particular throughout 1908, and several more people were reported dead due to typhoid fever, although these cases were not explicitly linked to the outbreak. The Jamaica Plain neighborhood at that time was home to many working-class and poor immigrants, mostly from Ireland.
The most notorious carrier of typhoid fever, but by no means the most destructive, was Mary Mallon, known as Typhoid Mary. Although other cases of human-to-human spread of typhoid were known at the time, the concept of an asymptomatic carrier who was able to transmit disease had only been hypothesized, not yet identified or proven. Mary Mallon became the first known example of an asymptomatic carrier of an infectious disease, making typhoid fever the first disease known to be transmissible through asymptomatic hosts. The cases and deaths caused by Mallon occurred mainly in upper-class families in New York City. At the time of Mallon's tenure as a personal cook for upper-class families, New York City reported 3,000 to 4,500 cases of typhoid fever annually. In the summer of 1906, two daughters of a wealthy family and maids working in their home became ill with typhoid fever. After investigating their home water sources and ruling out water contamination, the family hired civil engineer George Soper to conduct an investigation of the possible source of typhoid fever in the home. Soper described himself as an "epidemic fighter". His investigation ruled out many sources of food and led him to question whether the cook the family had hired just prior to their household outbreak, Mallon, was the source. Since she had already left and begun employment elsewhere, he proceeded to track her down in order to obtain a stool sample. When he was finally able to meet Mallon in person, he described her by saying "Mary had a good figure and might have been called athletic had she not been a little too heavy." In accounts of Soper's pursuit of Mallon, his only remorse appears to be that he was not given enough credit for his relentless pursuit and publication of her personal identifying information, stating that the media "rob[s] me of whatever credit belongs to the discovery of the first typhoid fever carrier to be found in America." Ultimately, 51 cases and 3 deaths were suspected to be caused by Mallon.
In 1924 the city of Portland, Oregon, experienced an outbreak of typhoid fever, consisting of 26 cases and 5 deaths, all deaths due to intestinal hemorrhage. All cases were concluded to be due to a single milk farm worker, who was shedding large amounts of the typhoid pathogen in his urine. Misidentification of the disease, due to inaccurate Widal test results, delayed identification of the carrier and proper treatment. Ultimately, it took four samplings of different secretions from all of the dairy workers in order to successfully identify the carrier. Upon discovery, the dairy worker was forcibly quarantined for seven weeks, and regular samples were taken, most of the time the stool samples yielding no typhoid and often the urine yielding the pathogen. The carrier was reported as being 72 years old and appearing in excellent health with no symptoms. Pharmaceutical treatment decreased the amount of bacteria secreted, however, the infection was never fully cleared from the urine, and the carrier was released "under orders never again to engage in the handling of foods for human consumption." At the time of release, the authors noted "for more than fifty years he has earned his living chiefly by milking cows and knows little of other forms of labor, it must be expected that the closest surveillance will be necessary to make certain that he does not again engage in this occupation."
Overall, in the early 20th century the medical profession began to identify disease carriers and evidence of transmission independent of water contamination. In a 1933 American Medical Association publication, physicians' treatment of asymptomatic carriers is best summarized by the opening line "Carriers of typhoid bacilli are a menace". Within the same publication, the first official estimate of typhoid carriers is given: 2–5% of all typhoid patients, and distinguished between temporary carriers and chronic carriers. The authors further estimate that there are four to five chronic female carriers to every one male carrier, although offered no data to explain this assertion of a gender difference in the rate of typhoid carriers. As far as treatment, the authors suggest: "When recognized, carriers must be instructed as to the disposal of excreta as well as to the importance of personal cleanliness. They should be forbidden to handle food or drink intended for others, and their movements and whereabouts must be reported to the public health officers".
Today, typhoid carriers exist all over the world, but the highest incidence of asymptomatic infection is likely to occur in South/Southeast Asian and sub-Saharan African countries. The Los Angeles County department of public health tracks typhoid carriers and reports the number identified within the county each year; between 2006 and 2016, 0–4 new typhoid carriers were identified per year. Cases of typhoid fever must be reported within one working day of identification. As of 2018, chronic typhoid carriers must sign a "Carrier Agreement" and are required to test for typhoid shedding twice yearly, ideally every 6 months. Carriers may be released from their agreements upon fulfilling "release" requirements, based on completion of a personalized treatment plan designed with medical professionals. For fecal or gallbladder carriers, release requires 6 consecutive negative feces and urine specimens submitted at intervals of 1 month or greater, beginning at least 7 days after completion of therapy. For urinary or kidney carriers, release requires 6 consecutive negative urine specimens submitted on the same schedule.
Due to the nature of asymptomatic cases, many questions remain about how individuals tolerate infection for long periods, how such cases can be identified, and which treatment options are effective. Researchers are working to understand asymptomatic infection with Salmonella species by studying infections in laboratory animals, work that may ultimately lead to improved prevention and treatment options for typhoid carriers. In 2002, John Gunn described the ability of Salmonella sp. to form biofilms on gallstones in mice, providing a model for studying carriage in the gallbladder. Denise Monack and Stanley Falkow described a mouse model of asymptomatic intestinal and systemic infection in 2004, and Monack went on to demonstrate that a subpopulation of superspreaders is responsible for the majority of transmission to new hosts, following the 80/20 rule of disease transmission, and that the intestinal microbiota likely plays a role in transmission. Monack's mouse model allows long-term carriage of Salmonella in the mesenteric lymph nodes, spleen, and liver.
Vaccine development
British bacteriologist Almroth Edward Wright first developed an effective typhoid vaccine at the Army Medical School in Netley, Hampshire. It was introduced in 1896 and used successfully by the British during the Second Boer War in South Africa. At that time, typhoid often killed more soldiers at war than were lost to enemy combat. Wright further developed his vaccine at a newly opened research department at St Mary's Hospital Medical School in London in 1902, where he established a method for measuring protective substances (opsonins) in human blood. Wright's version of the typhoid vaccine was produced by growing the bacterium at body temperature in broth, then heating it to 60 °C to "heat inactivate" the pathogen, killing it while keeping the surface antigens intact. The heat-killed bacteria were then injected into a patient. To show evidence of the vaccine's efficacy, Wright collected serum samples from patients several weeks post-vaccination and tested the serum's ability to agglutinate live typhoid bacteria. A "positive" result was represented by clumping of bacteria, indicating that the body was producing anti-serum (now called antibodies) against the pathogen.
Citing the example of the Second Boer War, during which many soldiers died from easily preventable diseases, Wright convinced the British Army that 10 million vaccine doses should be produced for the troops being sent to the Western Front, thereby saving up to half a million lives during World War I. The British Army was the only combatant at the outbreak of the war to have its troops fully immunized against the bacterium. For the first time, their casualties due to combat exceeded those from disease.
In 1909, Frederick F. Russell, a U.S. Army physician, adopted Wright's typhoid vaccine for use with the Army, and two years later, his vaccination program became the first in which an entire army was immunized. It eliminated typhoid as a significant cause of morbidity and mortality in the U.S. military. Typhoid vaccination for members of the American military became mandatory in 1911. Before the vaccine, the rate of typhoid fever in the military was 14,000 or greater per 100,000 soldiers. By World War I, the rate of typhoid in American soldiers was 37 per 100,000.
During the Second World War, the United States Army authorized the use of a trivalent vaccine – containing heat-inactivated Typhoid, Paratyphi A and Paratyphi B pathogens.
In 1934, the discovery of the Vi capsular antigen by Arthur Felix and Miss S. R. Margaret Pitt enabled the development of the safer Vi Antigen vaccine – which is widely in use today. Arthur Felix and Margaret Pitt also isolated the strain Ty2, which became the parent strain of Ty21a, the strain used as a live-attenuated vaccine for typhoid fever today.
Antibiotics and resistance
Chloramphenicol was isolated from Streptomyces by David Gottlieb during the 1940s. In 1948, American army doctors tested its efficacy in treating typhoid patients in Kuala Lumpur, Malaysia. Individuals who received a full course of treatment cleared the infection, whereas patients given a lower dose relapsed. Asymptomatic carriers continued to shed bacilli despite chloramphenicol treatment; only acutely ill patients improved with it. Resistance to chloramphenicol became frequent in Southeast Asia by the 1950s, and today the drug is used only as a last resort due to the high prevalence of resistance.
Terminology
The disease has been referred to by various names, often associated with symptoms, such as gastric fever, enteric fever, abdominal typhus, infantile remittent fever, slow fever, nervous fever, pythogenic fever, drain fever, and low fever.
Society and culture
Notable people
Emperor Augustus of Rome (suspected based on historical record but not confirmed), survived.
Albert, Prince Consort, husband of Queen Victoria of the United Kingdom, died on 14 December 1861, 24 days after the first record of his "feeling horribly ill". He suffered loss of appetite, insomnia, fever, chills, profuse sweating, vomiting, rash spots, delusions, inability to recognize family members, a worsening rash on the abdomen, a change in tongue color, and finally a state of extreme fatigue. The attending physician William Jenner, an expert on typhoid fever at the time, made the diagnosis.
Edward VII of the UK, son of Queen Victoria, while still Prince of Wales, had a near-fatal case of typhoid fever.
Tsar Nicholas II of Russia, survived, illness was circa 1900–1901.
Queen Wilhelmina of the Netherlands survived a typhoid infection in 1902, which may have caused her to miscarry.
William Henry Harrison, the ninth President of the United States of America, died 32 days into his term, in 1841. This is the shortest term served by a United States President.
Wilbur Wright, co-inventor of the airplane with his brother Orville, died from typhoid in 1912 at the age of 45. Orville had typhoid in 1896, during which time Wilbur read aloud to him books by Otto Lilienthal, a German pioneer of human flight. This reading started the two men on their own pursuit of creating an airplane.
Stephen A. Douglas, a political opponent of Abraham Lincoln in 1858 and 1860, died of typhoid on June 3, 1861.
Ignacio Zaragoza, a Mexican general and politician, died at the age of 33 of typhoid fever on September 8, 1862.
Franz Schubert, songwriter and composer, died of typhoid at age 31 on November 19, 1828.
William Wallace Lincoln, the son of US president Abraham and Mary Todd Lincoln, died of typhoid in 1862.
Princess Leopoldina of Brazil, daughter of Emperor Pedro II, died of typhoid in 1871.
Martha Bulloch Roosevelt, mother of president Theodore Roosevelt and paternal grandmother of Eleanor Roosevelt, died of typhoid fever in 1884.
Mary Mallon, "Typhoid Mary" – see history section, "carriers" for further details
Leland Stanford Jr., son of American tycoon and politician A. Leland Stanford and eponym of Leland Stanford Junior University, died of typhoid fever in 1884 at the age of 15.
Three of Louis Pasteur's five children died of typhoid fever.
Gerard Manley Hopkins, an English poet, died of typhoid fever in 1889.
Lizzie van Zyl, South African child inmate of the Bloemfontein concentration camp during the Second Boer War, died of typhoid fever in 1901.
Dr HJH 'Tup' Scott, captain of the 1886 Australian cricket team that toured England, died of typhoid in 1910.
Arnold Bennett, English novelist, died in 1932 of typhoid, two months after drinking a glass of water in a Paris hotel to prove it was safe.
Hakaru Hashimoto, a Japanese medical scientist, died of typhoid fever in 1934.
John Buford, Union cavalry officer during the Civil War, died of typhoid fever on December 16, 1863.
| Biology and health sciences | Infectious disease | null |
31734 | https://en.wikipedia.org/wiki/Urea | Urea | Urea, also called carbamide (because it is a diamide of carbonic acid), is an organic compound with chemical formula CO(NH2)2. This amide has two amino groups (–NH2) joined by a carbonyl functional group (–C(=O)–). It is thus the simplest amide of carbamic acid.
Urea serves an important role in the cellular metabolism of nitrogen-containing compounds by animals and is the main nitrogen-containing substance in the urine of mammals. The name urea is Neo-Latin, from French urée, from Ancient Greek οὖρον (oûron, "urine"), itself from Proto-Indo-European *h₂worsom.
It is a colorless, odorless solid, highly soluble in water, and practically non-toxic (the LD50 is 15 g/kg for rats). Dissolved in water, it is neither acidic nor alkaline. The body uses it in many processes, most notably nitrogen excretion. The liver forms it by combining two ammonia molecules (NH3) with a carbon dioxide (CO2) molecule in the urea cycle. Urea is widely used in fertilizers as a source of nitrogen (N) and is an important raw material for the chemical industry.
In 1828, Friedrich Wöhler discovered that urea can be produced from inorganic starting materials, which was an important conceptual milestone in chemistry. This showed for the first time that a substance previously known only as a byproduct of life could be synthesized in the laboratory without biological starting materials, thereby contradicting the widely held doctrine of vitalism, which stated that only living organisms could produce the chemicals of life.
Properties
Molecular and crystal structure
The structure of the urea molecule is H2N–CO–NH2. The urea molecule is planar when in a solid crystal because of sp2 hybridization of the N orbitals. It is non-planar with C2 symmetry when in the gas phase or in aqueous solution, with C–N–H and H–N–H bond angles that are intermediate between the trigonal planar angle of 120° and the tetrahedral angle of 109.5°. In solid urea, the oxygen center is engaged in two N–H···O hydrogen bonds. The resulting hydrogen-bond network is probably established at the cost of efficient molecular packing: the structure is quite open, the ribbons forming tunnels with square cross-section. The carbon in urea is described as sp2 hybridized, the C–N bonds have significant double bond character, and the carbonyl oxygen is relatively basic. Urea's high aqueous solubility reflects its ability to engage in extensive hydrogen bonding with water.
By virtue of its tendency to form porous frameworks, urea has the ability to trap many organic compounds. In these so-called clathrates, the organic "guest" molecules are held in channels formed by interpenetrating helices composed of hydrogen-bonded urea molecules. In this way, urea-clathrates have been well investigated for separations.
Reactions
Urea is a weak base, with a pKb of 13.9. When combined with strong acids, it undergoes protonation at oxygen to form uronium salts. It is also a Lewis base, forming metal complexes of the type [M(urea)6]n+.
Urea reacts with malonic esters to make barbituric acids.
Thermolysis
Molten urea decomposes into ammonium cyanate at about 152 °C, and into ammonia and isocyanic acid above 160 °C:

CO(NH2)2 → NH4NCO (≈152 °C)
CO(NH2)2 → NH3 + HNCO (above 160 °C)
Heating above 160 °C yields biuret and triuret via reaction of urea with isocyanic acid:

CO(NH2)2 + HNCO → H2N–CO–NH–CO–NH2 (biuret)
H2N–CO–NH–CO–NH2 + HNCO → H2N–CO–NH–CO–NH–CO–NH2 (triuret)
At higher temperatures it converts to a range of condensation products, including cyanuric acid, guanidine, and melamine.
Aqueous stability
In aqueous solution, urea slowly equilibrates with ammonium cyanate. This elimination reaction cogenerates isocyanic acid, which can carbamylate proteins, in particular the N-terminal amino group, the side chain amino of lysine, and to a lesser extent the side chains of arginine and cysteine. Each carbamylation event adds 43 daltons to the mass of the protein, which can be observed in protein mass spectrometry. For this reason, pure urea solutions should be freshly prepared and used, as aged solutions may develop a significant concentration of cyanate (20 mM in 8 M urea). Dissolving urea in ultrapure water followed by removing ions (i.e. cyanate) with a mixed-bed ion-exchange resin and storing that solution at 4 °C is a recommended preparation procedure. However, cyanate will build back up to significant levels within a few days. Alternatively, adding 25–50 mM ammonium chloride to a concentrated urea solution decreases formation of cyanate because of the common ion effect.
Analysis
Urea is readily quantified by a number of different methods, such as the diacetyl monoxime colorimetric method, and the Berthelot reaction (after initial conversion of urea to ammonia via urease). These methods are amenable to high throughput instrumentation, such as automated flow injection analyzers and 96-well micro-plate spectrophotometers.
Related compounds
Ureas describes a class of chemical compounds that share the same functional group, a carbonyl group attached to two organic amine residues: R2N–CO–NR2, where the R groups may be hydrogen (–H), organyl, or other groups. Examples include carbamide peroxide, allantoin, and hydantoin. Ureas are closely related to biurets and related in structure to amides, carbamates, carbodiimides, and thiocarbamides.
Uses
Agriculture
More than 90% of world industrial production of urea is destined for use as a nitrogen-release fertilizer. Urea has the highest nitrogen content of all solid nitrogenous fertilizers in common use, and therefore a low transportation cost per unit of nitrogen nutrient. The most common impurity of synthetic urea is biuret, which impairs plant growth. Urea breaks down in the soil to give ammonium ions (NH4+). The ammonium is taken up by the plant through its roots. In some soils, the ammonium is oxidized by bacteria to give nitrate (NO3−), which is also a nitrogen-rich plant nutrient. The loss of nitrogenous compounds to the atmosphere and runoff is wasteful and environmentally damaging, so urea is sometimes modified to enhance the efficiency of its agricultural use. Techniques to make controlled-release fertilizers that slow the release of nitrogen include the encapsulation of urea in an inert sealant and conversion of urea into derivatives such as urea-formaldehyde compounds, which degrade into ammonia at a pace matching plants' nutritional requirements.
Resins
Urea is a raw material for the manufacture of formaldehyde based resins, such as UF, MUF, and MUPF, used mainly in wood-based panels, for instance, particleboard, fiberboard, OSB, and plywood.
Explosives
Urea can be used in a reaction with nitric acid to make urea nitrate, a high explosive that is used industrially and as part of some improvised explosive devices.
Automobile systems
Urea is used in Selective Non-Catalytic Reduction (SNCR) and Selective Catalytic Reduction (SCR) reactions to reduce the pollutants in exhaust gases from the combustion of diesel, dual fuel, and lean-burn natural gas engines. The BlueTec system, for example, injects a water-based urea solution into the exhaust system. Ammonia (NH3) produced by the hydrolysis of urea reacts with nitrogen oxides (NOx) and is converted into nitrogen gas (N2) and water within the catalytic converter. The conversion of noxious NOx to innocuous N2 is described by the following simplified global equation:

4 NO + 4 NH3 + O2 → 4 N2 + 6 H2O
When urea is used, a pre-reaction (hydrolysis) occurs to first convert it to ammonia:

CO(NH2)2 + H2O → 2 NH3 + CO2
Being a solid highly soluble in water (545 g/L at 25 °C), urea is much easier and safer to handle and store than the more irritant, caustic, and hazardous ammonia (NH3), so it is the reactant of choice. Trucks and cars using these catalytic converters need to carry a supply of diesel exhaust fluid, also sold as AdBlue, a solution of urea in water.
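As a rough illustration of the stoichiometry above, the following sketch estimates how much ammonia one litre of a 32.5 wt% urea solution can liberate on full hydrolysis. The solution density used here (~1.09 kg/L) is an assumed round figure for illustration, not a specification of any commercial product.

```python
# Back-of-the-envelope ammonia yield of a 32.5 wt% urea solution,
# assuming full hydrolysis: CO(NH2)2 + H2O -> 2 NH3 + CO2.
# The density value below is an assumption for illustration only.

M_UREA = 60.06  # g/mol
M_NH3 = 17.03   # g/mol

def nh3_grams_per_litre(density_g_per_l=1090.0, urea_mass_frac=0.325):
    """Grams of NH3 liberated per litre of solution on complete hydrolysis."""
    urea_g = density_g_per_l * urea_mass_frac   # grams of urea in one litre
    urea_mol = urea_g / M_UREA                  # moles of urea
    return 2 * urea_mol * M_NH3                 # two NH3 per urea molecule

print(f"{nh3_grams_per_litre():.0f} g of NH3 per litre")  # roughly 200 g
```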
Laboratory uses
Urea in concentrations up to 10 M is a powerful protein denaturant as it disrupts the noncovalent bonds in the proteins. This property can be exploited to increase the solubility of some proteins. A mixture of urea and choline chloride is used as a deep eutectic solvent (DES), a substance similar to ionic liquid. When used in a deep eutectic solvent, urea gradually denatures the proteins that are solubilized.
Urea in concentrations up to 8 M can be used to make fixed brain tissue transparent to visible light while still preserving fluorescent signals from labeled cells. This allows for much deeper imaging of neuronal processes than previously obtainable using conventional one photon or two photon confocal microscopes.
Medical use
Urea-containing creams are used as topical dermatological products to promote rehydration of the skin. Urea 40% is indicated for psoriasis, xerosis, onychomycosis, ichthyosis, eczema, keratosis, keratoderma, corns, and calluses. If covered by an occlusive dressing, 40% urea preparations may also be used for nonsurgical debridement of nails. Urea 40% "dissolves the intercellular matrix" of the nail plate. Only diseased or dystrophic nails are removed, as there is no effect on healthy portions of the nail. This drug (as carbamide peroxide) is also used as an earwax removal aid.
Urea has also been studied as a diuretic. It was first used by Dr. W. Friedrich in 1892. In a 2010 study of ICU patients, urea was used to treat euvolemic hyponatremia and was found safe, inexpensive, and simple.
Like saline, urea has been injected into the uterus to induce abortion, although this method is no longer in widespread use.
The blood urea nitrogen (BUN) test is a measure of the amount of nitrogen in the blood that comes from urea. It is used as a marker of renal function, though it is inferior to other markers such as creatinine because blood urea levels are influenced by other factors such as diet, dehydration, and liver function.
Urea has also been studied as an excipient in drug-coated balloon (DCB) coating formulations to enhance local drug delivery to stenotic blood vessels. Urea, when used as an excipient in small doses (~3 μg/mm2) to coat the DCB surface, was found to form crystals that increase drug transfer without adverse toxic effects on vascular endothelial cells.
Urea labeled with carbon-14 or carbon-13 is used in the urea breath test, which is used to detect the presence of the bacterium Helicobacter pylori (H. pylori) in the stomach and duodenum of humans, associated with peptic ulcers. The test detects the characteristic enzyme urease, produced by H. pylori, by a reaction that produces ammonia from urea. This increases the pH (reduces the acidity) of the stomach environment around the bacteria. Similar bacteria species to H. pylori can be identified by the same test in animals such as apes, dogs, and cats (including big cats).
Miscellaneous uses
An ingredient in diesel exhaust fluid (DEF), which is 32.5% urea and 67.5% de-ionized water. DEF is sprayed into the exhaust stream of diesel vehicles to break down dangerous emissions into harmless nitrogen and water.
A component of animal feed, providing a relatively cheap source of nitrogen to promote growth
A non-corroding alternative to rock salt for road de-icing. It is often the main ingredient of pet friendly salt substitutes although it is less effective than traditional rock salt or calcium chloride.
A main ingredient in hair removers such as Nair and Veet
A browning agent in factory-produced pretzels
An ingredient in some skin cream, moisturizers, hair conditioners, and shampoos
A cloud seeding agent, along with other salts
A flame-proofing agent, commonly used in dry chemical fire extinguisher charges such as the urea-potassium bicarbonate mixture
An ingredient in many tooth whitening products
An ingredient in dish soap
Along with diammonium phosphate, as a yeast nutrient, for fermentation of sugars into ethanol
A nutrient used by plankton in ocean nourishment experiments for geoengineering purposes
As an additive to extend the working temperature and open time of hide glue
As a solubility-enhancing and moisture-retaining additive to dye baths for textile dyeing or printing
As an optical parametric oscillator in nonlinear optics
Physiology
Amino acids from ingested food (or produced from catabolism of muscle protein) that are used for the synthesis of proteins and other biological substances can be oxidized by the body as an alternative source of energy, yielding urea and carbon dioxide. The oxidation pathway starts with the removal of the amino group by a transaminase; the amino group is then fed into the urea cycle. The first step in the conversion of amino acids into metabolic waste in the liver is removal of the alpha-amino nitrogen, which produces ammonia. Because ammonia is toxic, it is excreted immediately by fish, converted into uric acid by birds, and converted into urea by mammals.
Ammonia (NH3) is a common byproduct of the metabolism of nitrogenous compounds. Ammonia is smaller, more volatile, and more mobile than urea. If allowed to accumulate, ammonia would raise the pH in cells to toxic levels. Therefore, many organisms convert ammonia to urea, even though this synthesis has a net energy cost. Being practically neutral and highly soluble in water, urea is a safe vehicle for the body to transport and excrete excess nitrogen.
Urea is synthesized in the body of many organisms as part of the urea cycle, either from the oxidation of amino acids or from ammonia. In this cycle, amino groups donated by ammonia and L-aspartate are converted to urea, while L-ornithine, citrulline, L-argininosuccinate, and L-arginine act as intermediates. Urea production occurs in the liver and is regulated by N-acetylglutamate. Urea is then dissolved into the blood (in the reference range of 2.5 to 6.7 mmol/L) and further transported and excreted by the kidney as a component of urine. In addition, a small amount of urea is excreted (along with sodium chloride and water) in sweat.
In water, the amine groups undergo slow displacement by water molecules, producing ammonia, ammonium ions, and bicarbonate ions. For this reason, old, stale urine has a stronger odor than fresh urine.
Humans
The cycling and excretion of urea by the kidneys is a vital part of mammalian metabolism. Besides its role as a carrier of waste nitrogen, urea also plays a role in the countercurrent exchange system of the nephrons, which allows for reabsorption of water and critical ions from the excreted urine. Urea is reabsorbed in the inner medullary collecting ducts of the nephrons, raising the osmolarity in the medullary interstitium surrounding the thin descending limb of the loop of Henle and thereby driving the reabsorption of water.
By action of the urea transporter 2, some of this reabsorbed urea eventually flows back into the thin descending limb of the tubule, through the collecting ducts, and into the excreted urine. The body uses this mechanism, which is controlled by the antidiuretic hormone, to create hyperosmotic urine — i.e., urine with a higher concentration of dissolved substances than the blood plasma. This mechanism is important to prevent the loss of water, maintain blood pressure, and maintain a suitable concentration of sodium ions in the blood plasma.
The equivalent nitrogen content (in grams) of urea (in mmol) can be estimated by the conversion factor 0.028 g/mmol. Furthermore, 1 gram of nitrogen is roughly equivalent to 6.25 grams of protein, and 1 gram of protein is roughly equivalent to 4 grams of muscle tissue. In situations such as muscle wasting, 1 mmol of excess urea in the urine (as measured by urine volume in litres multiplied by urea concentration in mmol/L) thus roughly corresponds to a muscle loss of 0.67 gram.
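The chain of rough equivalences above can be made explicit in a few lines. This is a minimal sketch using only the approximate factors quoted in this section; they are rules of thumb, not precise physiological constants.

```python
# Estimating muscle loss from excess urinary urea, using the approximate
# conversion factors quoted above (not clinical-grade constants).

G_NITROGEN_PER_MMOL_UREA = 0.028   # urea carries two N atoms: ~28 mg N per mmol
G_PROTEIN_PER_G_NITROGEN = 6.25
G_MUSCLE_PER_G_PROTEIN = 4.0

def muscle_loss_grams(excess_urea_mmol):
    """Rough muscle-tissue loss implied by a given excess of urinary urea."""
    nitrogen_g = excess_urea_mmol * G_NITROGEN_PER_MMOL_UREA
    protein_g = nitrogen_g * G_PROTEIN_PER_G_NITROGEN
    return protein_g * G_MUSCLE_PER_G_PROTEIN

print(round(muscle_loss_grams(1.0), 2))  # ~0.7 g/mmol, consistent with ~0.67 g
```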
Other species
In aquatic organisms the most common form of nitrogen waste is ammonia, whereas land-dwelling organisms convert the toxic ammonia to either urea or uric acid. Urea is found in the urine of mammals and amphibians, as well as some fish. Birds and saurian reptiles have a different form of nitrogen metabolism that requires less water, and leads to nitrogen excretion in the form of uric acid. Tadpoles excrete ammonia, but shift to urea production during metamorphosis. Despite the generalization above, the urea pathway has been documented not only in mammals and amphibians, but in many other organisms as well, including birds, invertebrates, insects, plants, yeast, fungi, and even microorganisms.
Adverse effects
Urea can be irritating to skin, eyes, and the respiratory tract. Repeated or prolonged contact with urea in fertilizer form on the skin may cause dermatitis.
High concentrations in the blood can be damaging. Ingestion of low concentrations of urea, such as are found in typical human urine, is not dangerous with additional water ingestion within a reasonable time-frame. Many animals (e.g. camels, rodents, or dogs) have much more concentrated urine, which may contain a higher amount of urea than normal human urine.
Urea can cause algal blooms to produce toxins, and its presence in the runoff from fertilized land may play a role in the increase of toxic blooms.
The substance decomposes on heating above melting point, producing toxic gases, and reacts violently with strong oxidants, nitrites, inorganic chlorides, chlorites and perchlorates, causing fire and explosion.
History
Urea was first discovered in urine in 1727 by the Dutch scientist Herman Boerhaave, although this discovery is often attributed to the French chemist Hilaire Rouelle as well as William Cruickshank.
Boerhaave used the following steps to isolate urea:
Boiled off water, resulting in a substance similar to fresh cream
Used filter paper to squeeze out remaining liquid
Waited a year for solid to form under an oily liquid
Removed the oily liquid
Dissolved the solid in water
Used recrystallization to tease out the urea
In 1828, the German chemist Friedrich Wöhler obtained urea artificially by treating silver cyanate with ammonium chloride:

AgNCO + NH4Cl → CO(NH2)2 + AgCl
This was the first time an organic compound was artificially synthesized from inorganic starting materials, without the involvement of living organisms. The results of this experiment implicitly discredited vitalism, the theory that the chemicals of living organisms are fundamentally different from those of inanimate matter. This insight was important for the development of organic chemistry. His discovery prompted Wöhler to write triumphantly to Jöns Jakob Berzelius:
"I must tell you that I can make urea without the use of kidneys, either man or dog. Ammonium cyanate is urea."
In fact, his second sentence was incorrect. Ammonium cyanate and urea are two different chemicals with the same empirical formula , which are in chemical equilibrium heavily favoring urea under standard conditions. Regardless, with his discovery, Wöhler secured a place among the pioneers of organic chemistry.
Uremic frost was first described in 1865 by Harald Hirschsprung, who became Denmark's first pediatrician in 1870 and who, in 1886, also described the disease that carries his name. It is the classical pre-dialysis-era description of crystallized urea deposits over the skin of patients with prolonged kidney failure and severe uremia, and it has become rare since the advent of dialysis.
Historical preparation
Urea was first noticed by Herman Boerhaave in the early 18th century from evaporates of urine. In 1773, Hilaire Rouelle obtained crystals containing urea from human urine by evaporating it and treating it with alcohol in successive filtrations. This method was aided by Carl Wilhelm Scheele's discovery that urine treated by concentrated nitric acid precipitated crystals. Antoine François, comte de Fourcroy and Louis Nicolas Vauquelin discovered in 1799 that the nitrated crystals were identical to Rouelle's substance and invented the term "urea." Berzelius made further improvements to its purification, and finally William Prout, in 1817, succeeded in obtaining and determining the chemical composition of the pure substance. In the evolved procedure, urea was precipitated as urea nitrate by adding strong nitric acid to urine. To purify the resulting crystals, they were dissolved in boiling water with charcoal and filtered. After cooling, pure crystals of urea nitrate formed. To reconstitute the urea from the nitrate, the crystals were dissolved in warm water and barium carbonate added. The water was then evaporated and anhydrous alcohol added to extract the urea. This solution was drained off and evaporated, leaving pure urea.
Laboratory preparation
Ureas in the more general sense can be accessed in the laboratory by reaction of phosgene with primary or secondary amines:

COCl2 + 4 RNH2 → (RNH)2CO + 2 RNH3Cl
These reactions proceed through an isocyanate intermediate. Non-symmetric ureas can be accessed by the reaction of primary or secondary amines with an isocyanate.
Urea can also be produced by heating ammonium cyanate to 60 °C:

NH4NCO → CO(NH2)2
Industrial production
In 2020, worldwide production capacity was approximately 180 million tonnes.
For use in industry, urea is produced from synthetic ammonia and carbon dioxide. As large quantities of carbon dioxide are produced during the ammonia manufacturing process as a byproduct of burning hydrocarbons to generate heat (predominantly natural gas, and less often petroleum derivatives or coal), urea production plants are almost always located adjacent to the site where the ammonia is manufactured.
Synthesis
The basic process, patented in 1922, is called the Bosch–Meiser urea process after its discoverers Carl Bosch and Wilhelm Meiser. The process consists of two main equilibrium reactions, with incomplete conversion of the reactants. The first is carbamate formation: the fast exothermic reaction of liquid ammonia with gaseous carbon dioxide (CO2) at high temperature and pressure to form ammonium carbamate (NH2COONH4):

2 NH3 + CO2 ⇌ NH2COONH4
(ΔH = −117 kJ/mol at 110 atm and 160 °C)
The second is urea conversion: the slower endothermic decomposition of ammonium carbamate into urea and water:

NH2COONH4 ⇌ CO(NH2)2 + H2O
(ΔH = +15.5 kJ/mol at 160–180 °C)
The overall conversion of NH3 and CO2 to urea is exothermic, with the reaction heat from the first reaction driving the second. The conditions that favor urea formation (high temperature) have an unfavorable effect on the carbamate formation equilibrium. The process conditions are therefore a compromise: the ill effect on the first reaction of the high temperature (around 190 °C) needed for the second is compensated for by conducting the process under high pressure (140–175 bar), which favors the first reaction. Although it is necessary to compress gaseous carbon dioxide to this pressure, the ammonia is available from the ammonia production plant in liquid form, which can be pumped into the system much more economically. To allow the slow urea formation reaction time to reach equilibrium, a large reaction space is needed, so the synthesis reactor in a large urea plant tends to be a massive pressure vessel.
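A quick sanity check on the energetics, combining the two reaction enthalpies quoted above (both values are condition-dependent approximations taken from this section):

```python
# Net enthalpy of the Bosch-Meiser process from the figures quoted above.
dH_carbamate_formation = -117.0  # kJ/mol: 2 NH3 + CO2 -> NH2COONH4 (exothermic)
dH_urea_conversion = +15.5       # kJ/mol: NH2COONH4 -> CO(NH2)2 + H2O (endothermic)

dH_overall = dH_carbamate_formation + dH_urea_conversion
print(f"Overall: {dH_overall:.1f} kJ/mol")  # -101.5 kJ/mol: net exothermic
```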
Reactant recycling
Because the urea conversion is incomplete, the urea must be separated from the unconverted reactants, including the ammonium carbamate. Various commercial urea processes are characterized by the conditions under which urea forms and the way that unconverted reactants are further processed.
Conventional recycle processes
In early "straight-through" urea plants, reactant recovery (the first step in "recycling") was done by letting down the system pressure to atmospheric to let the carbamate decompose back to ammonia and carbon dioxide. Originally, because it was not economic to recompress the ammonia and carbon dioxide for recycle, the ammonia at least would be used for the manufacture of other products such as ammonium nitrate or ammonium sulfate, and the carbon dioxide was usually wasted. Later process schemes made recycling unused ammonia and carbon dioxide practical. This was accomplished by the "total recycle process", developed in the 1940s to 1960s and now called the "conventional recycle process". It proceeds by depressurizing the reaction solution in stages (first to 18–25 bar and then to 2–5 bar) and passing it at each stage through a steam-heated carbamate decomposer, then recombining the resulting carbon dioxide and ammonia in a falling-film carbamate condenser and pumping the carbamate solution back into the urea reaction vessel.
Stripping recycle process
The "conventional recycle process" for recovering and reusing the reactants has largely been supplanted by a stripping process, developed in the early 1960s by Stamicarbon in The Netherlands, that operates at or near the full pressure of the reaction vessel. It reduces the complexity of the multi-stage recycle scheme, and it reduces the amount of water recycled in the carbamate solution, which has an adverse effect on the equilibrium in the urea conversion reaction and thus on overall plant efficiency. Effectively all new urea plants use the stripper, and many total recycle urea plants have converted to a stripping process.
In the conventional recycle processes, carbamate decomposition is promoted by reducing the overall pressure, which reduces the partial pressure of both ammonia and carbon dioxide, allowing these gasses to be separated from the urea product solution. The stripping process achieves a similar effect without lowering the overall pressure, by suppressing the partial pressure of just one of the reactants in order to promote carbamate decomposition. Instead of feeding carbon dioxide gas directly to the urea synthesis reactor with the ammonia, as in the conventional process, the stripping process first routes the carbon dioxide through the stripper. The stripper is a carbamate decomposer that provides a large amount of gas-liquid contact. This flushes out free ammonia, reducing its partial pressure over the liquid surface and carrying it directly to a carbamate condenser (also under full system pressure). From there, reconstituted ammonium carbamate liquor is passed to the urea production reactor. That eliminates the medium-pressure stage of the conventional recycle process.
Side reactions
The three main side reactions that produce impurities have in common that they decompose urea.
Urea hydrolyzes back to ammonium carbamate in the hottest stages of the synthesis plant, especially in the stripper, so residence times in these stages are designed to be short.
Biuret is formed when two molecules of urea combine with the loss of a molecule of ammonia:

2 CO(NH2)2 → H2N–CO–NH–CO–NH2 + NH3
Normally this reaction is suppressed in the synthesis reactor by maintaining an excess of ammonia, but after the stripper, it occurs until the temperature is reduced. Biuret is undesirable in urea fertilizer because it is toxic to crop plants to varying degrees, but it is sometimes desirable as a nitrogen source when used in animal feed.
Isocyanic acid (HNCO) and ammonia result from the thermal decomposition of ammonium cyanate (NH4NCO), which is in chemical equilibrium with urea:

CO(NH2)2 ⇌ NH4NCO ⇌ NH3 + HNCO
This decomposition is at its worst when the urea solution is heated at low pressure, which happens when the solution is concentrated for prilling or granulation (see below). The reaction products mostly volatilize into the overhead vapours, and recombine when these condense to form urea again, which contaminates the process condensate.
Corrosion
Ammonium carbamate solutions are highly corrosive to metallic construction materials – even to resistant forms of stainless steel – especially in the hottest parts of the synthesis plant such as the stripper. Historically corrosion has been minimized (although not eliminated) by continuous injection of a small amount of oxygen (as air) into the plant to establish and maintain a passive oxide layer on exposed stainless steel surfaces. Highly corrosion resistant materials have been introduced to reduce the need for passivation oxygen, such as specialized duplex stainless steels in the 1990s, and zirconium or zirconium-clad titanium tubing in the 2000s.
Finishing
Urea can be produced in solid forms (prills, granules, pellets or crystals) or as solutions.
Solid forms
For its main use as a fertilizer, urea is mostly marketed in solid form, either as prills or granules. Prills are solidified droplets, whose production predates satisfactory urea granulation processes. Prills can be produced more cheaply than granules, but the limited size of prills (up to about 2.1 mm in diameter), their low crushing strength, and the caking or crushing of prills during bulk storage and handling make them inferior to granules. Granules are produced by accretion onto urea seed particles by spraying liquid urea in a succession of layers. Formaldehyde is added during the production of both prills and granules in order to increase crushing strength and suppress caking. Other shaping techniques, such as pastillization (depositing uniform-sized liquid droplets onto a cooling conveyor belt), are also used.
Liquid forms
Solutions of urea and ammonium nitrate in water (UAN) are commonly used as a liquid fertilizer. In admixture, the combined solubility of ammonium nitrate and urea is so much higher than that of either component alone that it gives a stable solution with a total nitrogen content (32%) approaching that of solid ammonium nitrate (33.5%), though not, of course, that of urea itself (46%). UAN allows use of ammonium nitrate without the explosion hazard. UAN accounts for 80% of the liquid fertilizers in the US.
| Physical sciences | Carbon–nitrogen bond | null |
31736 | https://en.wikipedia.org/wiki/Uric%20acid | Uric acid | Uric acid is a heterocyclic compound of carbon, nitrogen, oxygen, and hydrogen with the formula C5H4N4O3. It forms ions and salts known as urates and acid urates, such as ammonium acid urate. Uric acid is a product of the metabolic breakdown of purine nucleotides, and it is a normal component of urine. High blood concentrations of uric acid can lead to gout and are associated with other medical conditions, including diabetes and the formation of ammonium acid urate kidney stones.
Chemistry
Uric acid was first isolated from kidney stones in 1776 by Swedish chemist Carl Wilhelm Scheele. In 1882, the Ukrainian chemist Ivan Horbaczewski first synthesized uric acid by melting urea with glycine.
Uric acid displays lactam–lactim tautomerism. It crystallizes in the lactam form, and computational chemistry also indicates that tautomer is the most stable. Uric acid is a diprotic acid with pKa1 = 5.4 and pKa2 = 10.3. At physiological pH, the singly deprotonated urate ion therefore predominates in solution.
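The dominance of urate at physiological pH follows directly from the Henderson–Hasselbalch relation; a minimal sketch using the pKa1 value quoted above:

```python
# Fraction of uric acid present as urate (first deprotonation) at a given pH,
# via the Henderson-Hasselbalch equation with pKa1 = 5.4 from the text.

def fraction_urate(pH, pKa1=5.4):
    """Fraction in the deprotonated (urate) form for the first ionization."""
    ratio = 10 ** (pH - pKa1)   # [urate] / [uric acid]
    return ratio / (1 + ratio)

print(f"{fraction_urate(7.4):.1%}")  # ~99.0% urate at physiological pH 7.4
```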
Biochemistry
The enzyme xanthine oxidase (XO) catalyzes the formation of uric acid from xanthine and hypoxanthine. XO, which is found in mammals, functions primarily as a dehydrogenase and rarely as an oxidase, despite its name. Xanthine in turn is produced from other purines. Xanthine oxidase is a large enzyme whose active site consists of the metal molybdenum bound to sulfur and oxygen. Uric acid is released in hypoxic conditions (low oxygen saturation).
Water solubility
In general, the water solubility of uric acid and its alkali metal and alkaline earth salts is rather low. All these salts exhibit greater solubility in hot water than cold, allowing for easy recrystallization. This low solubility is significant for the etiology of gout. The solubility of the acid and its salts in ethanol is very low or negligible. In ethanol/water mixtures, the solubilities are somewhere between the end values for pure ethanol and pure water.
Solubility of urate salts (grams of water required to dissolve one gram of compound):

Compound                        Cold water    Boiling water
Uric acid                           15,000            2,000
Ammonium hydrogen urate                  —            1,600
Lithium hydrogen urate                 370               39
Sodium hydrogen urate                1,175              124
Potassium hydrogen urate               790               75
Magnesium dihydrogen diurate         3,750              160
Calcium dihydrogen diurate             603              276
Disodium urate                          77                —
Dipotassium urate                       44               35
Calcium urate                        1,500            1,440
Strontium urate                      4,300            1,790
Barium urate                         7,900            2,700
The figures indicate the mass of water required to dissolve a unit mass of the compound; the lower the number, the more soluble the substance is in that solvent.
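For readers who prefer conventional units, these figures convert readily to approximate solubilities in grams per litre; the sketch below assumes one litre of water weighs about 1000 g.

```python
# Converting "grams of water per gram of compound" into approximate g/L
# solubilities (assuming 1 L of water ~ 1000 g). Values from the table above.

water_per_gram = {
    "Uric acid (cold)": 15000,
    "Sodium hydrogen urate (cold)": 1175,
    "Dipotassium urate (cold)": 44,
}

for compound, grams_water in water_per_gram.items():
    print(f"{compound}: {1000 / grams_water:.3g} g/L")
# Uric acid (cold): 0.0667 g/L -- low enough for crystals to form in tissue
```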
Genetic and physiological diversity
Primates
In humans, uric acid (actually hydrogen urate ion) is the final oxidation (breakdown) product of purine metabolism and is excreted in urine, whereas in most other mammals the enzyme uricase further oxidizes uric acid to allantoin. The loss of uricase in higher primates parallels the similar loss of the ability to synthesize ascorbic acid, leading to the suggestion that urate may partially substitute for ascorbate in such species. Both uric acid and ascorbic acid are strong reducing agents (electron donors) and potent antioxidants. In humans, over half the antioxidant capacity of blood plasma comes from hydrogen urate ion.
Humans
The normal concentration range of uric acid (or hydrogen urate ion) in human blood is 25 to 80 mg/L for men and 15 to 60 mg/L for women (but see below for slightly different values). An individual can have serum values as high as 96 mg/L and not have gout. In humans, about 70% of daily uric acid disposal occurs via the kidneys, and in 5–25% of humans, impaired renal (kidney) excretion leads to hyperuricemia. Normal excretion of uric acid in the urine is 270 to 360 mg per day (concentration of 270 to 360 mg/L if one litre of urine is produced per day – higher than the solubility of uric acid because it is in the form of dissolved acid urates), roughly 1% as much as the daily excretion of urea.
Dogs
The Dalmatian has a genetic defect in uric acid uptake by the liver and kidneys, resulting in decreased conversion to allantoin, so this breed excretes uric acid, and not allantoin, in the urine.
Birds, reptiles and desert-dwelling mammals
In birds and reptiles, and in some desert-dwelling mammals (such as the kangaroo rat), uric acid also is the end product of purine metabolism, but it is excreted in feces as a dry mass. This involves a complex metabolic pathway that is energetically costly in comparison to processing of other nitrogenous wastes such as urea (from the urea cycle) or ammonia, but has the advantages of reducing water loss and preventing dehydration.
Invertebrates
Platynereis dumerilii, a marine polychaete worm, uses uric acid as a sexual pheromone. The female of the species releases uric acid into the water during mating, which induces males to release sperm.
Bacteria
In the human gut, roughly one-fifth of bacterial species, drawn from four of the six major phyla, can metabolize uric acid. This metabolism is anaerobic and involves as-yet-uncharacterized ammonia lyase, peptidase, carbamoyl transferase, and oxidoreductase enzymes. The result is that uric acid is converted into xanthine, or into lactate and short-chain fatty acids such as acetate and butyrate. Radioisotope studies suggest that about one-third of uric acid elimination in healthy people occurs in the gut, rising to roughly two-thirds in those with kidney disease. In mouse models, such bacteria compensate for the loss of uricase, leading researchers to raise the possibility "that antibiotics targeting anaerobic bacteria, which would ablate gut bacteria, increase the risk for developing gout in humans".
Genetics
Although foods such as meat and seafood can elevate serum urate levels, genetic variation is a much greater contributor to high serum urate. A proportion of people have mutations in the urate transport proteins responsible for the excretion of uric acid by the kidneys. Variants of a number of genes, linked to serum urate, have so far been identified: SLC2A9; ABCG2; SLC17A1; SLC22A11; SLC22A12; SLC16A9; GCKR; LRRC16A; and PDZK1. GLUT9, encoded by the SLC2A9 gene, is known to transport both uric acid and fructose.
Myogenic hyperuricemia, a result of the purine nucleotide cycle running when ATP reserves in muscle cells are low, is a common pathophysiologic feature of glycogenoses such as GSD-III, a metabolic myopathy that impairs ATP (energy) production in muscle cells. In these metabolic myopathies, myogenic hyperuricemia is exercise-induced; inosine, hypoxanthine, and uric acid increase in plasma after exercise and decrease over hours with rest. Excess AMP (adenosine monophosphate) is converted into uric acid.
AMP → IMP → Inosine → Hypoxanthine → Xanthine → Uric Acid
Clinical significance and research
In human blood plasma, the reference range of uric acid is typically 3.4–7.2 mg per 100 mL (200–430 μmol/L) for men, and 2.4–6.1 mg per 100 mL (140–360 μmol/L) for women. Uric acid concentrations in blood plasma above and below the normal range are known as, respectively, hyperuricemia and hypouricemia. Likewise, uric acid concentrations in urine above and below normal are known as hyperuricosuria and hypouricosuria. Uric acid levels in saliva may be associated with blood uric acid levels.
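The two unit systems in these reference ranges are related by the molar mass of uric acid (about 168.11 g/mol for C5H4N4O3); a small sketch of the conversion:

```python
# Converting uric acid concentrations from mg per 100 mL to umol/L,
# using the molar mass of uric acid (C5H4N4O3), ~168.11 g/mol.

M_URIC_ACID = 168.11  # g/mol

def mg_per_dl_to_umol_per_l(mg_per_dl):
    mg_per_l = mg_per_dl * 10           # mg/100 mL -> mg/L
    mmol_per_l = mg_per_l / M_URIC_ACID # mg/L -> mmol/L
    return mmol_per_l * 1000            # mmol/L -> umol/L

print(round(mg_per_dl_to_umol_per_l(3.4)))  # ~202, matching the ~200 umol/L bound
print(round(mg_per_dl_to_umol_per_l(7.2)))  # ~428, matching the ~430 umol/L bound
```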
High uric acid
Hyperuricemia (high levels of uric acid), which induces gout, has various potential origins:
Diet may be a factor. High intake of dietary purine, high-fructose corn syrup, and sucrose can increase levels of uric acid.
Serum uric acid can be elevated by reduced excretion via the kidneys.
Fasting or rapid weight loss can temporarily elevate uric acid levels.
Certain drugs, such as thiazide diuretics, can increase blood uric acid levels by interfering with renal clearance.
Tumor lysis syndrome, a metabolic complication of certain cancers or chemotherapy, due to nucleobase and potassium release into the plasma.
Pseudohypoxia (disrupted NADH/NAD+ ratio) caused by diabetic hyperglycemia and excessive alcohol consumption.
Gout
A 2011 survey in the United States indicated that 3.9% of the population had gout, whereas 21.4% had hyperuricemia without having symptoms.
Excess blood uric acid (serum urate) can induce gout, a painful condition resulting from needle-like crystals of uric acid termed monosodium urate crystals precipitating in joints, capillaries, skin, and other tissues. Gout can occur where serum uric acid levels are as low as 6 mg per 100 mL (357 μmol/L), but an individual can have serum values as high as 9.6 mg per 100 mL (565 μmol/L) and not have gout.
In humans, purines are metabolized into uric acid, which is then excreted in the urine. Consumption of large amounts of some types of purine-rich foods, particularly meat and seafood, increases gout risk. Purine-rich foods include liver, kidney, and sweetbreads, and certain types of seafood, including anchovies, herring, sardines, mussels, scallops, trout, haddock, mackerel, and tuna. Moderate intake of purine-rich vegetables, however, is not associated with an increased risk of gout.
One treatment for gout in the 19th century was administration of lithium salts; lithium urate is more soluble. Today, inflammation during attacks is more commonly treated with NSAIDs, colchicine, or corticosteroids, and urate levels are managed with allopurinol. Allopurinol, which weakly inhibits xanthine oxidase, is an analog of hypoxanthine that is hydroxylated by xanthine oxidoreductase at the 2-position to give oxipurinol.
Tumor lysis syndrome
Tumor lysis syndrome, an emergency condition that may result from blood cancers, produces high uric acid levels in blood when tumor cells release their contents into the blood, either spontaneously or following chemotherapy. Tumor lysis syndrome may lead to acute kidney injury when uric acid crystals are deposited in the kidneys. Treatment includes hyperhydration to dilute and excrete uric acid via urine, rasburicase to reduce levels of poorly soluble uric acid in blood, or allopurinol to inhibit purine catabolism from adding to uric acid levels.
Lesch–Nyhan syndrome
Lesch–Nyhan syndrome, a rare inherited disorder, is also associated with high serum uric acid levels. Spasticity, involuntary movement, and cognitive retardation as well as manifestations of gout are seen in this syndrome.
Cardiovascular disease
Hyperuricemia is associated with an increase in risk factors for cardiovascular disease. It is also possible that high levels of uric acid may have a causal role in the development of atherosclerotic cardiovascular disease, but this is controversial and the data are conflicting.
Uric acid stone formation
Kidney stones can form through deposits of sodium urate microcrystals.
Saturation levels of uric acid in blood may result in one form of kidney stones when the urate crystallizes in the kidney. These uric acid stones are radiolucent, so do not appear on an abdominal plain X-ray. Uric acid crystals can also promote the formation of calcium oxalate stones, acting as "seed crystals".
Diabetes
Hyperuricemia is associated with components of metabolic syndrome, including in children.
Low uric acid
Low uric acid (hypouricemia) can have numerous causes. Low dietary zinc intakes cause lower uric acid levels. This effect can be even more pronounced in women taking oral contraceptive medication. Sevelamer, a drug indicated for prevention of hyperphosphataemia in people with chronic kidney failure, can significantly reduce serum uric acid.
Multiple sclerosis
A meta-analysis of 10 case-control studies found that serum uric acid levels in patients with multiple sclerosis were significantly lower than those of healthy controls, suggesting that serum uric acid could serve as a diagnostic biomarker for multiple sclerosis.
Normalizing low uric acid
Correcting low or deficient zinc levels can help elevate serum uric acid.
| Physical sciences | Alkaloids | Chemistry |
31742 | https://en.wikipedia.org/wiki/Unicode | Unicode | Unicode, formally The Unicode Standard, is a text encoding standard maintained by the Unicode Consortium designed to support the use of text in all of the world's writing systems that can be digitized. Version 16.0 of the standard defines 154,998 characters and 168 scripts used in various ordinary, literary, academic, and technical contexts.
Many common characters, including numerals, punctuation, and other symbols, are unified within the standard and are not treated as specific to any given writing system. Unicode encodes 3790 emoji, with the continued development thereof conducted by the Consortium as a part of the standard. Moreover, the widespread adoption of Unicode was in large part responsible for the initial popularization of emoji outside of Japan. Unicode is ultimately capable of encoding more than 1.1 million characters.
Unicode has largely supplanted the previous environment of a myriad of incompatible character sets, each used within different locales and on different computer architectures. Unicode is used to encode the vast majority of text on the Internet, including most web pages, and relevant Unicode support has become a common consideration in contemporary software development.
The Unicode character repertoire is synchronized with ISO/IEC 10646, each being code-for-code identical with one another. However, The Unicode Standard is more than just a repertoire within which characters are assigned. To aid developers and designers, the standard also provides charts and reference data, as well as annexes explaining concepts germane to various scripts, providing guidance for their implementation. Topics covered by these annexes include character normalization, character composition and decomposition, collation, and directionality.
Unicode text is processed and stored as binary data using one of several encodings, which define how to translate the standard's abstracted codes for characters into sequences of bytes. The Unicode Standard itself defines three encodings: UTF-8, UTF-16, and UTF-32, though several others exist. Of these, UTF-8 is the most widely used by a large margin, in part due to its backwards-compatibility with ASCII.
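As a concrete illustration of the three encoding forms and of UTF-8's ASCII compatibility, the following snippet (Python is used here purely for demonstration) encodes a two-character string containing one ASCII character and one character outside ASCII:

```python
# Encoding the same text in the three encoding forms defined by the standard.
s = "A\u20ac"  # U+0041 LATIN CAPITAL LETTER A and U+20AC EURO SIGN

print(s.encode("utf-8"))      # b'A\xe2\x82\xac': 'A' is one byte, as in ASCII
print(s.encode("utf-16-be"))  # two bytes per character for these code points
print(s.encode("utf-32-be"))  # always four bytes per code point
```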
Origin and development
Unicode was originally designed with the intent of transcending limitations present in all text encodings designed up to that point: each encoding was relied upon for use in its own context, but with no particular expectation of compatibility with any other. Indeed, any two encodings chosen were often totally unworkable when used together, with text encoded in one interpreted as garbage characters by the other. Most encodings had only been designed to facilitate interoperation between a handful of scripts—often primarily between a given script and Latin characters—not between a large number of scripts, and not with all of the scripts supported being treated in a consistent manner.
The philosophy that underpins Unicode seeks to encode the underlying characters—graphemes and grapheme-like units—rather than graphical distinctions considered mere variant glyphs thereof, that are instead best handled by the typeface, through the use of markup, or by some other means. In particularly complex cases, such as the treatment of orthographical variants in Han characters, there is considerable disagreement regarding which differences justify their own encodings, and which are only graphical variants of other characters.
At the most abstract level, Unicode assigns a unique number called a code point to each character. Many issues of visual representation—including size, shape, and style—are intended to be up to the discretion of the software actually rendering the text, such as a web browser or word processor. However, partially with the intent of encouraging rapid adoption, the simplicity of this original model has become somewhat more elaborate over time, and various pragmatic concessions have been made over the course of the standard's development.
The first 256 code points mirror the ISO/IEC 8859-1 standard, with the intent of trivializing the conversion of text already written in Western European scripts. To preserve the distinctions made by different legacy encodings, therefore allowing for conversion between them and Unicode without any loss of information, many characters nearly identical to others, in both appearance and intended function, were given distinct code points. For example, the Halfwidth and Fullwidth Forms block encompasses a full semantic duplicate of the Latin alphabet, because legacy CJK encodings contained both "fullwidth" (matching the width of CJK characters) and "halfwidth" (matching ordinary Latin script) characters.
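Compatibility duplicates of this kind can be folded back onto their ordinary counterparts by Unicode normalization; a short demonstration using NFKC, shown here with Python's standard unicodedata module as one readily available implementation:

```python
# NFKC normalization maps compatibility characters, such as the fullwidth
# Latin letters, to their canonical ASCII counterparts.
import unicodedata

fullwidth_a = "\uFF21"  # FULLWIDTH LATIN CAPITAL LETTER A
print(unicodedata.normalize("NFKC", fullwidth_a))          # 'A'
print(unicodedata.normalize("NFKC", fullwidth_a) == "A")   # True
```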
The Unicode Bulldog Award is given to people deemed to be influential in Unicode's development, with recipients including Tatsuo Kobayashi, Thomas Milo, Roozbeh Pournader, Ken Lunde, and Michael Everson.
History
The origins of Unicode can be traced back to the 1980s, to a group of individuals with connections to Xerox's Character Code Standard (XCCS). In 1987, Xerox employee Joe Becker, along with Apple employees Lee Collins and Mark Davis, started investigating the practicalities of creating a universal character set. With additional input from Peter Fenwick and Dave Opstad, Becker published a draft proposal for an "international/multilingual text character encoding system in August 1988, tentatively called Unicode". He explained that "the name 'Unicode' is intended to suggest a unique, unified, universal encoding".
In this document, entitled Unicode 88, Becker outlined a scheme using 16-bit characters:
Unicode is intended to address the need for a workable, reliable world text encoding. Unicode could be roughly described as "wide-body ASCII" that has been stretched to 16 bits to encompass the characters of all the world's living languages. In a properly engineered design, 16 bits per character are more than sufficient for this purpose.
This design decision was made based on the assumption that only scripts and characters in "modern" use would require encoding:
Unicode gives higher priority to ensuring utility for the future than to preserving past antiquities. Unicode aims in the first instance at the characters published in the modern text (e.g. in the union of all newspapers and magazines printed in the world in 1988), whose number is undoubtedly far below 2¹⁴ = 16,384. Beyond those modern-use characters, all others may be defined to be obsolete or rare; these are better candidates for private-use registration than for congesting the public list of generally useful Unicode.
In early 1989, the Unicode working group expanded to include Ken Whistler and Mike Kernaghan of Metaphor, Karen Smith-Yoshimura and Joan Aliprand of Research Libraries Group, and Glenn Wright of Sun Microsystems. By 1990, Michel Suignard and Asmus Freytag of Microsoft and Rick McGowan of NeXT had also joined the group. By the end of 1990, most of the work of remapping existing standards had been completed, and a final review draft of Unicode was ready.
The Unicode Consortium was incorporated in California on 3 January 1991, and the first volume of The Unicode Standard was published that October. The second volume, now adding Han ideographs, was published in June 1992.
In 1996, a surrogate character mechanism was implemented in Unicode 2.0, so that Unicode was no longer restricted to 16 bits. This increased the Unicode codespace to over a million code points, which allowed for the encoding of many historic scripts, such as Egyptian hieroglyphs, and thousands of rarely used or obsolete characters that had not been anticipated for inclusion in the standard. Among these characters are various rarely used CJK characters—many mainly being used in proper names, making them far more necessary for a universal encoding than the original Unicode architecture envisioned.
Version 1.0 of Microsoft's TrueType specification, published in 1992, used the name "Apple Unicode" instead of "Unicode" for the Platform ID in the naming table.
Unicode Consortium
The Unicode Consortium is a nonprofit organization that coordinates Unicode's development. Full members include most of the main computer software and hardware companies (and a few others) with any interest in text-processing standards, including Adobe, Apple, Google, IBM, Meta (previously as Facebook), Microsoft, Netflix, and SAP.
Over the years several countries or government agencies have been members of the Unicode Consortium.
The Consortium has the ambitious goal of eventually replacing existing character encoding schemes with Unicode and its standard Unicode Transformation Format (UTF) schemes, as many of the existing schemes are limited in size and scope and are incompatible with multilingual environments.
Scripts covered
Unicode currently covers most major writing systems in use today.
A total of 168 scripts are included in the latest version of Unicode (covering alphabets, abugidas and syllabaries), although there are still scripts that are not yet encoded, particularly those mainly used in historical, liturgical, and academic contexts. Further additions of characters to the already encoded scripts, as well as symbols, in particular for mathematics and music (in the form of notes and rhythmic symbols), also occur.
The Unicode Roadmap Committee (Michael Everson, Rick McGowan, Ken Whistler, V.S. Umamaheswaran) maintains the list of scripts that are candidates or potential candidates for encoding and their tentative code block assignments on the Unicode Roadmap page of the Unicode Consortium website. For some scripts on the Roadmap, such as Jurchen and Khitan large script, encoding proposals have been made and they are working their way through the approval process. For other scripts, such as Numidian and Rongorongo, no proposal has yet been made, and they await agreement on character repertoire and other details from the user communities involved.
Some modern invented scripts which have not yet been included in Unicode (e.g., Tengwar) or which do not qualify for inclusion in Unicode due to lack of real-world use (e.g., Klingon) are listed in the ConScript Unicode Registry, along with unofficial but widely used Private Use Areas code assignments.
There is also a Medieval Unicode Font Initiative focused on special Latin medieval characters. Part of these proposals has been already included in Unicode.
Script Encoding Initiative
The Script Encoding Initiative, a project run by Deborah Anderson at the University of California, Berkeley was founded in 2002 with the goal of funding proposals for scripts not yet encoded in the standard. The project has become a major source of proposed additions to the standard in recent years.
Versions
The Unicode Consortium together with the ISO have developed a shared repertoire following the initial publication of The Unicode Standard: Unicode and the ISO's Universal Coded Character Set (UCS) use identical character names and code points. However, the Unicode versions do differ from their ISO equivalents in two significant ways.
While the UCS is a simple character map, Unicode specifies the rules, algorithms, and properties necessary to achieve interoperability between different platforms and languages. Thus, The Unicode Standard includes more information, covering in-depth topics such as bitwise encoding, collation, and rendering. It also provides a comprehensive catalog of character properties, including those needed for supporting bidirectional text, as well as visual charts and reference data sets to aid implementers. Previously, The Unicode Standard was sold as a print volume containing the complete core specification, standard annexes, and code charts. However, version 5.0, published in 2006, was the last version printed this way. Starting with version 5.2, only the core specification, published as a print-on-demand paperback, may be purchased. The full text, on the other hand, is published as a free PDF on the Unicode website.
A practical reason for this publication method highlights the second significant difference between the UCS and Unicode—the frequency with which updated versions are released and new characters added. The Unicode Standard has regularly released annual expanded versions, occasionally with more than one version released in a calendar year and with rare cases where the scheduled release had to be postponed. For instance, in April 2020, a month after version 13.0 was published, the Unicode Consortium announced they had changed the intended release date for version 14.0, pushing it back six months to September 2021 due to the COVID-19 pandemic.
Unicode 16.0, the latest version, was released on 10 September 2024. It added 5,185 characters and seven new scripts: Garay, Gurung Khema, Kirat Rai, Ol Onal, Sunuwar, Todhri, and Tulu-Tigalari.
Thus far, numerous versions of The Unicode Standard have been published. Update versions, which do not include any changes to character repertoire, are signified by the third number (e.g., "version 4.0.1").
Projected versions
The Unicode Consortium normally releases a new version of The Unicode Standard once a year. Version 17.0, the next major version, is projected to include 4,301 new unified CJK characters.
Architecture and terminology
Codespace and code points
The Unicode Standard defines a codespace: a sequence of integers called code points in the range from 0 to 1,114,111, notated according to the standard as U+0000–U+10FFFF. The codespace is a systematic, architecture-independent representation of The Unicode Standard; actual text is processed as binary data via one of several Unicode encodings, such as UTF-8.
In this normative notation, the two-character prefix U+ always precedes a written code point, and the code points themselves are written as hexadecimal numbers. At least four hexadecimal digits are always written, with leading zeros prepended as needed. For example, the code point U+0024 (the dollar sign) is padded with two leading zeros, but the five-digit code point U+1F600 (an emoji) is not padded.
There are a total of 2^20 + (2^16 − 2^11) = 1,112,064 valid code points within the codespace. (This number arises from the limitations of the UTF-16 character encoding, which can encode the 2^16 code points in the range U+0000 through U+FFFF except for the 2^11 code points in the range U+D800 through U+DFFF, which are used as surrogate pairs to encode the 2^20 code points in the range U+10000 through U+10FFFF.)
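This arithmetic can be checked directly; a minimal Python sketch:

    # Count the valid Unicode code points: all 17 planes minus the surrogate range.
    total = 2**20 + (2**16 - 2**11)    # supplementary planes + (BMP minus surrogates)
    assert total == 0x110000 - 0x800   # equivalent formulation
    print(total)                       # 1112064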
Code planes and blocks
The Unicode codespace is divided into 17 planes, numbered 0 to 16. Plane 0 is the Basic Multilingual Plane (BMP), and contains the most commonly used characters. All code points in the BMP are accessed as a single code unit in UTF-16 encoding and can be encoded in one, two or three bytes in UTF-8. Code points in planes 1 through 16 (the supplementary planes) are accessed as surrogate pairs in UTF-16 and encoded in four bytes in UTF-8.
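For illustration, a small Python sketch showing how many code units a few example characters occupy in each encoding (BMP characters need one UTF-16 unit and one to three UTF-8 bytes; supplementary-plane characters need a surrogate pair and four UTF-8 bytes):

    for ch in ("A", "\u00E9", "\u20AC", "\U0001D11E"):  # A, é, €, 𝄞 (U+1D11E)
        utf8_bytes = len(ch.encode("utf-8"))
        utf16_units = len(ch.encode("utf-16-be")) // 2  # "-be" avoids an implicit BOM
        print(f"U+{ord(ch):04X}: {utf8_bytes} UTF-8 byte(s), {utf16_units} UTF-16 unit(s)")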
Within each plane, characters are allocated within named blocks of related characters. The size of a block is always a multiple of 16, and is often a multiple of 128, but is otherwise arbitrary. Characters required for a given script may be spread out over several different, potentially disjunct blocks within the codespace.
General Category property
Each code point is assigned a classification, listed as the code point's General Category property. Here, at the uppermost level code points are categorized as one of Letter, Mark, Number, Punctuation, Symbol, Separator, or Other. Under each category, each code point is then further subcategorized. In most cases, other properties must be used to adequately describe all the characteristics of any given code point.
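In Python, for example, the standard unicodedata module exposes this property; the two-letter codes pair the top-level category with a subcategory (a brief illustrative sketch):

    import unicodedata

    # "Lu" = Letter, uppercase; "Nd" = Number, decimal digit; "Mn" = Mark, nonspacing; etc.
    for ch in ("A", "a", "5", ",", "\u20AC", " ", "\u0301"):
        print(f"U+{ord(ch):04X} -> {unicodedata.category(ch)}")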
Code points in the range U+D800–U+DBFF (1,024 code points) are known as high-surrogate code points, and code points in the range U+DC00–U+DFFF (1,024 code points) are known as low-surrogate code points. A high-surrogate code point followed by a low-surrogate code point forms a surrogate pair in UTF-16 in order to represent code points greater than U+FFFF. In principle, these code points cannot otherwise be used, though in practice this rule is often ignored, especially when not using UTF-16.
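The surrogate-pair arithmetic is simple; a minimal Python sketch, using U+1F600 as an example:

    # Split a supplementary-plane code point into its UTF-16 surrogate pair.
    def surrogate_pair(cp):
        assert 0x10000 <= cp <= 0x10FFFF
        offset = cp - 0x10000            # a 20-bit value
        high = 0xD800 + (offset >> 10)   # top 10 bits select the high surrogate
        low = 0xDC00 + (offset & 0x3FF)  # bottom 10 bits select the low surrogate
        return high, low

    print([hex(u) for u in surrogate_pair(0x1F600)])  # ['0xd83d', '0xde00']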
A small set of code points are guaranteed never to be assigned to characters, although third parties may make independent use of them at their discretion. There are 66 of these noncharacters: U+FDD0–U+FDEF and the last two code points in each of the 17 planes (e.g. U+FFFE, U+FFFF, U+1FFFE, U+1FFFF, ..., U+10FFFE, U+10FFFF). The set of noncharacters is stable, and no new noncharacters will ever be defined. Like surrogates, the rule that these cannot be used is often ignored, although the operation of the byte order mark assumes that U+FFFE will never be the first code point in a text. The exclusion of surrogates and noncharacters leaves 1,111,998 code points available for use.
Private-use code points are considered to be assigned, but they intentionally have no interpretation specified by The Unicode Standard such that any interchange of such code points requires an independent agreement between the sender and receiver as to their interpretation. There are three private-use areas in the Unicode codespace:
Private Use Area: U+E000–U+F8FF (6,400 characters),
Supplementary Private Use Area-A: U+F0000–U+FFFFD (65,534 characters),
Supplementary Private Use Area-B: U+100000–U+10FFFD (65,534 characters).
Graphic characters are those defined by The Unicode Standard to have particular semantics, either having a visible glyph shape or representing a visible space. As of Unicode 16.0, there are 154,826 graphic characters.
Format characters are characters that do not have a visible appearance but may have an effect on the appearance or behavior of neighboring characters. For example, U+200C ZERO WIDTH NON-JOINER and U+200D ZERO WIDTH JOINER may be used to change the default shaping behavior of adjacent characters (e.g. to inhibit ligatures or request ligature formation). There are 172 format characters in Unicode 16.0.
65 code points, the ranges U+0000–U+001F and U+007F–U+009F, are reserved as control codes, corresponding to the C0 and C1 control codes as defined in ISO/IEC 6429. Of these, U+0009 (tab), U+000A (line feed), and U+000D (carriage return) are widely used in texts using Unicode. In a phenomenon known as mojibake, the C1 code points are improperly decoded according to the Windows-1252 codepage, previously widely used in Western European contexts.
Together, graphic, format, control code, and private use characters are collectively referred to as assigned characters. Reserved code points are those code points that are valid and available for use, but have not yet been assigned; as of Unicode 15.1, these still make up the large majority of the codespace.
Abstract characters
The set of graphic and format characters defined by Unicode does not correspond directly to the repertoire of abstract characters representable under Unicode. Unicode encodes characters by associating an abstract character with a particular code point. However, not all abstract characters are encoded as a single Unicode character, and some abstract characters may be represented in Unicode by a sequence of two or more characters. For example, a Latin small letter "i" with an ogonek, a dot above, and an acute accent, which is required in Lithuanian, is represented by the character sequence U+012F LATIN SMALL LETTER I WITH OGONEK, U+0307 COMBINING DOT ABOVE, and U+0301 COMBINING ACUTE ACCENT. Unicode maintains a list of uniquely named character sequences for abstract characters that are not directly encoded in Unicode.
All assigned characters have a unique and immutable name by which they are identified. This immutability has been guaranteed since version 2.0 of The Unicode Standard by its Name Stability policy. In cases where a name is seriously defective and misleading, or has a serious typographical error, a formal alias may be defined that applications are encouraged to use in place of the official character name. For example, U+A015 YI SYLLABLE WU has the formal alias YI SYLLABLE ITERATION MARK, and U+FE18 PRESENTATION FORM FOR VERTICAL RIGHT WHITE LENTICULAR BRAKCET (sic) has the formal alias PRESENTATION FORM FOR VERTICAL RIGHT WHITE LENTICULAR BRACKET.
Ready-made versus composite characters
Unicode includes a mechanism for modifying characters that greatly extends the supported repertoire of glyphs. This covers the use of combining diacritical marks that may be added after the base character by the user. Multiple combining diacritics may be simultaneously applied to the same character. Unicode also contains precomposed versions of most letter/diacritic combinations in normal use. These make the conversion to and from legacy encodings simpler, and allow applications to use Unicode as an internal text format without having to implement combining characters. For example, é can be represented in Unicode as U+0065 LATIN SMALL LETTER E followed by U+0301 COMBINING ACUTE ACCENT, and equivalently as the precomposed character U+00E9 LATIN SMALL LETTER E WITH ACUTE. Thus, users often have multiple equivalent ways of encoding the same character. The mechanism of canonical equivalence within The Unicode Standard ensures the practical interchangeability of these equivalent encodings.
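This equivalence can be observed with any normalization-aware library; a minimal Python sketch using the é example above:

    import unicodedata

    decomposed = "e\u0301"   # U+0065 followed by U+0301 COMBINING ACUTE ACCENT
    precomposed = "\u00E9"   # U+00E9 LATIN SMALL LETTER E WITH ACUTE
    assert decomposed != precomposed  # the raw code point sequences differ
    assert unicodedata.normalize("NFC", decomposed) == precomposed
    assert unicodedata.normalize("NFD", precomposed) == decomposed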
An example of this arises with the Korean alphabet Hangul: Unicode provides a mechanism for composing Hangul syllables from their individual Hangul Jamo subcomponents. However, it also provides combinations of precomposed syllables made from the most common jamo.
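Because the precomposed syllables are laid out algorithmically, a syllable's code point can be computed directly from its jamo indices. A short Python sketch of the standard Hangul composition arithmetic, with example syllables chosen for illustration:

    # Constants from the Unicode Hangul syllable composition algorithm.
    S_BASE, L_BASE, V_BASE, T_BASE = 0xAC00, 0x1100, 0x1161, 0x11A7
    V_COUNT, T_COUNT = 21, 28

    def compose(lead, vowel, trail=None):
        l = ord(lead) - L_BASE                     # leading consonant index
        v = ord(vowel) - V_BASE                    # vowel index
        t = (ord(trail) - T_BASE) if trail else 0  # trailing consonant index (0 = none)
        return chr(S_BASE + (l * V_COUNT + v) * T_COUNT + t)

    print(compose("\u1100", "\u1161"))            # 가 (U+AC00)
    print(compose("\u1112", "\u1161", "\u11AB"))  # 한 (U+D55C)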
CJK characters presently only have codes for uncomposable radicals and precomposed forms. Most Han characters have either been intentionally composed from, or reconstructed as compositions of, simpler orthographic elements called radicals, so in principle Unicode could have enabled their composition as it did with Hangul. While this could have greatly reduced the number of required code points, as well as allowing the algorithmic synthesis of many arbitrary new characters, the complexities of character etymologies and the post-hoc nature of radical systems add immense complexity to the proposal. Indeed, attempts to design CJK encodings on the basis of composing radicals have been met with difficulties resulting from the reality that Chinese characters do not decompose as simply or as regularly as Hangul does.
The CJK Radicals Supplement block is assigned to the range U+2E80–U+2EFF, and the Kangxi radicals are assigned to U+2F00–U+2FDF. Ideographic description sequences use the characters in the range U+2FF0–U+2FFB, but The Unicode Standard warns against using these characters as an alternate representation for characters encoded elsewhere.
Ligatures
Many scripts, including Arabic and Devanāgarī, have special orthographic rules that require certain combinations of letterforms to be combined into special ligature forms. The rules governing ligature formation can be quite complex, requiring special script-shaping technologies such as ACE (Arabic Calligraphic Engine by DecoType in the 1980s and used to generate all the Arabic examples in the printed editions of The Unicode Standard), which became the proof of concept for OpenType (by Adobe and Microsoft), Graphite (by SIL International), or AAT (by Apple).
Instructions are also embedded in fonts to tell the operating system how to properly output different character sequences. A simple solution to the placement of combining marks or diacritics is assigning the marks a width of zero and placing the glyph itself to the left or right of the left sidebearing (depending on the direction of the script they are intended to be used with). A mark handled this way will appear over whatever character precedes it, but will not adjust its position relative to the width or height of the base glyph; it may be visually awkward and it may overlap some glyphs. Real stacking is impossible but can be approximated in limited cases (for example, Thai top-combining vowels and tone marks can just be at different heights to start with). Generally, this approach is only effective in monospaced fonts but may be used as a fallback rendering method when more complex methods fail.
Standardized subsets
Several subsets of Unicode are standardized: Microsoft Windows since Windows NT 4.0 supports WGL-4 with 657 characters, which is considered to support all contemporary European languages using the Latin, Greek, or Cyrillic script. Other standardized subsets of Unicode include the Multilingual European Subsets: MES-1 (Latin scripts only; 335 characters), MES-2 (Latin, Greek, and Cyrillic; 1062 characters) and MES-3A & MES-3B (two larger subsets, not shown here). MES-2 includes every character in MES-1 and WGL-4.
The standard DIN 91379 specifies a subset of Unicode letters, special characters, and sequences of letters and diacritic signs to allow the correct representation of names and to simplify data exchange in Europe. This standard supports all of the official languages of all European Union countries, as well as the German minority languages and the official languages of Iceland, Liechtenstein, Norway, and Switzerland. To allow the transliteration of names in other writing systems to the Latin script according to the relevant ISO standards, all necessary combinations of base letters and diacritic signs are provided.
Rendering software that cannot process a Unicode character appropriately often displays it as an open rectangle, or as to indicate the position of the unrecognized character. Some systems have made attempts to provide more information about such characters. Apple's Last Resort font will display a substitute glyph indicating the Unicode range of the character, and the SIL International's Unicode fallback font will display a box showing the hexadecimal scalar value of the character.
Mapping and encodings
Several mechanisms have been specified for storing a series of code points as a series of bytes.
Unicode defines two mapping methods: the Unicode Transformation Format (UTF) encodings, and the Universal Coded Character Set (UCS) encodings. An encoding maps (possibly a subset of) the range of Unicode code points to sequences of values in some fixed-size range, termed code units. All UTF encodings map code points to a unique sequence of bytes. The numbers in the names of the encodings indicate the number of bits per code unit (for UTF encodings) or the number of bytes per code unit (for UCS encodings and UTF-1). UTF-8 and UTF-16 are the most commonly used encodings. UCS-2 is an obsolete subset of UTF-16; UCS-4 and UTF-32 are functionally equivalent.
UTF encodings include:
UTF-8, which uses one to four 8-bit units per code point, and has maximal compatibility with ASCII
UTF-16, which uses one 16-bit unit per code point below U+10000, and a surrogate pair of two 16-bit units per code point in the range U+10000 to U+10FFFF
UTF-32, which uses one 32-bit unit per code point
UTF-EBCDIC, not specified as part of The Unicode Standard, which uses one to five 8-bit units per code point, intended to maximize compatibility with EBCDIC
UTF-8 uses one to four 8-bit units (bytes) per code point and, being compact for Latin scripts and ASCII-compatible, provides the de facto standard encoding for the interchange of Unicode text. It is used by FreeBSD and most recent Linux distributions as a direct replacement for legacy encodings in general text handling.
The UCS-2 and UTF-16 encodings specify the Unicode byte order mark (BOM) for use at the beginnings of text files, which may be used for byte-order detection (or byte endianness detection). The BOM, encoded as U+FEFF, has the important property of unambiguity on byte reorder, regardless of the Unicode encoding used; U+FFFE (the result of byte-swapping U+FEFF) does not equate to a legal character, and U+FEFF in places other than the beginning of text conveys the zero-width non-break space.
The same character converted to UTF-8 becomes the byte sequence EF BB BF. The Unicode Standard allows that the BOM "can serve as a signature for UTF-8 encoded text where the character set is unmarked". Some software developers have adopted it for other encodings, including UTF-8, in an attempt to distinguish UTF-8 from local 8-bit code pages. However, RFC 3629, the UTF-8 standard, recommends that byte order marks be forbidden in protocols using UTF-8, but discusses the cases where this may not be possible. In addition, the large restriction on possible patterns in UTF-8 (for instance there cannot be any lone bytes with the high bit set) means that it should be possible to distinguish UTF-8 from other character encodings without relying on the BOM.
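A byte-order-mark sniffer is short to write; the following Python sketch is illustrative only and covers just the encodings discussed above (real-world detection must also cope with files that carry no BOM at all):

    # Map a leading BOM to a codec name; longer signatures are tested first,
    # because the UTF-32-LE BOM begins with the UTF-16-LE BOM.
    def sniff_bom(data):
        signatures = [
            (b"\x00\x00\xfe\xff", "utf-32-be"),
            (b"\xff\xfe\x00\x00", "utf-32-le"),
            (b"\xef\xbb\xbf", "utf-8-sig"),
            (b"\xfe\xff", "utf-16-be"),
            (b"\xff\xfe", "utf-16-le"),
        ]
        for sig, name in signatures:
            if data.startswith(sig):
                return name
        return None

    print(sniff_bom("hi".encode("utf-8-sig")))  # utf-8-sig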
In UTF-32 and UCS-4, one 32-bit code unit serves as a fairly direct representation of any character's code point (although the endianness, which varies across different platforms, affects how the code unit manifests as a byte sequence). In the other encodings, each code point may be represented by a variable number of code units. UTF-32 is widely used as an internal representation of text in programs (as opposed to stored or transmitted text), since every Unix operating system that uses the gcc compilers to generate software uses it as the standard "wide character" encoding. Some programming languages, such as Seed7, use UTF-32 as an internal representation for strings and characters. Recent versions of the Python programming language (beginning with 2.2) may also be configured to use UTF-32 as the internal representation for Unicode strings, spreading the encoding's use in high-level software.
Punycode, another encoding form, enables the encoding of Unicode strings into the limited character set supported by the ASCII-based Domain Name System (DNS). The encoding is used as part of IDNA, which is a system enabling the use of Internationalized Domain Names in all scripts that are supported by Unicode. Earlier and now historical proposals include UTF-5 and UTF-6.
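Python, for example, ships codecs for both Punycode and IDNA; a brief illustration using the conventional "bücher" example:

    # Punycode encodes the non-ASCII payload; the IDNA codec wraps it per
    # domain label with the "xn--" ACE prefix.
    print("bücher".encode("punycode"))      # b'bcher-kva'
    print("bücher.example".encode("idna"))  # b'xn--bcher-kva.example'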
GB18030 is another encoding form for Unicode, from the Standardization Administration of China. It is the official character set of the People's Republic of China (PRC). BOCU-1 and SCSU are Unicode compression schemes. The April Fools' Day RFC of 2005 specified two parody UTF encodings, UTF-9 and UTF-18.
Adoption
Unicode, in the form of UTF-8, has been the most common encoding for the World Wide Web since 2008. It has near-universal adoption, and much of the non-UTF-8 content is found in other Unicode encodings, e.g. UTF-16. UTF-8 accounts for, on average, 98.3% of all web pages (and 983 of the top 1,000 highest-ranked web pages). Although many pages only use ASCII characters to display content, UTF-8 was designed with 8-bit ASCII as a subset and almost no websites now declare their encoding to only be ASCII instead of UTF-8. Over a third of the languages tracked have 100% UTF-8 use.
All internet protocols maintained by the Internet Engineering Task Force, e.g. FTP, have required support for UTF-8 since the publication of RFC 2277 in 1998, which specified that all IETF protocols "MUST be able to use the UTF-8 charset".
Operating systems
Unicode has become the dominant scheme for the internal processing and storage of text. Although a great deal of text is still stored in legacy encodings, Unicode is used almost exclusively for building new information processing systems. Early adopters tended to use UCS-2 (the fixed-length two-byte obsolete precursor to UTF-16) and later moved to UTF-16 (the variable-length current standard), as this was the least disruptive way to add support for non-BMP characters. The best known such system is Windows NT (and its descendants, 2000, XP, Vista, 7, 8, 10, and 11), which uses UTF-16 as the sole internal character encoding. The Java and .NET bytecode environments, macOS, and KDE also use it for internal representation. Partial support for Unicode can be installed on Windows 9x through the Microsoft Layer for Unicode.
UTF-8 (originally developed for Plan 9) has become the main storage encoding on most Unix-like operating systems (though others are also used by some libraries) because it is a relatively easy replacement for traditional extended ASCII character sets. UTF-8 is also the most common Unicode encoding used in HTML documents on the World Wide Web.
Multilingual text-rendering engines which use Unicode include Uniscribe and DirectWrite for Microsoft Windows, ATSUI and Core Text for macOS, and Pango for GTK+ and the GNOME desktop.
Input methods
Because keyboard layouts cannot have simple key combinations for all characters, several operating systems provide alternative input methods that allow access to the entire repertoire.
ISO/IEC 14755, which standardises methods for entering Unicode characters from their code points, specifies several methods. There is the Basic method, where a beginning sequence is followed by the hexadecimal representation of the code point and the ending sequence. There is also a screen-selection entry method specified, where the characters are listed in a table on a screen, such as with a character map program.
Online tools for finding the code point for a known character include Unicode Lookup by Jonathan Hedley and Shapecatcher by Benjamin Milde. In Unicode Lookup, one enters a search key (e.g. "fractions"), and a list of corresponding characters with their code points is returned. In Shapecatcher, based on Shape context, one draws the character in a box and a list of characters approximating the drawing, with their code points, is returned.
Email
MIME defines two different mechanisms for encoding non-ASCII characters in email, depending on whether the characters are in email headers (such as the "Subject:"), or in the text body of the message; in both cases, the original character set is identified as well as a transfer encoding. For email transmission of Unicode, the UTF-8 character set and the Base64 or the Quoted-printable transfer encoding are recommended, depending on whether much of the message consists of ASCII characters. The details of the two different mechanisms are specified in the MIME standards and generally are hidden from users of email software.
The IETF has defined a framework for internationalized email using UTF-8, and has updated several protocols in accordance with that framework.
The adoption of Unicode in email has been very slow. Some East Asian text is still encoded in encodings such as ISO-2022, and some devices, such as mobile phones, still cannot correctly handle Unicode data. Support has been improving, however. Many major free mail providers such as Yahoo! Mail, Gmail, and Outlook.com support it.
Web
All W3C recommendations have used Unicode as their document character set since HTML 4.0. Web browsers have supported Unicode, especially UTF-8, for many years. There used to be display problems resulting primarily from font-related issues; e.g. version 6 and older of Microsoft Internet Explorer did not render many code points unless explicitly told to use a font that contains them.
Although syntax rules may affect the order in which characters are allowed to appear, XML (including XHTML) documents, by definition, comprise characters from most of the Unicode code points, with the exception of:
most of the C0 control codes,
the permanently unassigned code points D800–DFFF,
FFFE or FFFF.
HTML characters manifest either directly as bytes according to the document's encoding, if the encoding supports them, or users may write them as numeric character references based on the character's Unicode code point. For example, the references Δ, Й, ק, م, ๗, あ, 叶, 葉, and 말 (or the same numeric values expressed in hexadecimal, with &#x as the prefix) should display on all browsers as Δ, Й, ק ,م, ๗, あ, 叶, 葉, and 말.
When specifying URIs, for example as URLs in HTTP requests, non-ASCII characters must be percent-encoded.
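For example, Python's urllib.parse.quote percent-escapes the UTF-8 bytes of non-ASCII characters (a brief illustration):

    from urllib.parse import quote

    # Each non-ASCII character is UTF-8 encoded, then each byte is %-escaped.
    print(quote("/wiki/Καφές"))  # /wiki/%CE%9A%CE%B1%CF%86%CE%AD%CF%82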
Fonts
Unicode is not in principle concerned with fonts per se, seeing them as implementation choices. Any given character may have many allographs, from the more common bold, italic and base letterforms to complex decorative styles. A font is "Unicode compliant" if the glyphs in the font can be accessed using code points defined in The Unicode Standard. The standard does not specify a minimum number of characters that must be included in the font; some fonts have quite a small repertoire.
Free and retail fonts based on Unicode are widely available, since TrueType and OpenType support Unicode (and Web Open Font Format (WOFF and WOFF2) is based on those). These font formats map Unicode code points to glyphs, but OpenType and TrueType font files are restricted to 65,535 glyphs. Collection files provide a "gap mode" mechanism for overcoming this limit in a single font file. (Each font within the collection still has the 65,535 limit, however.) A TrueType Collection file would typically have a file extension of ".ttc".
Thousands of fonts exist on the market, but fewer than a dozen fonts—sometimes described as "pan-Unicode" fonts—attempt to support the majority of Unicode's character repertoire. Instead, Unicode-based fonts typically focus on supporting only basic ASCII and particular scripts or sets of characters or symbols. Several reasons justify this approach: applications and documents rarely need to render characters from more than one or two writing systems; fonts tend to demand resources in computing environments; and operating systems and applications show increasing intelligence in regard to obtaining glyph information from separate font files as needed, i.e., font substitution. Furthermore, designing a consistent set of rendering instructions for tens of thousands of glyphs constitutes a monumental task; such a venture passes the point of diminishing returns for most typefaces.
Newlines
Unicode partially addresses the newline problem that occurs when trying to read a text file on different platforms. Unicode defines a large number of characters that conforming applications should recognize as line terminators.
In terms of the newline, Unicode introduced U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR. This was an attempt to provide a Unicode solution to encoding paragraphs and lines semantically, potentially replacing all of the various platform solutions. In doing so, Unicode does provide a way around the historical platform-dependent solutions. Nonetheless, few if any Unicode solutions have adopted these Unicode line and paragraph separators as the sole canonical line ending characters. However, a common approach to solving this issue is through newline normalization. This is achieved with the Cocoa text system in macOS and also with W3C XML and HTML recommendations. In this approach, every possible newline character is converted internally to a common newline (which one does not really matter since it is an internal operation just for rendering). In other words, the text system can correctly treat the character as a newline, regardless of the input's actual encoding.
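As an illustration of such normalization, Python's str.splitlines recognizes the full set of Unicode line boundaries, so rejoining its output converts every variant to one internal newline (a minimal sketch):

    # splitlines() splits on CR, LF, CR+LF, NEL, LINE SEPARATOR,
    # PARAGRAPH SEPARATOR, and the other Unicode line boundaries.
    def normalize_newlines(text, nl="\n"):
        return nl.join(text.splitlines())

    sample = "one\r\ntwo\u2028three\u2029four\rfive"
    print(normalize_newlines(sample))  # five items, each on its own '\n' line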
Issues
Character unification
Han unification
The Ideographic Research Group (IRG) is tasked with advising the Consortium and ISO regarding Han unification, or Unihan, especially the further addition of CJK unified and compatibility ideographs to the repertoire. The IRG is composed of experts from each region that has historically used Chinese characters. However, despite the deliberation within the committee, Han unification has consistently been one of the most contested aspects of The Unicode Standard since the genesis of the project.
Existing character set standards such as the Japanese JIS X 0208 (encoded by Shift JIS) defined unification criteria, meaning rules for determining when a variant Chinese character is to be considered a handwriting/font difference (and thus unified), versus a spelling difference (to be encoded separately). Unicode's character model for CJK characters was based on the unification criteria used by JIS X 0208, as well as those developed by the Association for a Common Chinese Code in China.
Due to the standard's principle of encoding semantic instead of stylistic variants, Unicode has received criticism for not assigning code points to certain rare and archaic kanji variants, possibly complicating processing of ancient and uncommon Japanese names. Since it places particular emphasis on Chinese, Japanese and Korean sharing many characters in common, Han unification is also sometimes perceived as treating the three as the same thing. Regional differences in the expected forms of characters, in terms of typographical conventions and curricula for handwriting, do not always fall along language boundaries: although Hong Kong and Taiwan both write Chinese languages using Traditional Chinese characters, the preferred forms of characters differ between Hong Kong and Taiwan in some cases.
Less-frequently-used alternative encodings exist, often predating Unicode, with character models differing from this paradigm, aimed at preserving the various stylistic differences between regional and/or nonstandard character forms. One example is the TRON Code favored by some users for handling historical Japanese text, though not widely adopted among the Japanese public. Another is the CCCII encoding adopted by library systems in Hong Kong, Taiwan and the United States. These have their own drawbacks in general use, leading to the Big5 encoding (introduced in 1984, four years after CCCII) having become more common than CCCII outside of library systems. Although work at Apple based on Research Libraries Group's CJK Thesaurus, which was used to maintain the EACC variant of CCCII, was one of the direct predecessors of Unicode's Unihan set, Unicode adopted the JIS-style unification model.
The earliest version of Unicode had a repertoire of fewer than 21,000 Han characters, largely limited to those in relatively common modern usage. As of version 16.0, the standard now encodes more than 97,000 Han characters, and work is continuing to add thousands more—largely historical and dialectal variant characters used throughout the Sinosphere.
Modern typefaces provide a means to address some of the practical issues in depicting unified Han characters with various regional graphical representations. The 'locl' OpenType table allows a renderer to select a different glyph for each code point based on the text locale. The Unicode variation sequences can also provide in-text annotations for a desired glyph selection; this requires registration of the specific variant in the Ideographic Variation Database.
Italic or cursive characters in Cyrillic
Where the appropriate glyphs for characters in the same script differ only in their italic forms, Unicode has generally unified the characters, as can be seen by comparing the italic glyphs of a set of seven Cyrillic characters as they typically appear in Russian, traditional Bulgarian, Macedonian, and Serbian texts; the differences must instead be displayed through smart font technology or by manually changing fonts. The same OpenType 'locl' technique is used.
Localised case pairs
For use in the Turkish alphabet and Azeri alphabet, Unicode includes a separate dotless lowercase ı and a dotted uppercase İ. However, the usual ASCII letters are used for the lowercase dotted i and the uppercase dotless I, matching how they are handled in the earlier ISO 8859-9. As such, case-insensitive comparisons for those languages have to use different rules than case-insensitive comparisons for other languages using the Latin script. This can have security implications if, for example, sanitization code or access control relies on case-insensitive comparison.
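A short Python illustration of the pitfall; the built-in case mappings are locale-independent and so cannot honor the Turkish pairings:

    print("ı".upper())   # I   (dotless lowercase maps to the ASCII capital)
    print("İ".lower())   # i followed by U+0307 COMBINING DOT ABOVE
    print(len("İ".lower()))            # 2 (the result is two code points)
    print("TITLE".lower() == "title")  # True, but wrong for Turkish ("tıtle" expected)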
By contrast, the Icelandic eth (ð), the barred D (đ) and the retroflex D (ɖ), which usually look the same in uppercase (Đ), are given the opposite treatment, and encoded separately in both letter-cases (in contrast to the earlier ISO 6937, which unifies the uppercase forms). Although it allows for case-insensitive comparison without needing to know the language of the text, this approach also has issues, requiring security measures relating to homoglyph attacks.
Diacritics on lowercase
Whether a lowercase letter i is expected to retain its tittle when a diacritic is applied also depends on local conventions.
Security
Unicode has a large number of homoglyphs, many of which look very similar or identical to ASCII letters. Substitution of these can make an identifier or URL that looks correct, but directs to a different location than expected. Additionally, homoglyphs can also be used for manipulating the output of natural language processing (NLP) systems. Mitigation requires disallowing these characters, displaying them differently, or requiring that they resolve to the same identifier; all of this is complicated due to the huge and constantly changing set of characters.
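One common first-line mitigation is to flag identifiers that mix scripts. The Python sketch below is a deliberately crude approximation: it infers each letter's script from the first word of its Unicode character name, rather than from the real Script property in the Unicode Character Database:

    import unicodedata

    # Collect the (approximate) scripts used by the letters of an identifier.
    def scripts(ident):
        return {unicodedata.name(ch).split()[0] for ch in ident if ch.isalpha()}

    print(scripts("paypal"))            # {'LATIN'}
    print(scripts("p\u0430yp\u0430l"))  # {'LATIN', 'CYRILLIC'}: U+0430 looks like 'a'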
A security advisory was released in 2021 by two researchers, one from the University of Cambridge and the other from the University of Edinburgh, in which they assert that the BiDi marks can be used to make large sections of code do something different from what they appear to do. The problem was named "Trojan Source". In response, code editors started highlighting marks to indicate forced text-direction changes.
The UTF-8 and UTF-16 encodings do not accept all possible sequences of code units. Implementations vary in what they do when reading an invalid sequence, which has led to security bugs.
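A Python illustration: a lone surrogate serialized as UTF-8 is one such invalid sequence, and what a reader does with it depends entirely on its error policy:

    bad = b"\xed\xa0\x80"  # would decode to the unpaired surrogate U+D800
    try:
        bad.decode("utf-8")  # the strict (default) policy rejects it
    except UnicodeDecodeError as err:
        print("rejected:", err.reason)
    print(bad.decode("utf-8", errors="replace"))  # substitutes U+FFFD instead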
Mapping to legacy character sets
Unicode was designed to provide code-point-by-code-point round-trip format conversion to and from any preexisting character encodings, so that text files in older character sets can be converted to Unicode and then back and get back the same file, without employing context-dependent interpretation. That has meant that inconsistent legacy architectures, such as combining diacritics and precomposed characters, both exist in Unicode, giving more than one method of representing some text. This is most pronounced in the three different encoding forms for Korean Hangul. Since version 3.0, any precomposed characters that can be represented by a combined sequence of already existing characters can no longer be added to the standard to preserve interoperability between software using different versions of Unicode.
Injective mappings must be provided between characters in existing legacy character sets and characters in Unicode to facilitate conversion to Unicode and allow interoperability with legacy software. Lack of consistency in various mappings between earlier Japanese encodings such as Shift-JIS or EUC-JP and Unicode led to round-trip format conversion mismatches, particularly the mapping of the character JIS X 0208 '~' (1-33, WAVE DASH), heavily used in legacy database data, to either U+FF5E FULLWIDTH TILDE (in Microsoft Windows) or U+301C WAVE DASH (other vendors).
Some Japanese computer programmers objected to Unicode because it requires them to separate the use of U+005C REVERSE SOLIDUS (backslash) and U+00A5 YEN SIGN, which was mapped to 0x5C in JIS X 0201, and a lot of legacy code exists with this usage. (This encoding also replaces tilde '~' 0x7E with macron '¯', now 0xAF.) The separation of these characters exists in ISO 8859-1, from long before Unicode.
Indic scripts
Indic scripts such as Tamil and Devanagari are each allocated only 128 code points, matching the ISCII standard. The correct rendering of Unicode Indic text requires transforming the stored logical order characters into visual order and the forming of ligatures (also known as conjuncts) out of components. Some local scholars argued in favor of assignments of Unicode code points to these ligatures, going against the practice for other writing systems, though Unicode contains some Arabic and other ligatures for backward compatibility purposes only. Encoding of any new ligatures in Unicode will not happen, in part, because the set of ligatures is font-dependent, and Unicode is an encoding independent of font variations. The same kind of issue arose for the Tibetan script in 2003 when the Standardization Administration of China proposed encoding 956 precomposed Tibetan syllables, but these were rejected for encoding by the relevant ISO committee (ISO/IEC JTC 1/SC 2).
Thai alphabet support has been criticized for its ordering of Thai characters. The vowels เ, แ, โ, ใ, ไ that are written to the left of the preceding consonant are in visual order instead of phonetic order, unlike the Unicode representations of other Indic scripts. This complication is due to Unicode inheriting the Thai Industrial Standard 620, which worked in the same way, and was the way in which Thai had always been written on keyboards. This ordering problem complicates the Unicode collation process slightly, requiring table lookups to reorder Thai characters for collation. Even if Unicode had adopted encoding according to spoken order, it would still be problematic to collate words in dictionary order. For example, the word แสดง ("perform") starts with the consonant cluster "สด" (with an inherent vowel for the consonant "ส"); the vowel แ-, in spoken order, would come after the ด, but in a dictionary, the word is collated as it is written, with the vowel following the ส.
Combining characters
Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron (◌̄) and acute accent (◌́), but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic languages, will often be placed incorrectly. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded, the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType ('gsub'), or AAT technologies for advanced rendering features.
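The two encodings of ḗ mentioned above compare unequal as raw code point sequences; only after normalization do they match, as this Python sketch shows:

    import unicodedata

    one = "\u1E17"           # precomposed e with macron and acute
    two = "e\u0304\u0301"    # e + combining macron + combining acute
    print(one == two)                                # False
    print(unicodedata.normalize("NFD", one) == two)  # True
    print(unicodedata.normalize("NFC", two) == one)  # True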
Anomalies
The Unicode Standard has imposed rules intended to guarantee stability. Depending on the strictness of a rule, a change can be prohibited or allowed. For example, a "name" given to a code point cannot and will not change. But a "script" property is more flexible, by Unicode's own rules. In version 2.0, Unicode changed many code point "names" from version 1. At the same moment, Unicode stated that, thenceforth, an assigned name to a code point would never change. This implies that when mistakes are published, these mistakes cannot be corrected, even if they are trivial (as happened in one instance with the spelling "BRAKCET" for "bracket" in a character name). In 2006 a list of anomalies in character names was first published, and, as of June 2021, there were 104 characters with identified issues, for example:
U+034F COMBINING GRAPHEME JOINER: Does not join graphemes.
U+2118 SCRIPT CAPITAL P: This is a small letter. The capital is U+1D4AB MATHEMATICAL SCRIPT CAPITAL P.
U+A015 YI SYLLABLE WU: This is not a Yi syllable, but a Yi iteration mark.
U+FE18 PRESENTATION FORM FOR VERTICAL RIGHT WHITE LENTICULAR BRAKCET: bracket is spelled incorrectly. (Spelling errors are resolved by using Unicode alias names.)
While Unicode defines the script designator (name) to be "Phags_Pa", in that script's character names, a hyphen is added: U+A840 PHAGS-PA LETTER KA. This, however, is not an anomaly, but the rule: hyphens are replaced by underscores in script designators.
| Technology | Software development: General | null |
31743 | https://en.wikipedia.org/wiki/Uranium | Uranium | Uranium is a chemical element with the symbol U and atomic number 92. It is a silvery-grey metal in the actinide series of the periodic table. A uranium atom has 92 protons and 92 electrons, of which 6 are valence electrons. Uranium radioactively decays, usually by emitting an alpha particle. The half-life of this decay varies between 159,200 and 4.5 billion years for different isotopes, making them useful for dating the age of the Earth. The most common isotopes in natural uranium are uranium-238 (which has 146 neutrons and accounts for over 99% of uranium on Earth) and uranium-235 (which has 143 neutrons). Uranium has the highest atomic weight of the primordially occurring elements. Its density is about 70% higher than that of lead and slightly lower than that of gold or tungsten. It occurs naturally in low concentrations of a few parts per million in soil, rock and water, and is commercially extracted from uranium-bearing minerals such as uraninite.
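The dating applications mentioned above rest on the exponential decay law; a small Python sketch of the arithmetic, using commonly cited half-life values for the two main isotopes:

    # Fraction of an isotope remaining after time t: N/N0 = 2**(-t / half_life).
    def remaining_fraction(t_years, half_life_years):
        return 2 ** (-t_years / half_life_years)

    # Over the age of the Earth (~4.5 billion years), about half of the original
    # uranium-238 remains, while most uranium-235 has decayed away.
    print(remaining_fraction(4.5e9, 4.468e9))  # ~0.50 (uranium-238)
    print(remaining_fraction(4.5e9, 7.04e8))   # ~0.012 (uranium-235)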
Many contemporary uses of uranium exploit its unique nuclear properties. Uranium is used in nuclear power plants and nuclear weapons because it is the only naturally occurring element with a fissile isotope – uranium-235 – present in non-trace amounts. However, because of the low abundance of uranium-235 in natural uranium (which is overwhelmingly uranium-238), uranium needs to undergo enrichment so that enough uranium-235 is present. Uranium-238 is fissionable by fast neutrons and is fertile, meaning it can be transmuted to fissile plutonium-239 in a nuclear reactor. Another fissile isotope, uranium-233, can be produced from natural thorium and is studied for future industrial use in nuclear technology. Uranium-238 has a small probability for spontaneous fission or even induced fission with fast neutrons; uranium-235, and to a lesser degree uranium-233, have a much higher fission cross-section for slow neutrons. In sufficient concentration, these isotopes maintain a sustained nuclear chain reaction. This generates the heat in nuclear power reactors and produces the fissile material for nuclear weapons. The primary civilian use for uranium harnesses the heat energy to produce electricity. Depleted uranium (²³⁸U) is used in kinetic energy penetrators and armor plating.
The 1789 discovery of uranium in the mineral pitchblende is credited to Martin Heinrich Klaproth, who named the new element after the recently discovered planet Uranus. Eugène-Melchior Péligot was the first person to isolate the metal, and its radioactive properties were discovered in 1896 by Henri Becquerel. Research by Otto Hahn, Lise Meitner, Enrico Fermi and others, such as J. Robert Oppenheimer, starting in 1934, led to its use as a fuel in the nuclear power industry and in Little Boy, the first nuclear weapon used in war. An ensuing arms race during the Cold War between the United States and the Soviet Union produced tens of thousands of nuclear weapons that used uranium metal and uranium-derived plutonium-239. Dismantling of these weapons and related nuclear facilities is carried out within various nuclear disarmament programs and costs billions of dollars. Weapon-grade uranium obtained from nuclear weapons is diluted with uranium-238 and reused as fuel for nuclear reactors. Spent nuclear fuel forms radioactive waste, which mostly consists of uranium-238 and poses a significant health threat and environmental impact.
Characteristics
Uranium is a silvery white, weakly radioactive metal. It has a Mohs hardness of 6, sufficient to scratch glass and roughly equal to that of titanium, rhodium, manganese and niobium. It is malleable, ductile, slightly paramagnetic, strongly electropositive and a poor electrical conductor. Uranium metal has a very high density of 19.1 g/cm³, denser than lead (11.3 g/cm³), but slightly less dense than tungsten and gold (19.3 g/cm³).
Uranium metal reacts with almost all non-metallic elements (except noble gases) and their compounds, with reactivity increasing with temperature. Hydrochloric and nitric acids dissolve uranium, but non-oxidizing acids other than hydrochloric acid attack the element very slowly. When finely divided, it can react with cold water; in air, uranium metal becomes coated with a dark layer of uranium oxide. Uranium in ores is extracted chemically and converted into uranium dioxide or other chemical forms usable in industry.
Uranium-235 was the first isotope that was found to be fissile. Other naturally occurring isotopes are fissionable, but not fissile. On bombardment with slow neutrons, uranium-235 most of the time splits into two smaller nuclei, releasing nuclear binding energy and more neutrons. If enough of these neutrons are absorbed by other uranium-235 nuclei, a nuclear chain reaction occurs that results in a burst of heat or (in some circumstances) an explosion. In a nuclear reactor, such a chain reaction is slowed and controlled by a neutron poison, absorbing some of the free neutrons. Such neutron absorbent materials are often part of reactor control rods (see nuclear reactor physics for a description of this process of reactor control).
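A toy model, illustrative only and with arbitrary numbers, of why the effective neutron multiplication factor k determines whether a chain reaction dies out, holds steady, or grows:

    # Each generation, the neutron count is multiplied by k, the average number
    # of fission neutrons that go on to cause another fission.
    def neutron_population(k, generations, n0=1.0):
        return n0 * k ** generations

    for k in (0.95, 1.0, 1.05):
        print(k, neutron_population(k, 100))
    # k < 1: dies out (~0.006); k = 1: steady, as in a controlled reactor;
    # k > 1: grows exponentially (~131.5 after 100 generations)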
As little as 15 lb (7 kg) of uranium-235 can be used to make an atomic bomb. The nuclear weapon detonated over Hiroshima, called Little Boy, relied on uranium fission. However, the first nuclear bomb (the Gadget used at Trinity) and the bomb that was detonated over Nagasaki (Fat Man) were both plutonium bombs.
Uranium metal has three allotropic forms:
α (orthorhombic) stable up to 668 °C. Orthorhombic, space group No. 63, Cmcm, lattice parameters a = 285.4 pm, b = 587 pm, c = 495.5 pm.
β (tetragonal) stable from 668 to 775 °C. Tetragonal, space group P42/mnm, P42nm, or P4n2, lattice parameters a = 565.6 pm, b = c = 1075.9 pm.
γ (body-centered cubic) from 775 °C to melting point—this is the most malleable and ductile state. Body-centered cubic, lattice parameter a = 352.4 pm.
Applications
Military
The major application of uranium in the military sector is in high-density penetrators. This ammunition consists of depleted uranium (DU) alloyed with 1–2% other elements, such as titanium or molybdenum. At high impact speed, the density, hardness, and pyrophoricity of the projectile enable the destruction of heavily armored targets. Tank armor and other removable vehicle armor can also be hardened with depleted uranium plates. The use of depleted uranium became politically and environmentally contentious after the use of such munitions by the US, UK and other countries during wars in the Persian Gulf and the Balkans raised questions concerning uranium compounds left in the soil (see Gulf War syndrome).
Depleted uranium is also used as a shielding material in some containers used to store and transport radioactive materials. While the metal itself is radioactive, its high density makes it more effective than lead in halting radiation from strong sources such as radium. Other uses of depleted uranium include counterweights for aircraft control surfaces, as ballast for missile re-entry vehicles and as a shielding material. Due to its high density, this material is found in inertial guidance systems and in gyroscopic compasses. Depleted uranium is preferred over similarly dense metals due to its ability to be easily machined and cast as well as its relatively low cost. The main risk of exposure to depleted uranium is chemical poisoning by uranium oxide rather than radioactivity (uranium being only a weak alpha emitter).
During the later stages of World War II, the entire Cold War, and to a lesser extent afterwards, uranium-235 has been used as the fissile explosive material to produce nuclear weapons. Initially, two major types of fission bombs were built: a relatively simple device that uses uranium-235 and a more complicated mechanism that uses plutonium-239 derived from uranium-238. Later, a much more complicated and far more powerful type of fission/fusion bomb (thermonuclear weapon) was built, that uses a plutonium-based device to cause a mixture of tritium and deuterium to undergo nuclear fusion. Such bombs are jacketed in a non-fissile (unenriched) uranium case, and they derive more than half their power from the fission of this material by fast neutrons from the nuclear fusion process.
Civilian
The main use of uranium in the civilian sector is to fuel nuclear power plants. One kilogram of uranium-235 can theoretically produce about 20 terajoules of energy (2×10¹³ joules), assuming complete fission; as much energy as 1.5 million kilograms (1,500 tonnes) of coal.
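As a consistency check of the figures quoted here, a one-line Python calculation using the values as stated above:

    # 20 TJ per kg of uranium-235 versus 1.5 million kg of coal implies a coal
    # energy density of roughly 13 MJ/kg, plausible for lower-grade coal.
    print(20e12 / 1.5e6 / 1e6, "MJ/kg")  # ~13.3 MJ/kg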
Commercial nuclear power plants use fuel that is typically enriched to around 3% uranium-235. The CANDU and Magnox designs are the only commercial reactors capable of using unenriched uranium fuel. Fuel used for United States Navy reactors is typically highly enriched in uranium-235 (the exact values are classified). In a breeder reactor, uranium-238 can also be converted into plutonium-239 through the following reaction:
²³⁸U + n → ²³⁹U + γ; ²³⁹U → ²³⁹Np + β⁻; ²³⁹Np → ²³⁹Pu + β⁻
Before (and, occasionally, after) the discovery of radioactivity, uranium was primarily used in small amounts for yellow glass and pottery glazes, such as uranium glass and in Fiestaware.
The discovery and isolation of radium in uranium ore (pitchblende) by Marie Curie sparked the development of uranium mining to extract the radium, which was used to make glow-in-the-dark paints for clock and aircraft dials. This left a prodigious quantity of uranium as a waste product, since it takes three tonnes of uranium to extract one gram of radium. This waste product was diverted to the glazing industry, making uranium glazes very inexpensive and abundant. Besides the pottery glazes, uranium tile glazes accounted for the bulk of the use, including common bathroom and kitchen tiles which can be produced in green, yellow, mauve, black, blue, red and other colors.
Uranium was also used in photographic chemicals (especially uranium nitrate as a toner), in lamp filaments for stage lighting bulbs, to improve the appearance of dentures, and in the leather and wood industries for stains and dyes. Uranium salts are mordants of silk or wool. Uranyl acetate and uranyl formate are used as electron-dense "stains" in transmission electron microscopy, to increase the contrast of biological specimens in ultrathin sections and in negative staining of viruses, isolated cell organelles and macromolecules.
The discovery of the radioactivity of uranium ushered in additional scientific and practical uses of the element. The long half-life of uranium-238 (4.47 billion years) makes it well-suited for use in estimating the age of the earliest igneous rocks and for other types of radiometric dating, including uranium–thorium dating, uranium–lead dating and uranium–uranium dating. Uranium metal is used for X-ray targets in the making of high-energy X-rays.
History
Pre-discovery use
The use of pitchblende, uranium in its natural oxide form, dates back to at least the year 79 AD, when it was used in the Roman Empire to add a yellow color to ceramic glazes. Yellow glass with 1% uranium oxide was found in a Roman villa on Cape Posillipo in the Gulf of Naples, Italy, by R. T. Gunther of the University of Oxford in 1912. Starting in the late Middle Ages, pitchblende was extracted from the Habsburg silver mines in Joachimsthal, Bohemia (now Jáchymov in the Czech Republic) in the Ore Mountains, and was used as a coloring agent in the local glassmaking industry. In the early 19th century, the world's only known sources of uranium ore were these mines.
Discovery
The discovery of the element is credited to the German chemist Martin Heinrich Klaproth. While he was working in his experimental laboratory in Berlin in 1789, Klaproth was able to precipitate a yellow compound (likely sodium diuranate) by dissolving pitchblende in nitric acid and neutralizing the solution with sodium hydroxide. Klaproth assumed the yellow substance was the oxide of a yet-undiscovered element and heated it with charcoal to obtain a black powder, which he thought was the newly discovered metal itself (in fact, that powder was an oxide of uranium). He named the newly discovered element after the planet Uranus (named after the primordial Greek god of the sky), which had been discovered eight years earlier by William Herschel.
In 1841, Eugène-Melchior Péligot, Professor of Analytical Chemistry at the Conservatoire National des Arts et Métiers (National Conservatory of Arts and Crafts) in Paris, isolated the first sample of uranium metal by heating uranium tetrachloride with potassium.
Henri Becquerel discovered radioactivity by using uranium in 1896. Becquerel made the discovery in Paris by leaving a sample of a uranium salt, K₂UO₂(SO₄)₂ (potassium uranyl sulfate), on top of an unexposed photographic plate in a drawer and noting that the plate had become "fogged". He determined that a form of invisible light or rays emitted by uranium had exposed the plate.
During World War I, when the Central Powers suffered a shortage of molybdenum to make artillery gun barrels and high-speed tool steels, they routinely used ferrouranium alloy as a substitute, as it presents many of the same physical characteristics as molybdenum. When this practice became known in 1916, the US government requested several prominent universities to research the use of uranium in manufacturing and metalwork. Tools made with these formulas remained in use for several decades, until the Manhattan Project and the Cold War placed a large demand on uranium for fission research and weapon development.
Fission research
A team led by Enrico Fermi in 1934 found that bombarding uranium with neutrons produces beta rays (electrons or positrons from the elements produced; see beta particle). The fission products were at first mistaken for new elements with atomic numbers 93 and 94, which the Dean of the Sapienza University of Rome, Orso Mario Corbino, named ausenium and hesperium, respectively. The experiments leading to the discovery of uranium's ability to fission (break apart) into lighter elements and release binding energy were conducted by Otto Hahn and Fritz Strassmann in Hahn's laboratory in Berlin. Lise Meitner and her nephew, physicist Otto Robert Frisch, published the physical explanation in February 1939 and named the process "nuclear fission". Soon after, Fermi hypothesized that fission of uranium might release enough neutrons to sustain a fission reaction. Confirmation of this hypothesis came in 1939, and later work found that on average about 2.5 neutrons are released by each fission of uranium-235. Fermi urged Alfred O. C. Nier to separate uranium isotopes for determination of the fissile component, and on 29 February 1940, Nier used an instrument he built at the University of Minnesota to separate the world's first uranium-235 sample in the Tate Laboratory. Using Columbia University's cyclotron, John Dunning confirmed the sample to be the isolated fissile material on 1 March. Further work found that the far more common uranium-238 isotope can be transmuted into plutonium, which, like uranium-235, is also fissile by thermal neutrons. These discoveries led numerous countries to begin working on the development of nuclear weapons and nuclear power. Despite fission having been discovered in Germany, the Uranverein ("uranium club"), Germany's wartime project to research nuclear power and weapons, was hampered by limited resources, infighting, the exile or non-involvement of several prominent scientists in the field, and several crucial mistakes, such as failing to account for impurities in available graphite samples, which made graphite appear less suitable as a neutron moderator than it is in reality. Germany's attempts to build a natural uranium / heavy water reactor had not come close to reaching criticality by the time the Americans reached Haigerloch, the site of the last German wartime reactor experiment.
On 2 December 1942, as part of the Manhattan Project, another team led by Enrico Fermi was able to initiate the first artificial self-sustained nuclear chain reaction, Chicago Pile-1. An initial plan using enriched uranium-235 was abandoned because it was not yet available in sufficient quantities. Working in a lab below the stands of Stagg Field at the University of Chicago, the team created the conditions needed for such a reaction by piling together 360 tonnes of graphite, 53 tonnes of uranium oxide, and 5.5 tonnes of uranium metal, most of which was supplied by Westinghouse Lamp Plant in a makeshift production process.
Nuclear weaponry
Two types of atomic bomb were developed by the United States during World War II: a uranium-based device (codenamed "Little Boy") whose fissile material was highly enriched uranium, and a plutonium-based device (see Trinity test and "Fat Man") whose plutonium was derived from uranium-238. Little Boy became the first nuclear weapon used in war when it was detonated over Hiroshima, Japan, on 6 August 1945. Exploding with a yield equivalent to 12,500 tonnes of TNT, the bomb's blast and thermal wave destroyed nearly 50,000 buildings and killed about 75,000 people (see Atomic bombings of Hiroshima and Nagasaki). Initially it was believed that uranium was relatively rare, and that nuclear proliferation could be avoided by simply buying up all known uranium stocks, but within a decade large deposits of it were discovered in many places around the world.
Reactors
The X-10 Graphite Reactor at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, formerly known as the Clinton Pile and X-10 Pile, was the world's second artificial nuclear reactor (after Enrico Fermi's Chicago Pile) and was the first reactor designed and built for continuous operation. Argonne National Laboratory's Experimental Breeder Reactor I, located at the Atomic Energy Commission's National Reactor Testing Station near Arco, Idaho, became the first nuclear reactor to create electricity on 20 December 1951. Initially, four 150-watt light bulbs were lit by the reactor, but improvements eventually enabled it to power the whole facility (later, the town of Arco became the first in the world to have all its electricity come from nuclear power generated by BORAX-III, another reactor designed and operated by Argonne National Laboratory). The world's first commercial scale nuclear power station, Obninsk in the Soviet Union, began generation with its reactor AM-1 on 27 June 1954. Other early nuclear power plants were Calder Hall in England, which began generation on 17 October 1956, and the Shippingport Atomic Power Station in Pennsylvania, which began on 26 May 1958. Nuclear power was used for the first time for propulsion by a submarine, the USS Nautilus, in 1954.
Prehistoric naturally occurring fission
In 1972, French physicist Francis Perrin discovered fifteen ancient and no longer active natural nuclear fission reactors in three separate ore deposits at the Oklo mine in Gabon, Africa, collectively known as the Oklo Fossil Reactors. The ore deposit is 1.7 billion years old; at that time, uranium-235 constituted about 3% of uranium on Earth. This is high enough to permit a sustained chain reaction, if other supporting conditions exist. The capacity of the surrounding sediment to contain the health-threatening nuclear waste products has been cited by the U.S. federal government as supporting evidence for the feasibility of storing spent nuclear fuel at the Yucca Mountain nuclear waste repository.
Contamination and the Cold War legacy
Above-ground nuclear tests by the Soviet Union and the United States in the 1950s and early 1960s and by France into the 1970s and 1980s spread a significant amount of fallout from uranium daughter isotopes around the world. Additional fallout and pollution occurred from several nuclear accidents.
Uranium miners have a higher incidence of cancer. An excess risk of lung cancer among Navajo uranium miners, for example, has been documented and linked to their occupation. The Radiation Exposure Compensation Act, a 1990 law in the US, required $100,000 in "compassion payments" to uranium miners diagnosed with cancer or other respiratory ailments.
During the Cold War between the Soviet Union and the United States, huge stockpiles of uranium were amassed and tens of thousands of nuclear weapons were created using enriched uranium and plutonium made from uranium. After the break-up of the Soviet Union in 1991, an estimated 600 short tons (540 metric tons) of highly enriched weapons grade uranium (enough to make 40,000 nuclear warheads) had been stored in often inadequately guarded facilities in the Russian Federation and several other former Soviet states. Police in Asia, Europe, and South America on at least 16 occasions from 1993 to 2005 have intercepted shipments of smuggled bomb-grade uranium or plutonium, most of which was from ex-Soviet sources. From 1993 to 2005 the Material Protection, Control, and Accounting Program, operated by the federal government of the United States, spent about US$550 million to help safeguard uranium and plutonium stockpiles in Russia. This money was used for improvements and security enhancements at research and storage facilities.
Safety of nuclear facilities in Russia has improved significantly since the stabilization of the political and economic turmoil of the early 1990s. For example, in 1993 there were 29 incidents ranking above level 1 on the International Nuclear Event Scale, and this number dropped to under four per year in 1995–2003. The number of employees receiving annual radiation doses above 20 mSv, which is equivalent to a single full-body CT scan, saw a strong decline around 2000. In November 2015, the Russian government approved a federal program for nuclear and radiation safety for 2016 to 2030 with a budget of 562 billion rubles (ca. 8 billion USD). Its key issue is "the deferred liabilities accumulated during the 70 years of the nuclear industry, particularly during the time of the Soviet Union". About 73% of the budget will be spent on decommissioning aged and obsolete nuclear reactors and nuclear facilities, especially those involved in state defense programs; 20% will go to processing and disposal of nuclear fuel and radioactive waste, and 5% to monitoring and ensuring nuclear and radiation safety.
Occurrence
Uranium is a naturally occurring element found in low levels in all rock, soil, and water. It is the highest-numbered element found naturally in significant quantities on Earth and is almost always found combined with other elements. Uranium is the 48th most abundant element in the Earth's crust. The decay of uranium, thorium, and potassium-40 in Earth's mantle is thought to be the main source of heat that keeps the Earth's outer core in the liquid state and drives mantle convection, which in turn drives plate tectonics.
Uranium's concentration in the Earth's crust is (depending on the reference) 2 to 4 parts per million, or about 40 times as abundant as silver. The Earth's crust from the surface to 25 km (15 mi) down is calculated to contain 10¹⁷ kg (2×10¹⁷ lb) of uranium, while the oceans may contain 10¹³ kg (2×10¹³ lb). The concentration of uranium in soil ranges from 0.7 to 11 parts per million (up to 15 parts per million in farmland soil due to use of phosphate fertilizers), and its concentration in sea water is 3 parts per billion.
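The crustal figure can be cross-checked with a back-of-envelope calculation; in the Python sketch below, the average crustal density and the 3 ppm concentration are illustrative assumptions rather than sourced values:

EARTH_SURFACE_AREA = 5.1e14   # m^2
CRUST_DEPTH = 2.5e4           # m, the top 25 km
ROCK_DENSITY = 2.8e3          # kg/m^3, assumed average for crustal rock
URANIUM_MASS_FRACTION = 3e-6  # ~3 parts per million by mass

rock_mass = EARTH_SURFACE_AREA * CRUST_DEPTH * ROCK_DENSITY  # ~3.6e22 kg
print(f"{rock_mass * URANIUM_MASS_FRACTION:.1e} kg")         # ~1.1e17 kg

The result, on the order of 10¹⁷ kg, agrees with the figure quoted above.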
Uranium is more plentiful than antimony, tin, cadmium, mercury, or silver, and it is about as abundant as arsenic or molybdenum. Uranium is found in hundreds of minerals, including uraninite (the most common uranium ore), carnotite, autunite, uranophane, torbernite, and coffinite. Significant concentrations of uranium occur in some substances such as phosphate rock deposits, and minerals such as lignite, and monazite sands in uranium-rich ores (it is recovered commercially from sources with as little as 0.1% uranium).
Origin
Like all elements with atomic weights higher than that of iron, uranium is only naturally formed by the r-process (rapid neutron capture) in supernovae and neutron star mergers. Primordial thorium and uranium are only produced in the r-process, because the s-process (slow neutron capture) is too slow and cannot pass the gap of instability after bismuth. Besides the two extant primordial uranium isotopes, ²³⁸U and ²³⁵U, the r-process also produced significant quantities of ²³⁶U, which has a shorter half-life and so is an extinct radionuclide, having long since decayed completely to ²³²Th. Further uranium-236 was produced by the decay of ²⁴⁴Pu, accounting for the observed higher-than-expected abundance of thorium and lower-than-expected abundance of uranium. While the natural abundance of uranium has been supplemented by the decay of extinct ²⁴²Pu (half-life 375,000 years) and ²⁴⁷Cm (half-life 16 million years), producing ²³⁸U and ²³⁵U respectively, this occurred to an almost negligible extent due to the shorter half-lives of these parents and their lower production than ²³⁶U and ²⁴⁴Pu, the parents of thorium: the ²⁴⁷Cm/²³⁵U ratio at the formation of the Solar System was on the order of 10⁻⁵.
Biotic and abiotic
Some bacteria, such as Shewanella putrefaciens, Geobacter metallireducens and some strains of Burkholderia fungorum, can use uranium for their growth and convert U(VI) to U(IV). Recent research suggests that this pathway includes reduction of the soluble U(VI) via an intermediate U(V) pentavalent state.
Other organisms, such as the lichen Trapelia involuta or microorganisms such as the bacterium Citrobacter, can absorb concentrations of uranium that are up to 300 times the level of their environment. Citrobacter species absorb uranyl ions when given glycerol phosphate (or other similar organic phosphates). After one day, one gram of bacteria can encrust themselves with nine grams of uranyl phosphate crystals; this creates the possibility that these organisms could be used in bioremediation to decontaminate uranium-polluted water.
The proteobacterium Geobacter has also been shown to bioremediate uranium in ground water. The mycorrhizal fungus Glomus intraradices increases uranium content in the roots of its symbiotic plant.
In nature, uranium(VI) forms highly soluble carbonate complexes at alkaline pH. This increases the mobility and availability of uranium from nuclear wastes to groundwater and soil, posing health hazards. However, it is difficult to precipitate uranium as phosphate in the presence of excess carbonate at alkaline pH. A Sphingomonas sp. strain, BSAR-1, has been found to express a high-activity alkaline phosphatase (PhoK) that has been applied for bioprecipitation of uranium as uranyl phosphate species from alkaline solutions. The precipitation ability was enhanced by overexpressing the PhoK protein in E. coli.
Plants absorb some uranium from soil. Dry-weight concentrations of uranium in plants range from 5 to 60 parts per billion, and ash from burnt wood can have concentrations up to 4 parts per million. Dry-weight concentrations of uranium in food plants are typically lower; about one to two micrograms per day are ingested through the food people eat.
Production and mining
Worldwide production of uranium in 2021 was 48,332 tonnes, of which 21,819 t (45%) was mined in Kazakhstan. Other important uranium mining countries are Namibia (5,753 t), Canada (4,693 t), Australia (4,192 t), Uzbekistan (3,500 t), and Russia (2,635 t).
Uranium ore is mined in several ways: open pit, underground, in-situ leaching, and borehole mining. Low-grade uranium ore typically contains 0.01 to 0.25% uranium oxides, so extensive measures must be employed to extract the metal from its ore. High-grade ores found in Athabasca Basin deposits in Saskatchewan, Canada can average as much as 23% uranium oxides. Uranium ore is crushed and rendered into a fine powder and then leached with either an acid or alkali. The leachate is subjected to one of several sequences of precipitation, solvent extraction, and ion exchange. The resulting mixture, called yellowcake, contains at least 75% uranium oxides U₃O₈. Yellowcake is then calcined to remove impurities from the milling process before refining and conversion.
Commercial-grade uranium can be produced through the reduction of uranium halides with alkali or alkaline earth metals. Uranium metal can also be prepared through electrolysis of KUF₅ or UF₄, dissolved in a molten calcium chloride (CaCl₂) and sodium chloride (NaCl) solution. Very pure uranium is produced through the thermal decomposition of uranium halides on a hot filament.
Resources and reserves
It is estimated that 6.1 million tonnes of uranium exists in ores that are economically viable at US$130 per kg of uranium, while 35 million tonnes are classed as mineral resources (reasonable prospects for eventual economic extraction).
Australia has 28% of the world's known uranium ore reserves and the world's largest single uranium deposit is located at the Olympic Dam Mine in South Australia. There is a significant reserve of uranium in Bakouma, a sub-prefecture in the prefecture of Mbomou in the Central African Republic.
Some uranium also originates from dismantled nuclear weapons. For example, in 1993–2013 Russia supplied the United States with 15,000 tonnes of low-enriched uranium within the Megatons to Megawatts Program.
An additional 4.6 billion tonnes of uranium are estimated to be dissolved in sea water (Japanese scientists in the 1980s showed that extraction of uranium from sea water using ion exchangers was technically feasible). There have been experiments to extract uranium from sea water, but the yield has been low due to the carbonate present in the water. In 2012, ORNL researchers announced the successful development of a new adsorbent material dubbed HiCap, which performs surface retention of solid or gas molecules, atoms or ions and also effectively removes toxic metals from water, according to results verified by researchers at Pacific Northwest National Laboratory.
Supplies
In 2005, ten countries accounted for the majority of the world's concentrated uranium oxides: Canada (27.9%), Australia (22.8%), Kazakhstan (10.5%), Russia (8.0%), Namibia (7.5%), Niger (7.4%), Uzbekistan (5.5%), the United States (2.5%), Argentina (2.1%) and Ukraine (1.9%). In 2008, Kazakhstan was forecast to increase production and become the world's largest supplier of uranium by 2009; Kazakhstan has dominated the world's uranium market since 2010. In 2021, its share was 45.1%, followed by Namibia (11.9%), Canada (9.7%), Australia (8.7%), Uzbekistan (7.2%), Niger (4.7%), Russia (5.5%), China (3.9%), India (1.3%), Ukraine (0.9%), and South Africa (0.8%), with a world total production of 48,332 tonnes. Most uranium was produced not by conventional underground mining of ores (29% of production), but by in situ leaching (66%).
In the late 1960s, UN geologists discovered major uranium deposits and other rare mineral reserves in Somalia. The find was the largest of its kind, with industry experts estimating the deposits at over 25% of the world's then known uranium reserves of 800,000 tons.
The ultimate available supply is believed to be sufficient for at least the next 85 years, though some studies indicate underinvestment in the late twentieth century may produce supply problems in the 21st century.
Uranium deposits seem to be log-normally distributed: there is a 300-fold increase in the amount of uranium recoverable for each tenfold decrease in ore grade.
In other words, there is little high-grade ore and proportionately much more low-grade ore available.
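The quoted relationship implies a power law: a 300-fold tonnage increase per tenfold grade decrease corresponds to an exponent of log₁₀(300) ≈ 2.48. A short Python sketch of this arithmetic (the function name and input factors are illustrative, not from the source):

import math

ELASTICITY = math.log10(300)  # ~2.48, from the 300-fold-per-decade rule

def relative_recoverable(grade_reduction_factor):
    # Relative recoverable uranium when ore grade falls by the given factor
    return grade_reduction_factor ** ELASTICITY

print(relative_recoverable(10))   # 300.0
print(relative_recoverable(100))  # ~90,000 = 300^2, two tenfold drops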
Compounds
Oxidation states and oxides
Oxides
Calcined uranium yellowcake, as produced in many large mills, contains a distribution of uranium oxidation species in various forms ranging from most oxidized to least oxidized. Particles with short residence times in a calciner will generally be less oxidized than those with long retention times or particles recovered in the stack scrubber. Uranium content is usually referenced to U₃O₈, which dates to the days of the Manhattan Project when U₃O₈ was used as an analytical chemistry reporting standard.
Phase relationships in the uranium-oxygen system are complex. The most important oxidation states of uranium are uranium(IV) and uranium(VI), and their two corresponding oxides are, respectively, uranium dioxide (UO₂) and uranium trioxide (UO₃). Other uranium oxides such as uranium monoxide (UO), diuranium pentoxide (U₂O₅), and uranium peroxide (UO₄) also exist.
The most common forms of uranium oxide are triuranium octoxide (U₃O₈) and UO₂. Both oxide forms are solids that have low solubility in water and are relatively stable over a wide range of environmental conditions. Triuranium octoxide is (depending on conditions) the most stable compound of uranium and is the form most commonly found in nature. Uranium dioxide is the form in which uranium is most commonly used as a nuclear reactor fuel. At ambient temperatures, UO₂ will gradually convert to U₃O₈. Because of their stability, uranium oxides are generally considered the preferred chemical form for storage or disposal.
Aqueous chemistry
Salts of many oxidation states of uranium are water-soluble and may be studied in aqueous solutions. The most common ionic forms are U³⁺ (brown-red), U⁴⁺ (green), UO₂⁺ (unstable), and UO₂²⁺ (yellow), for U(III), U(IV), U(V), and U(VI), respectively. A few solid and semi-metallic compounds such as UO and US exist for the formal oxidation state uranium(II), but no simple ions are known to exist in solution for that state. Ions of U³⁺ liberate hydrogen from water and are therefore considered to be highly unstable. The UO₂²⁺ ion represents the uranium(VI) state and is known to form compounds such as uranyl carbonate, uranyl chloride and uranyl sulfate. UO₂²⁺ also forms complexes with various organic chelating agents, the most commonly encountered of which is uranyl acetate.
Unlike the salts of uranyl and other polyatomic uranium-oxide cations, the uranates, salts containing a polyatomic uranium-oxide anion, are generally not water-soluble.
Carbonates
The interactions of carbonate anions with uranium(VI) cause the Pourbaix diagram to change greatly when the medium is changed from water to a carbonate-containing solution. While the vast majority of carbonates are insoluble in water (students are often taught that all carbonates other than those of alkali metals are insoluble in water), uranium carbonates are often soluble in water. This is because a U(VI) cation is able to bind two terminal oxides and three or more carbonates to form anionic complexes.
Effects of pH
The uranium fraction diagrams in the presence of carbonate illustrate this further: when the pH of a uranium(VI) solution increases, the uranium is converted to a hydrated uranium oxide hydroxide and at high pHs it becomes an anionic hydroxide complex.
When carbonate is added, uranium is converted to a series of carbonate complexes if the pH is increased. One effect of these reactions is increased solubility of uranium in the pH range 6 to 8, a fact that has a direct bearing on the long term stability of spent uranium dioxide nuclear fuels.
Hydrides, carbides and nitrides
Uranium metal heated to 250 to 300 °C reacts with hydrogen to form uranium hydride. Even higher temperatures will reversibly remove the hydrogen. This property makes uranium hydrides convenient starting materials to create reactive uranium powder along with various uranium carbide, nitride, and halide compounds. Two crystal modifications of uranium hydride exist: an α form that is obtained at low temperatures and a β form that is created when the formation temperature is above 250 °C.
Uranium carbides and uranium nitrides are both relatively inert semimetallic compounds that are minimally soluble in acids, react with water, and can ignite in air to form U₃O₈. Carbides of uranium include uranium monocarbide (UC), uranium dicarbide (UC₂), and diuranium tricarbide (U₂C₃). Both UC and UC₂ are formed by adding carbon to molten uranium or by exposing the metal to carbon monoxide at high temperatures. Stable below 1800 °C, U₂C₃ is prepared by subjecting a heated mixture of UC and UC₂ to mechanical stress. Uranium nitrides obtained by direct exposure of the metal to nitrogen include uranium mononitride (UN), uranium dinitride (UN₂), and diuranium trinitride (U₂N₃).
Halides
All uranium fluorides are created using uranium tetrafluoride (UF₄); UF₄ itself is prepared by hydrofluorination of uranium dioxide. Reduction of UF₄ with hydrogen at 1000 °C produces uranium trifluoride (UF₃). Under the right conditions of temperature and pressure, the reaction of solid UF₄ with gaseous uranium hexafluoride (UF₆) can form the intermediate fluorides of U₂F₉, U₄F₁₇, and UF₅.
At room temperatures, UF₆ has a high vapor pressure, making it useful in the gaseous diffusion process to separate the rare uranium-235 from the common uranium-238 isotope. This compound can be prepared from uranium dioxide and uranium hydride by the following process:
UO₂ + 4 HF → UF₄ + 2 H₂O (500 °C, endothermic)
UF₄ + F₂ → UF₆ (350 °C, endothermic)
The resulting UF₆, a white solid, is highly reactive (by fluorination), easily sublimes (emitting a vapor that behaves as a nearly ideal gas), and is the most volatile compound of uranium known to exist.
One method of preparing uranium tetrachloride (UCl₄) is to directly combine chlorine with either uranium metal or uranium hydride. The reduction of UCl₄ by hydrogen produces uranium trichloride (UCl₃), while the higher chlorides of uranium are prepared by reaction with additional chlorine. All uranium chlorides react with water and air.
Bromides and iodides of uranium are formed by direct reaction of, respectively, bromine and iodine with uranium or by adding those elements' acids. Known examples include UBr₃, UBr₄, UI₃, and UI₄; UI₅ has never been prepared. Uranium oxyhalides are water-soluble and include UO₂F₂, UOCl₂, UO₂Cl₂, and UO₂Br₂. The stability of the oxyhalides decreases as the atomic weight of the component halide increases.
Isotopes
Uranium, like all elements with an atomic number greater than 82, has no stable isotopes. All isotopes of uranium are radioactive because the strong nuclear force does not prevail over electromagnetic repulsion in nuclides containing more than 82 protons. Nevertheless, the two most stable isotopes, U and U, have half-lives long enough to occur in nature as primordial radionuclides, with measurable quantities having survived since the formation of the Earth. These two nuclides, along with thorium-232, are the only confirmed primordial nuclides heavier than nearly-stable bismuth-209.
Natural uranium consists of three major isotopes: uranium-238 (99.28% natural abundance), uranium-235 (0.71%), and uranium-234 (0.0054%). There are also five other trace isotopes: uranium-240, a decay product of plutonium-244; uranium-239, which is formed when ²³⁸U undergoes spontaneous fission, releasing neutrons that are captured by another ²³⁸U atom; uranium-237, which is formed when ²³⁸U captures a neutron but emits two more, which then decays to neptunium-237; uranium-236, which occurs in trace quantities due to neutron capture on ²³⁵U and as a decay product of plutonium-244; and finally, uranium-233, which is formed in the decay chain of neptunium-237. Additionally, uranium-232 would be produced by the double beta decay of natural thorium-232, though this energetically possible process has never been observed.
Uranium-238 is the most stable isotope of uranium, with a half-life of about 4.47 billion years, roughly the age of the Earth. Uranium-238 is predominantly an alpha emitter, decaying to thorium-234. It ultimately decays through the uranium series, which has 18 members, into lead-206. Uranium-238 is not fissile, but is a fertile isotope, because after neutron activation it can be converted to plutonium-239, another fissile isotope. Indeed, the ²³⁸U nucleus can absorb one neutron to produce the radioactive isotope uranium-239. ²³⁹U decays by beta emission to neptunium-239, also a beta-emitter, that decays in its turn, within a few days, into plutonium-239. ²³⁹Pu was used as fissile material in the first atomic bomb detonated in the "Trinity test" on 16 July 1945 in New Mexico.
Uranium-235 has a half-life of about 704 million years; it is the next most stable uranium isotope after ²³⁸U and is also predominantly an alpha emitter, decaying to thorium-231. Uranium-235 is important for both nuclear reactors and nuclear weapons, because it is the only uranium isotope existing in nature on Earth in significant amounts that is fissile. This means that it can be split into two or three fragments (fission products) by thermal neutrons. The decay chain of ²³⁵U, which is called the actinium series, has 15 members and eventually decays into lead-207. The constant rates of decay in these decay series make the comparison of the ratios of parent to daughter elements useful in radiometric dating.
Uranium-236 has a half-life of about 23 million years and is not found in significant quantities in nature. The half-life of uranium-236 is too short for it to be primordial, though it has been identified as an extinct progenitor of its alpha decay daughter, thorium-232. Uranium-236 occurs in spent nuclear fuel when neutron capture on ²³⁵U does not induce fission, or as a decay product of plutonium-240. Uranium-236 is not fertile, as three more neutron captures are required to produce fissile ²³⁹Pu, and is not itself fissile; as such, it is considered long-lived radioactive waste.
Uranium-234 is a member of the uranium series and occurs in equilibrium with its progenitor, ²³⁸U; it undergoes alpha decay with a half-life of 245,500 years and decays to lead-206 through a series of relatively short-lived isotopes.
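The equilibrium statement can be checked numerically: in secular equilibrium the activities of ²³⁸U and ²³⁴U are equal, so their atom ratio equals the ratio of their half-lives. A one-line Python check:

print(f"{2.455e5 / 4.468e9 * 100:.4f}%")  # half-life ratio as a percentage

The result, about 0.0055%, matches the natural abundance of uranium-234 quoted above.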
Uranium-233 undergoes alpha decay with a half-life of 160,000 years and, like ²³⁵U, is fissile. It can be bred from thorium-232 via neutron bombardment, usually in a nuclear reactor; this process is known as the thorium fuel cycle. Owing to the fissility of ²³³U and the greater natural abundance of thorium (three times that of uranium), ²³³U has been investigated for use as nuclear fuel as a possible alternative to ²³⁵U and ²³⁹Pu, though it is not in widespread use. The decay chain of uranium-233 forms part of the neptunium series and ends at nearly-stable bismuth-209 (half-life 2.01×10¹⁹ years) and stable thallium-205.
Uranium-232 is an alpha emitter with a half-life of 68.9 years. This isotope is produced as a byproduct in the production of ²³³U and is considered a nuisance, as it is not fissile and decays through short-lived alpha and gamma emitters such as ²⁰⁸Tl. It is also expected that thorium-232 should be able to undergo double beta decay, which would produce uranium-232, but this has not yet been observed experimentally.
All isotopes from ²³²U to ²³⁶U inclusive have minor cluster decay branches, and all these bar ²³³U, in addition to ²³⁸U, have minor spontaneous fission branches; the greatest branching ratio for spontaneous fission is about 5×10⁻⁵% for ²³⁸U, or about one in every two million decays. The shorter-lived trace isotopes ²³⁷U and ²³⁹U exclusively undergo beta decay, with respective half-lives of 6.752 days and 23.45 minutes.
In total, 28 isotopes of uranium have been identified, ranging in mass number from 214 to 242, with the exception of 220. Among the uranium isotopes not found in natural samples or nuclear fuel, the longest-lived is ²³⁰U, an alpha emitter with a half-life of 20.23 days. This isotope has been considered for use in targeted alpha-particle therapy (TAT). All other isotopes have half-lives shorter than one hour, except for ²³¹U (half-life 4.2 days) and ²⁴⁰U (half-life 14.1 hours). The shortest-lived known isotope is ²²¹U, with a half-life of 660 nanoseconds, and it is expected that the hitherto unknown ²²⁰U has an even shorter half-life. The proton-rich isotopes lighter than ²³²U primarily undergo alpha decay, except for ²²⁹U and ²³¹U, which decay to protactinium isotopes via positron emission and electron capture, respectively; the neutron-rich ²³⁹U, ²⁴⁰U, and ²⁴²U undergo beta decay to form neptunium isotopes.
Enrichment
In nature, uranium is found as uranium-238 (99.2742%) and uranium-235 (0.7204%). Isotope separation concentrates (enriches) the fissile uranium-235 for nuclear weapons and most nuclear power plants, except for gas cooled reactors and pressurized heavy water reactors. Most neutrons released by a fissioning atom of uranium-235 must impact other uranium-235 atoms to sustain the nuclear chain reaction. The concentration and amount of uranium-235 needed to achieve this is called a 'critical mass'.
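The growth or decay of a chain reaction can be illustrated with a toy model: if each fission generation multiplies the neutron population by an effective factor k, the population after n generations is k to the power n. A minimal Python sketch, deliberately ignoring delayed neutrons, geometry, and neutron leakage, which dominate real criticality calculations:

def neutron_population(generations, k, initial=1.0):
    # k = 1 is exactly critical; k > 1 supercritical; k < 1 subcritical
    return initial * k ** generations

print(neutron_population(80, 1.00))  # 1.0  -> steady chain reaction
print(neutron_population(80, 1.02))  # ~4.9 -> exponential growth
print(neutron_population(80, 0.98))  # ~0.2 -> reaction dies out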
To be considered 'enriched', the uranium-235 fraction should be between 3% and 5%. This process produces huge quantities of uranium that is depleted of uranium-235 and with a correspondingly increased fraction of uranium-238, called depleted uranium or 'DU'. To be considered 'depleted', the ²³⁵U concentration should be no more than 0.3%. The price of uranium has risen since 2001, so enrichment tailings containing more than 0.35% uranium-235 are being considered for re-enrichment, driving the price of depleted uranium hexafluoride above $130 per kilogram in July 2007 from $5 in 2001.
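The relationship between feed, product, and tails follows from a simple mass balance on ²³⁵U. The Python sketch below is illustrative only; the assay values are assumptions chosen to resemble typical figures, and separative work (SWU) is ignored:

def feed_required(product_kg, product_assay, feed_assay=0.00711, tails_assay=0.003):
    # Conservation of total uranium and of 235U:
    #   feed = product + tails,  feed*xf = product*xp + tails*xt
    return product_kg * (product_assay - tails_assay) / (feed_assay - tails_assay)

# Roughly 10 kg of natural uranium feed per kg of 4.5%-enriched product
print(f"{feed_required(1.0, 0.045):.1f} kg")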
The gas centrifuge process, where gaseous uranium hexafluoride (UF₆) is separated by the difference in molecular weight between ²³⁵UF₆ and ²³⁸UF₆ using high-speed centrifuges, is the cheapest and leading enrichment process. The gaseous diffusion process had been the leading method for enrichment and was used in the Manhattan Project. In this process, uranium hexafluoride is repeatedly diffused through a silver-zinc membrane, and the different isotopes of uranium are separated by diffusion rate (since uranium-238 is heavier, it diffuses slightly more slowly than uranium-235). The molecular laser isotope separation method employs a laser beam of precise energy to sever the bond between uranium-235 and fluorine. This leaves uranium-238 bonded to fluorine and allows uranium-235 metal to precipitate from the solution. An alternative laser method of enrichment is known as atomic vapor laser isotope separation (AVLIS) and employs visible tunable lasers such as dye lasers. Another method used is liquid thermal diffusion.
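For gaseous diffusion, the ideal single-stage separation factor follows from Graham's law (effusion rate proportional to the inverse square root of molar mass). A short Python check of the UF₆ numbers:

import math

M_235UF6 = 349.03  # g/mol, 235-UF6
M_238UF6 = 352.04  # g/mol, 238-UF6

alpha = math.sqrt(M_238UF6 / M_235UF6)  # ideal per-stage enrichment factor
print(f"{alpha:.5f}")  # ~1.00430, hence the need for thousands of stages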
The only significant deviation from the ²³⁵U to ²³⁸U ratio in any known natural samples occurs in Oklo, Gabon, where natural nuclear fission reactors consumed some of the ²³⁵U some two billion years ago, when the ratio of ²³⁵U to ²³⁸U was more akin to that of low-enriched uranium, allowing regular ("light") water to act as a neutron moderator, akin to the process in human-made light water reactors. The existence of such natural fission reactors, which had been theoretically predicted beforehand, was proven when a slight deviation of ²³⁵U concentration from the expected values was discovered during uranium enrichment in France. Subsequent investigations to rule out any nefarious human action (such as theft of ²³⁵U) confirmed the theory by finding isotope ratios of common fission products (or rather their stable daughter nuclides) in line with the values expected for fission but deviating from the values expected for non-fission-derived samples of those elements.
Human exposure
A person can be exposed to uranium (or its radioactive daughters, such as radon) by inhaling dust in air or by ingesting contaminated water and food. The amount of uranium in air is usually very small; however, people who work in factories that process phosphate fertilizers, live near government facilities that made or tested nuclear weapons, live or work near a modern battlefield where depleted uranium weapons have been used, or live or work near a coal-fired power plant, facilities that mine or process uranium ore, or enrich uranium for reactor fuel, may have increased exposure to uranium. Houses or structures that are over uranium deposits (either natural or man-made slag deposits) may have an increased incidence of exposure to radon gas. The Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit for uranium exposure in the workplace as 0.25 mg/m³ over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.2 mg/m³ over an 8-hour workday and a short-term limit of 0.6 mg/m³. At 10 mg/m³, uranium is immediately dangerous to life and health.
Most ingested uranium is excreted during digestion. Only 0.5% is absorbed when insoluble forms of uranium, such as its oxide, are ingested, whereas absorption of the more soluble uranyl ion can be up to 5%. However, soluble uranium compounds tend to quickly pass through the body, whereas insoluble uranium compounds, especially when inhaled by way of dust into the lungs, pose a more serious exposure hazard. After entering the bloodstream, the absorbed uranium tends to bioaccumulate and stay for many years in bone tissue because of uranium's affinity for phosphates. Incorporated uranium becomes uranyl ions, which accumulate in bone, liver, kidney, and reproductive tissues.
The radiological and chemical toxicity of uranium are compounded by the fact that elements of high atomic number Z, like uranium, exhibit a secondary ("phantom") radiotoxicity: they absorb natural background gamma and X-rays and re-emit photoelectrons, which, in combination with uranium's high affinity for the phosphate moiety of DNA, causes increased single- and double-strand DNA breaks.
Uranium is not absorbed through the skin, and alpha particles released by uranium cannot penetrate the skin.
Uranium can be decontaminated from steel surfaces and aquifers.
Effects and precautions
Normal functioning of the kidney, brain, liver, heart, and other systems can be affected by uranium exposure, because, besides being weakly radioactive, uranium is a toxic metal. Uranium is also a reproductive toxicant. Radiological effects are generally local because alpha radiation, the primary form of U decay, has a very short range, and will not penetrate skin. Alpha radiation from inhaled uranium has been demonstrated to cause lung cancer in exposed nuclear workers. While the CDC has published one study reporting that no human cancer has been seen as a result of exposure to natural or depleted uranium, exposure to uranium and its decay products, especially radon, is a significant health threat. Exposure to strontium-90, iodine-131, and other fission products is unrelated to uranium exposure, but may result from medical procedures or exposure to spent reactor fuel or fallout from nuclear weapons.
Although accidental inhalation exposure to a high concentration of uranium hexafluoride has resulted in human fatalities, those deaths were associated with the generation of highly toxic hydrofluoric acid and uranyl fluoride rather than with uranium itself. Finely divided uranium metal presents a fire hazard because uranium is pyrophoric; small grains will ignite spontaneously in air at room temperature.
Uranium metal is commonly handled with gloves as a sufficient precaution. Uranium concentrate is handled and contained so as to ensure that people do not inhale or ingest it.
Ungulate

Ungulates are members of the diverse clade Euungulata ("true ungulates"), which primarily consists of large mammals with hooves. Once grouped in the clade "Ungulata" along with the clade Paenungulata, true ungulates were separated after "Ungulata" was determined, on molecular data, to be polyphyletic and thereby invalid; they were reclassified in 2001 to the newer clade Euungulata within the clade Laurasiatheria, while Paenungulata was reclassified to the distant clade Afrotheria. Living ungulates are divided into two orders: Perissodactyla including equines, rhinoceroses, and tapirs; and Artiodactyla including cattle, antelope, pigs, giraffes, camels, sheep, deer, and hippopotamuses, among others. Cetaceans such as whales, dolphins, and porpoises are also classified as artiodactyls, although they do not have hooves. Most terrestrial ungulates use the hoofed tips of their toes to support their body weight while standing or moving. Two other orders of ungulates, Notoungulata and Litopterna, both native to South America, became extinct at the end of the Pleistocene, around 12,000 years ago.
The term means, roughly, "being hoofed" or "hoofed animal". As a descriptive term, "ungulate" normally excludes cetaceans as they do not possess most of the typical morphological characteristics of other ungulates, but recent discoveries indicate that they were also descended from early artiodactyls. Ungulates are typically herbivorous and many employ specialized gut bacteria to enable them to digest cellulose, though some members may deviate from this: several species of pigs and the extinct entelodonts are omnivorous, while cetaceans and the extinct mesonychians are carnivorous.
Etymology
Ungulate is from the Late Latin adjective ungulatus ("hoofed"), from ungula ("hoof"), a diminutive form of Latin unguis (finger nail; toe nail).
Classifications
History
Euungulata is a clade (or in some taxonomies, a grand order) of mammals. The two extant orders of ungulates are the Perissodactyla (odd-toed ungulates) and Artiodactyla (even-toed ungulates). Hyracoidea (hyraxes), Sirenia (sea cows, dugongs and manatees) and Proboscidea (elephants) were in the past grouped within the clade "Ungulata", later found to be a polyphyletic and now invalid clade. The three orders of Paenungulata are now considered a clade and grouped in the Afrotheria clade, while Euungulata is now grouped under the Laurasiatheria clade.
In 2009, morphological and molecular work found that aardvarks, hyraxes, sea cows, and elephants were more closely related to each other and to sengis, tenrecs, and golden moles than to the perissodactyls and artiodactyls, and form the clade Afrotheria. Elephants, sea cows, and hyraxes were grouped together in the clade Paenungulata, while the aardvark has been considered as either a close relative to them or a close relative to sengis in the clade Afroinsectiphilia. This is a striking example of convergent evolution.
There is now some dispute as to whether this smaller Euungulata is a cladistic (evolution-based) group, or merely a phenetic group (form taxon) or folk taxon (similar, but not necessarily related). Some studies have indeed found the mesaxonian ungulates and paraxonian ungulates to form a monophyletic lineage, closely related to either the Ferae (the carnivorans and the pangolins) in the clade Fereuungulata or to the bats. Other studies found the two orders not that closely related, as some place the perissodactyls as close relatives to bats and Ferae in Pegasoferae and others place the artiodactyls as close relatives to bats.
Taxonomy
Below is a simplified taxonomy (assuming that ungulates do indeed form a natural grouping) with the extant families, in order of their relationships. Keep in mind that there are still some grey areas of conflict, such as the case of the relationship between the pecoran families and the baleen whale families. See each family's article for the relationships of the species as well as the respective controversies.
Euungulata
Perissodactyla (Mesaxonian ungulates)
Hippomorpha
Equidae: Horses, asses and zebras
Ceratomorpha
Tapiridae: Tapirs
Rhinocerotidae: Rhinoceroses
Artiodactyla (= Cetartiodactyla) (Paraxonian ungulates)
Tylopoda
Camelidae: Camels and llamas
Artiofabula
Suina
Tayassuidae: Peccaries
Suidae: Pigs
Cetruminantia
Ruminantia
Tragulidae: Chevrotains
Cervoidea
Antilocapridae: Pronghorn
Giraffidae: Giraffes and okapi
Cervidae: Deer
Moschidae: Musk deer
Bovidae: Cattle and antelopes
Whippomorpha
Hippopotamidae: Hippopotamuses
Cetacea
Mysticeti
Balaenidae: Bowhead and right whales
Cetotheriidae: Pygmy right whale
Balaenopteridae: Rorquals
Odontoceti
Physeteroidea
Physeteridae: Sperm whale
Kogiidae: Lesser sperm whales
Platanistoidea
Platanistidae: Indian river dolphins
Ziphioidea
Ziphiidae: Beaked whales
Lipotoidea
Lipotidae: Baiji (functionally extinct)
Inioidea
Iniidae: Amazonian river dolphins
Pontoporiidae: La Plata dolphin
Delphinoidea
Monodontidae: Beluga and narwhal
Phocoenidae: Porpoises
Delphinidae: Oceanic dolphins
Phylogeny
Below is the general consensus of the phylogeny of the ungulate families.
Evolutionary history
Perissodactyla and Artiodactyla include the majority of large land mammals. These two groups first appeared during the late Paleocene, rapidly spreading to a wide variety of species on numerous continents, and have developed in parallel since that time. Some scientists believed that modern ungulates were descended from an evolutionary grade of mammals known as the condylarths. The earliest known member of this group may have been the tiny Protungulatum, a mammal that co-existed with the last of non-avian dinosaurs 66 million years ago. However, many authorities do not consider it a true placental, let alone an ungulate. The enigmatic dinoceratans were among the first large herbivorous mammals, although their exact relationship with other mammals is still debated with one of the theories being that they might just be distant relatives to living ungulates; the most recent study recovers them as within the true ungulate assemblage, closest to Carodnia.
In Australia, the recently-extinct marsupial Chaeropus ("pig-footed bandicoot") also developed hooves similar to those of artiodactyls, an example of convergent evolution.
Perissodactyl evolution
Perissodactyls were thought to have evolved from the Phenacodontidae, small, sheep-sized animals that were already showing signs of anatomical features that their descendants would inherit (the reduction of digits I and V, for example). By the start of the Eocene, 55 million years ago (Mya), they had diversified and spread out to occupy several continents. Horses and tapirs both evolved in North America; rhinoceroses appear to have developed in Asia from tapir-like animals and then colonised the Americas during the middle Eocene (about 45 Mya). Of the approximately 15 families, only three survive (McKenna and Bell, 1997; Hooker, 2005). These families were very diverse in form and size; they included the enormous brontotheres and the bizarre chalicotheres. The largest perissodactyl, an Asian rhinoceros called Paraceratherium, reached more than twice the weight of an elephant.
It has been found in a cladistic study that the anthracobunids and the desmostylians – two lineages that have been previously classified as Afrotherians (more specifically closer to elephants) – have been classified as a clade that is closely related to the perissodactyls. The desmostylians were large amphibious quadrupeds with massive limbs and a short tail. They grew to about 1.8 metres in length and were thought to have weighed more than 200 kilograms. Their fossils were known from the northern Pacific Rim, from southern Japan through Russia, the Aleutian Islands and the Pacific coast of North America to the southern tip of Baja California. Their dental and skeletal form suggests desmostylians were aquatic herbivores dependent on littoral habitats. Their name refers to their highly distinctive molars, in which each cusp was modified into hollow columns, so that a typical molar would have resembled a cluster of pipes, or in the case of worn molars, volcanoes. They were the only order of marine mammals to have gone extinct.
The South American meridiungulates contain the somewhat tapir-like pyrotheres and astrapotheres, the mesaxonic litopterns and the diverse notoungulates. As a whole, meridiungulates were said to have evolved from animals like Hyopsodus. For a while their relationships with other ungulates were a mystery. Some paleontologists have even challenged the monophyly of Meridiungulata by suggesting that the pyrotheres may be more closely related to other mammals, such as Embrithopoda (an African order that were related to elephants) than to other South American ungulates. A recent study based on bone collagen has found that at least litopterns and the notoungulates were closely related to the perissodactyls.
The oldest known fossils assigned to Equidae date from the early Eocene, 54 million years ago. They had been assigned to the genus Hyracotherium, but the type species of that genus is now considered not a member of this family, but the other species have been split off into different genera. These early Equidae were fox-sized animals with three toes on the hind feet, and four on the front feet. They were herbivorous browsers on relatively soft plants, and were already adapted for running. The complexity of their brains suggest that they already were alert and intelligent animals. Later species reduced the number of toes, and developed teeth more suited for grinding up grass and other tough plant food.
Rhinocerotoids diverged from other perissodactyls by the early Eocene. Fossils of Hyrachyus eximus found in North America date to this period. This small hornless ancestor resembled a tapir or small horse more than a rhino. Three families, sometimes grouped together as the superfamily Rhinocerotoidea, evolved in the late Eocene: Hyracodontidae, Amynodontidae and Rhinocerotidae, thus creating an explosion of diversity unmatched for a while until environmental changes drastically eliminated several species.
The first tapirids, such as Heptodon, appeared in the early Eocene. They appeared very similar to modern forms, but were about half the size, and lacked the proboscis. The first true tapirs appeared in the Oligocene. By the Miocene, such genera as Miotapirus were almost indistinguishable from the extant species. Asian and American tapirs were believed to have diverged around 20 to 30 million years ago; and tapirs migrated from North America to South America around 3 million years ago, as part of the Great American Interchange.
Perissodactyls were the dominant group of large terrestrial browsers right through the Oligocene. However, the rise of grasses in the Miocene (about 20 Mya) saw a major change: the artiodactyl species with their more complex stomachs were better able to adapt to a coarse, low-nutrition diet, and soon rose to prominence. Nevertheless, many perissodactyl species survived and prospered until the late Pleistocene (about 10,000 years ago) when they faced the pressure of human hunting and habitat change.
Artiodactyl evolution
The artiodactyls were thought to have evolved from a small group of condylarths, Arctocyonidae, which were unspecialized, superficially raccoon-like to bear-like omnivores from the Early Paleocene (about 65 to 60 million years ago). They had relatively short limbs lacking specializations associated with their relatives (e.g. reduced side digits, fused bones, and hooves), and long, heavy tails. Their primitive anatomy makes it unlikely that they were able to run down prey, but with their powerful proportions, claws, and long canines, they may have been able to overpower smaller animals in surprise attacks. Evidently these mammals soon evolved into two separate lineages: the mesonychians and the artiodactyls.
The first artiodactyls looked like today's chevrotains or pigs: small, short-legged creatures that ate leaves and the soft parts of plants. By the Late Eocene (46 million years ago), the three modern suborders had already developed: Suina (the pig group); Tylopoda (the camel group); and Ruminantia (the goat and cattle group). Nevertheless, artiodactyls were far from dominant at that time: the perissodactyls were much more successful and far more numerous. Artiodactyls survived in niche roles, usually occupying marginal habitats, and it is presumably at that time that they developed their complex digestive systems, which allowed them to survive on lower-grade food. While most artiodactyls were taking over the niches left behind by several extinct perissodactyls, one lineage of artiodactyls began to venture out into the seas.
Cetacean evolution
The traditional theory of cetacean evolution was that cetaceans were related to the mesonychians. These animals had unusual triangular teeth very similar to those of primitive cetaceans. This is why scientists long believed that cetaceans evolved from a form of mesonychian. Today, many scientists believe cetaceans evolved from the same stock that gave rise to hippopotamuses. This hypothesized ancestral group likely split into two branches around 54 million years ago. One branch would evolve into cetaceans, possibly beginning about 52 million years ago with the proto-whale Pakicetus and other early cetacean ancestors collectively known as Archaeoceti, which eventually underwent aquatic adaptation into the completely aquatic cetaceans. The other branch became the anthracotheres, a large family of four-legged beasts, the earliest of whom in the late Eocene would have resembled skinny hippopotamuses with comparatively small and narrow heads. All branches of the anthracotheres, except that which evolved into Hippopotamidae, became extinct during the Pliocene without leaving any descendants.
The family Raoellidae is said to be the closest artiodactyl family to the cetaceans. Consequently, new theories in cetacean evolution hypothesize that whales and their ancestors escaped predation, not competition, by slowly adapting to the ocean.
Mesonychian evolution
Mesonychians were depicted as "wolves on hooves" and were the first major mammalian predators, appearing in the Paleocene. Early mesonychians had five digits on their feet, which probably rested flat on the ground during walking (plantigrade locomotion), but later mesonychians had four digits that ended in tiny hooves on all of their toes and were increasingly well adapted to running. Like running members of the even-toed ungulates, mesonychians (Pachyaena, for example) walked on their digits (digitigrade locomotion). Mesonychians fared very poorly at the close of the Eocene epoch, with only one genus, Mongolestes, surviving into the Early Oligocene epoch, as the climate changed and fierce competition arose from the better adapted creodonts.
Characteristics
Ungulates are highly diverse in response to sexual selection and ecological events; most ungulates lack a collarbone. Terrestrial ungulates are for the most part herbivores, with some of them being grazers. However, there are exceptions, as pigs, peccaries, hippos and duikers are known to have an omnivorous diet. Cetaceans are the only modern ungulates that are carnivores; baleen whales consume significantly smaller animals in relation to their body size, such as small species of fish and krill, while toothed whales, depending on the species, can consume a wide range of prey: squid, fish, sharks, and other mammals such as seals and other whales. In terms of ecosystems, ungulates have colonized all corners of the planet, from mountains to ocean depths and grasslands to deserts, and some have been domesticated by humans.
Anatomy
Ungulates have developed specialized adaptations, especially in the areas of cranial appendages, dentition, and leg morphology including the modification of the astragalus (one of the ankle bones at the end of the lower leg) with a short, robust head.
Hooves
The hoof is the tip of the toe of an ungulate mammal, strengthened by a thick horny (keratin) covering. The hoof consists of a hard or rubbery sole, and a hard wall formed by a thick nail rolled around the tip of the toe. Both the sole and the edge of the hoof wall normally bear the weight of the animal. Hooves grow continuously and are constantly worn down by use. In most modern ungulates, the radius and ulna are fused along the length of the forelimb; early ungulates, such as the arctocyonids, did not share this unique skeletal structure. The fusion of the radius and ulna prevents an ungulate from rotating its forelimb. Since this skeletal structure has no specific function in ungulates, it is considered a homologous characteristic that ungulates share with other mammals; this trait would have been passed down from a common ancestor. While the colloquial names of the two orders of ungulates are based on the number of toes of their members ("odd-toed" for the perissodactyls and "even-toed" for the terrestrial artiodactyls), toe count alone is not an accurate reason for how they are grouped. Tapirs have four toes in the front, yet they are members of the "odd-toed" order; peccaries and modern cetaceans are members of the "even-toed" order, yet peccaries have three toes in the front, and whales are an extreme example, having flippers instead of hooves. Scientists instead classify them according to the distribution of their weight on their toes.
Perissodactyls have a mesaxonic foot, meaning that the weight is distributed on the third toe on all legs thanks to the plane symmetry of their feet. There has been a reduction of toes from the common ancestor, with the classic example being horses with their single hooves. In consequence, the perissodactyls acquired an alternative, now nearly obsolete name, Mesaxonia. Perissodactyls were not the only lineage of mammals to have evolved this trait; the meridiungulates evolved mesaxonic feet numerous times.
Terrestrial artiodactyls have a paraxonic foot, meaning that the weight is distributed on the third and fourth toes on all legs. The majority of these mammals have cloven hooves, with two smaller ones, known as the dewclaws, located further up the leg. The earliest cetaceans (the archaeocetes) also had this characteristic, in addition to having both an astragalus and a cuboid bone in the ankle, which are further diagnostic traits of artiodactyls.
In modern cetaceans, the front limbs have become pectoral fins and the hind parts are internal and reduced. Occasionally, the genes that code for longer extremities cause a modern cetacean to develop miniature legs (known as atavism). The main method of moving is an up-and-down motion with the tail fin, called the fluke, which is used for propulsion, while the pectoral fins together with the entire tail section provide directional control. All modern cetaceans still retain their digits despite the external appearance suggesting otherwise.
Teeth
Most ungulates have developed reduced canine teeth and specialized molars, including bunodont (low, rounded cusps) and hypsodont (high crowned) teeth. The development of hypsodonty has been of particular interest as this adaptation was strongly associated with the spread of grasslands during the Miocene about 25 million years ago. As forest biomes declined, grasslands spread, opening new niches for mammals. Many ungulates switched from browsing diets to grazing diets, and, possibly driven by abrasive silica in grass, hypsodonty became common. However, recent evidence ties the evolution of hypsodonty to open, gritty habitats and not the grass itself. This is termed the "grit, not grass" hypothesis.
Some ungulates completely lack upper incisors and instead have a dental pad to assist in browsing. The pad is found in camels, ruminants, and some toothed whales; modern baleen whales are remarkable in that they have baleen instead, used to filter krill from the water. At the other end of the spectrum, teeth have evolved as weapons or for sexual display, as seen in pigs and peccaries, some species of deer, musk deer, hippopotamuses, beaked whales, and the narwhal, with its long canine tooth.
Cranial appendages
Ungulates have evolved a variety of cranial appendages that can be found in cervoids (with the exception of musk deer). In oxen and antelope, the size and shape of the horns varies greatly but the basic structure is always a pair of simple bony protrusions without branches, often having a spiral, twisted, or fluted form, each covered in a permanent sheath of keratin. The unique horn structure is the only unambiguous morphological feature of bovids that distinguishes them from other pecorans. Male horn development has been linked to sexual selection, while the presence of horns in females is likely due to natural selection. The horns of females are usually smaller than those of males and are sometimes of a different shape. The horns of female bovids are thought to have evolved for defense against predators or to express territoriality, as nonterritorial females, which are able to use crypsis for predator defense, often lack horns.
Rhinoceros horns, unlike those of other horned mammals, consist only of keratin. These horns rest on the nasal ridge of the animal's skull.
Antlers are unique to cervids and found mostly on males: the only cervid females with antlers are caribou (reindeer), whose antlers are normally smaller than those of males. Nevertheless, fertile does of other deer species have the capacity to produce antlers on occasion, usually due to increased testosterone levels. Each antler grows from an attachment point on the skull called a pedicle. While an antler is growing, it is covered with highly vascular skin called velvet, which supplies oxygen and nutrients to the growing bone. Antlers are considered one of the most exaggerated cases of male secondary sexual traits in the animal kingdom, and grow faster than any other mammal bone. Growth occurs at the tip, initially as cartilage that is then mineralized to become bone. Once the antler has achieved its full size, the velvet is lost and the antler's bone dies. This dead bone structure is the mature antler. In most cases, the bone at the base is destroyed by osteoclasts and the antlers eventually fall off. As a result of their fast growth rate, antlers place a substantial nutritional demand on deer; they can thus constitute an honest signal of metabolic efficiency and food-gathering capability.
Ossicones are horn-like (or antler-like) protuberances found on the heads of giraffes and male okapis. They are similar to the horns of antelopes and cattle save that they are derived from ossified cartilage, and that the ossicones remain covered in skin and fur rather than horn.
Pronghorn cranial appendages are unique. Each "horn" of the pronghorn is composed of a slender, laterally flattened blade of bone that grows from the frontal bones of the skull, forming a permanent core. As in the Giraffidae, skin covers the bony cores, but in the pronghorn it develops into a keratinous sheath that is shed and regrown on an annual basis. Unlike the horns of the family Bovidae, the horn sheaths of the pronghorn are branched, each sheath possessing a forward-pointing tine (hence the name pronghorn). The horns of males are well developed.
| Biology and health sciences | Mammals: General | Animals |
31772 | https://en.wikipedia.org/wiki/Ursa%20Major | Ursa Major | Ursa Major, also known as the Great Bear, is a constellation in the northern sky, whose associated mythology likely dates back into prehistory. Its Latin name means "greater (or larger) bear", referring to and contrasting it with nearby Ursa Minor, the lesser bear. In antiquity, it was one of the original 48 constellations listed by Ptolemy in the 2nd century AD, drawing on earlier works by Greek, Egyptian, Babylonian, and Assyrian astronomers. Today it is the third largest of the 88 modern constellations.
Ursa Major is primarily known from the asterism of its main seven stars, which has been called the "Big Dipper", "the Wagon", "Charles's Wain", or "the Plough", among other names. In particular, the Big Dipper's stellar configuration mimics the shape of the "Little Dipper". Two of its stars, named Dubhe and Merak (α Ursae Majoris and β Ursae Majoris), can be used as the navigational pointer towards the place of the current northern pole star, Polaris in Ursa Minor.
Ursa Major, along with asterisms it contains or overlaps, is significant to numerous world cultures, often as a symbol of the north. Its depiction on the flag of Alaska is a modern example of such symbolism.
Ursa Major is visible throughout the year from most of the Northern Hemisphere, and appears circumpolar above the mid-northern latitudes. From southern temperate latitudes, the main asterism is invisible, but the southern parts of the constellation can still be viewed.
Characteristics
Ursa Major covers 1279.66 square degrees or 3.10% of the total sky, making it the third largest constellation. In 1930, Eugène Delporte set its official International Astronomical Union (IAU) constellation boundaries, defining it as a 28-sided irregular polygon. In the equatorial coordinate system, the constellation stretches between the right ascension coordinates of and and the declination coordinates of +28.30° and +73.14°. Ursa Major borders eight other constellations: Draco to the north and northeast, Boötes to the east, Canes Venatici to the east and southeast, Coma Berenices to the southeast, Leo and Leo Minor to the south, Lynx to the southwest and Camelopardalis to the northwest. The three-letter constellation abbreviation "UMa" was adopted by the IAU in 1922.
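As a quick arithmetic check on the quoted percentage (a worked calculation, not a figure from the article): the whole celestial sphere covers about 41,253 square degrees, so

\[ \frac{1279.66}{41\,253} \approx 0.0310 = 3.10\%. \]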
Features
Asterisms
The outline of the seven bright stars of Ursa Major forms the asterism known as the "Big Dipper" in the United States and Canada, while in the United Kingdom it is called the Plough or (historically) Charles's Wain. Six of the seven stars are of second magnitude or higher, and the asterism forms one of the best-known patterns in the sky. As many of its common names allude, its shape is said to resemble a ladle, an agricultural plough, or a wagon. In the context of Ursa Major, the stars are commonly drawn to represent the hindquarters and tail of the Great Bear. Starting with the "ladle" portion of the dipper and extending clockwise (eastward in the sky) through the handle, these stars are the following:
Dubhe ("the bear"), which at a magnitude of 1.79 is the 35th-brightest star in the sky and the second-brightest of Ursa Major.
Merak ("the loins of the bear"), with a magnitude of 2.37.
Phecda ("thigh"), with a magnitude of 2.44.
Megrez, meaning "root of the tail", referring to its location as the intersection of the body and tail of the bear (or the ladle and handle of the dipper).
Alioth, a name which refers not to a bear but to a "black horse", the name corrupted from the original and mis-assigned to the similarly named Alcor, the naked-eye binary companion of Mizar. Alioth is the brightest star of Ursa Major and the 33rd-brightest in the sky, with a magnitude of 1.76. It is also the brightest of the chemically peculiar Ap stars, magnetic stars whose chemical elements are either depleted or enhanced, and appear to change as the star rotates.
Mizar, ζ Ursae Majoris, the second star in from the end of the handle of the Big Dipper, and the constellation's fourth-brightest star. Mizar, which means "girdle", forms a famous double star, with its optical companion Alcor (80 Ursae Majoris), the two of which were termed the "horse and rider" by the Arabs.
Alkaid, known as η Ursae Majoris, is situated at the end of the tail. With a magnitude of 1.85, Alkaid is the third-brightest star of Ursa Major.
Except for Dubhe and Alkaid, the stars of the Big Dipper all have proper motions heading toward a common point in Sagittarius. A few other such stars have been identified, and together they are called the Ursa Major Moving Group.
The stars Merak (β Ursae Majoris) and Dubhe (α Ursae Majoris) are known as the "pointer stars" because they are helpful for finding Polaris, also known as the North Star or Pole Star. Tracing a line from Merak through Dubhe (one unit) and continuing for about five more units leads the eye to Polaris, which closely indicates true north.
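The pointer trick can be illustrated numerically. The following Python sketch is a rough illustration only: the coordinates are approximate J2000 values, and a straight-line extrapolation in three dimensions only approximates the great-circle path an observer's eye follows.

import math

def radec_to_vec(ra_deg, dec_deg):
    # Convert equatorial coordinates (degrees) to a 3D unit vector.
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

def vec_to_radec(v):
    # Convert a 3D vector back to right ascension and declination (degrees).
    x, y, z = v
    ra = math.degrees(math.atan2(y, x)) % 360.0
    dec = math.degrees(math.asin(z / math.sqrt(x * x + y * y + z * z)))
    return ra, dec

# Approximate J2000 positions, in degrees.
merak = radec_to_vec(165.46, 56.38)   # beta Ursae Majoris
dubhe = radec_to_vec(165.93, 61.75)   # alpha Ursae Majoris

# Step from Merak to Dubhe, then continue five more steps past Dubhe.
step = tuple(d - m for d, m in zip(dubhe, merak))
target = tuple(d + 5.0 * s for d, s in zip(dubhe, step))

ra, dec = vec_to_radec(target)
print(f"Extrapolated point: RA {ra:.1f} deg, Dec {dec:+.1f} deg")
# Polaris lies near RA 37.9 deg, Dec +89.3 deg; the extrapolation lands
# within a few degrees of it, close enough to pick out the pole star by eye.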
Another asterism representing three pairs of footprints of a leaping gazelle is recognized in Arab culture. It is a series of three pairs of stars found along the southern border of the constellation. From southeast to southwest, the "first leap", comprising ν and ξ Ursae Majoris (Alula Borealis and Australis, respectively); the "second leap", comprising λ and μ Ursae Majoris (Tania Borealis and Australis); and the "third leap", comprising ι and κ Ursae Majoris, (Talitha Borealis and Australis respectively).
Other stars
W Ursae Majoris is the prototype of a class of contact binary variable stars, and ranges between 7.75m and 8.48m.
47 Ursae Majoris is a Sun-like star with a three-planet system. 47 Ursae Majoris b, discovered in 1996, orbits every 1078 days and is 2.53 times the mass of Jupiter. 47 Ursae Majoris c, discovered in 2001, orbits every 2391 days and is 0.54 times the mass of Jupiter. 47 Ursae Majoris d, discovered in 2010, has an uncertain period, lying between 8907 and 19097 days; it is 1.64 times the mass of Jupiter. The star is of magnitude 5.0 and is approximately 46 light-years from Earth.
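As a rough consistency check on the innermost planet (a back-of-the-envelope calculation, not from the article, assuming a stellar mass of about one solar mass), Kepler's third law in solar units, with a in astronomical units, P in years and M in solar masses, gives

\[ a \approx \left(M_\ast P^2\right)^{1/3} = \left(1.0 \times \left(\frac{1078}{365.25}\right)^2\right)^{1/3} \approx 2.1\ \text{AU}, \]

a Jupiter-like orbital distance, in line with the semi-major axis usually quoted for 47 Ursae Majoris b.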
The star TYC 3429-697-1, located to the east of θ Ursae Majoris and to the southwest of the "Big Dipper", has been recognized as the state star of Delaware, and is informally known as the Delaware Diamond.
Deep-sky objects
Several bright galaxies are found in Ursa Major, including the pair Messier 81 (one of the brightest galaxies in the sky) and Messier 82 above the bear's head, and the Pinwheel Galaxy (M101), a spiral northeast of Alkaid. The spiral galaxies Messier 108 and Messier 109 are also found in this constellation. The bright planetary nebula known as the Owl Nebula (M97) can be found along the bottom of the bowl of the Big Dipper.
M81 is a nearly face-on spiral galaxy 11.8 million light-years from Earth. Like most spiral galaxies, it has a core made up of old stars, with arms filled with young stars and nebulae. Along with M82, it is a part of the galaxy cluster closest to the Local Group.
M82 is a nearly edgewise galaxy that is interacting gravitationally with M81. It is the brightest infrared galaxy in the sky. SN 2014J, an apparent Type Ia supernova, was observed in M82 on 21 January 2014.
M97, also called the Owl Nebula, is a planetary nebula 1,630 light-years from Earth; it has a magnitude of approximately 10. It was discovered in 1781 by Pierre Méchain.
M101, also called the Pinwheel Galaxy, is a face-on spiral galaxy located 25 million light-years from Earth. It was discovered by Pierre Méchain in 1781. Its spiral arms have regions with extensive star formation and have strong ultraviolet emissions. It has an integrated magnitude of 7.5, making it visible in both binoculars and telescopes, but not to the naked eye.
NGC 2787 is a lenticular galaxy at a distance of 24 million light-years. Unlike most lenticular galaxies, NGC 2787 has a bar at its center. It also has a halo of globular clusters, indicating its age and relative stability.
NGC 2950 is a lenticular galaxy located 60 million light-years from Earth.
NGC 3000 is a double star that was catalogued as a nebula-type object.
NGC 3079 is a starburst spiral galaxy located 52 million light-years from Earth. It has a horseshoe-shaped structure at its center that indicates the presence of a supermassive black hole. The structure itself is formed by superwinds from the black hole.
NGC 3310 is another starburst spiral galaxy located 50 million light-years from Earth. Its bright white color is caused by its higher than usual rate of star formation, which began 100 million years ago after a merger. Studies of this and other starburst galaxies have shown that their starburst phase can last for hundreds of millions of years, far longer than was previously assumed.
NGC 4013 is an edge-on spiral galaxy located 55 million light-years from Earth. It has a prominent dust lane and has several visible star forming regions.
I Zwicky 18 is a young dwarf galaxy at a distance of 45 million light-years. The youngest-known galaxy in the visible universe, I Zwicky 18 is about 4 million years old, about one-thousandth the age of the Solar System. It is filled with star forming regions which are creating many hot, young, blue stars at a very high rate.
The Hubble Deep Field is located to the northeast of δ Ursae Majoris.
Meteor showers
The Alpha Ursae Majorids are a minor meteor shower in the constellation. They may be caused by the comet C/1992 W1 (Ohshita).
The Kappa Ursae Majorids are a newly discovered meteor shower, peaking between November 1 and November 10.
The October Ursae Majorids were discovered in 2006 by Japanese researchers. They may be caused by a long-period comet. The shower peaks between October 12 and 19.
Extrasolar planets
HD 80606, a Sun-like star in a binary system, orbits a common center of gravity with its partner, HD 80607; the two are separated by 1,200 AU on average. Research conducted in 2003 indicates that its sole planet, HD 80606 b, is a future hot Jupiter, modeled to have evolved in a perpendicular orbit around 5 AU from its sun. The planet, with four times the mass of Jupiter, is projected to eventually move into a circular, more aligned orbit via the Kozai mechanism. At present, however, it is on a highly eccentric orbit that ranges from approximately one astronomical unit at apoapsis to six stellar radii at periapsis.
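Taking the quoted apoapsis and periapsis at face value gives a rough estimate of the orbital eccentricity (a back-of-the-envelope check, not a figure from the article; one solar radius is about 0.00465 AU, so six radii of a roughly Sun-like star is about 0.028 AU):

\[ e = \frac{r_a - r_p}{r_a + r_p} \approx \frac{1 - 0.028}{1 + 0.028} \approx 0.95, \]

in line with the published eccentricity of about 0.93, among the most extreme known for any planet.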
History
Ursa Major has been reconstructed as an Indo-European constellation. It was one of the 48 constellations listed by the 2nd century AD astronomer Ptolemy in his Almagest, who called it Arktos Megale. It is mentioned by such poets as Homer, Spenser, Shakespeare, Tennyson and also by Federico Garcia Lorca, in "Song for the Moon". Ancient Finnish poetry also refers to the constellation, and it features in the painting Starry Night Over the Rhône by Vincent van Gogh. It may be mentioned in the biblical book of Job, dated between the 7th and 4th centuries BC, although this is often disputed.
Mythology
The constellation of Ursa Major has been seen as a bear, usually female, by many distinct civilizations. This may stem from a common oral tradition of Cosmic Hunt myths stretching back more than 13,000 years. Using statistical and phylogenetic tools, Julien d'Huy reconstructs the following Palaeolithic state of the story: "There is an animal that is a horned herbivore, especially an elk. One human pursues this ungulate. The hunt locates or get to the sky. The animal is alive when it is transformed into a constellation. It forms the Big Dipper."
Greco-Roman tradition
In Greek mythology, Zeus (the king of the gods, known as Jupiter in Roman mythology) lusts after a young woman named Callisto, a nymph of Artemis (known to the Romans as Diana). Zeus's jealous wife Hera (Juno to the Romans) discovers that Callisto has a son named Arcas as the result of her rape by Zeus and transforms Callisto into a bear as a punishment. Callisto, while in bear form, later encounters her son Arcas. Arcas almost spears the bear, but to avert the tragedy Zeus whisks them both into the sky, Callisto as Ursa Major and Arcas as the constellation Boötes. Ovid called Ursa Major the Parrhasian Bear, since Callisto came from Parrhasia in Arcadia, where the story is set.
The Greek poet Aratus called the constellation Helike, ("turning" or "twisting"), because it turns around the celestial pole. The Odyssey notes that it is the sole constellation that never sinks below the horizon and "bathes in the Ocean's waves", so it is used as a celestial reference point for navigation. It has also been called the "Wain" or "Plaustrum", a Latin word referring to a horse-drawn cart.
Hindu tradition
In Hinduism, Ursa Major (the Great Bear, whose brightest stars form the Big Dipper) is known as Saptarshi, each of the stars representing one of the Saptarishis or Seven Sages (rishis), viz. Bhrigu, Atri, Angiras, Vasishtha, Pulastya, Pulaha, and Kratu. The fact that the two front stars of the constellation point to the pole star is explained as a boon given to the boy sage Dhruva by Lord Vishnu.
In Judaism and Christianity
One of the few star groups mentioned in the Bible (in the Book of Job, with Orion and the Pleiades being others), Ursa Major was also pictured as a bear by the Jews. "The Bear" was translated as "Arcturus" in the Vulgate, and the rendering persisted in the King James Version of the Bible.
East Asian traditions
In China and Japan, the Big Dipper is called the "North Dipper" (Chinese: , Japanese: ), and in ancient times each of the seven stars had a specific name, the names themselves often coming from ancient China:
"Pivot" (C: shū J: sū) is for Dubhe (Alpha Ursae Majoris)
"Beautiful jade" (C: xuán J: sen) is for Merak (Beta Ursae Majoris)
"Pearl" (C: jī J: ki) is for Phecda (Gamma Ursae Majoris)
"Balance" (C: quán J: ken) is for Megrez (Delta Ursae Majoris)
"Measuring rod of jade" (C: yùhéng J: gyokkō) is for Alioth (Epsilon Ursae Majoris)
"Opening of the Yang" (C: kāiyáng J: kaiyō) is for Mizar (Zeta Ursae Majoris)
Alkaid (Eta Ursae Majoris) has several nicknames: "Sword" (C: jiàn J: ken) (short form from "End of the sword" (C: jiàn xiān J: ken saki)), "Flickering light" (C: yáoguāng J: yōkō), or again "Star of military defeat" (C: pójūn xīng J: hagun sei), because travel in the direction of this star was regarded as bad luck for an army.
In Shinto, the seven largest stars of Ursa Major belong to Ame-no-Minakanushi, the oldest and most powerful of all kami.
In South Korea, the constellation is referred to as "the seven stars of the north". In the related myth, a widow with seven sons found comfort with a widower, but to get to his house required crossing a stream. The seven sons, sympathetic to their mother, placed stepping stones in the river. Their mother, not knowing who put the stones in place, blessed them and, when they died, they became the constellation.
Native American traditions
The Iroquois interpreted Alioth, Mizar, and Alkaid as three hunters pursuing the Great Bear. According to one version of their myth, the first hunter (Alioth) is carrying a bow and arrow to strike down the bear. The second hunter (Mizar) carries a large pot – the star Alcor – on his shoulder in which to cook the bear while the third hunter (Alkaid) hauls a pile of firewood to light a fire beneath the pot.
The Lakota people call the constellation , or "Great Bear".
The Wampanoag people (Algonquian) referred to Ursa Major as "maske", meaning "bear", according to Thomas Morton in the New English Canaan.
The Wasco-Wishram Native Americans interpreted the constellation as five wolves and two bears that were left in the sky by Coyote.
Germanic traditions
To Norse pagans, the Big Dipper was known as Óðins vagn, "Woden's wagon". Likewise, Woden is poetically referred to by kennings such as vagna verr, "guardian of the wagon", and vagna rúni, "confidant of the wagon".
Uralic traditions
In the Finnish language, the asterism is sometimes called by its old Finnish name, Otava. The meaning of the name has been almost forgotten in Modern Finnish; it means a salmon weir. Ancient Finns believed the bear (Ursus arctos) was lowered to earth in a golden basket from Ursa Major, and when a bear was killed, its head was positioned on a tree to allow the bear's spirit to return to Ursa Major.
In the Sámi languages of Northern Europe, part of the constellation (i.e. the Big Dipper minus Dubhe and Merak) is identified as the bow of the great hunter Fávdna (the star Arcturus). In the main Sámi language, North Sámi, it is called Fávdnadávgi ("Fávdna's Bow") or simply dávggát ("the Bow"). The constellation features prominently in the Sámi anthem, which begins with the words Guhkkin davvin dávggaid vuolde sabmá suolggai Sámieanan, which translates to "Far to the north, under the Bow, the Land of the Sámi slowly comes into view." The Bow is an important part of the Sámi traditional narrative about the night sky, in which various hunters try to chase down Sarva, the Great Reindeer, a large constellation that takes up almost half the sky. According to the legend, Fávdna stands ready to fire his Bow every night but hesitates because he might hit Stella Polaris, known as Boahji ("the Rivet"), which would cause the sky to collapse and end the world.
Southeast Asian traditions
In Burmese, Pucwan Tārā (ပုဇွန် တာရာ, ) is the name of a constellation comprising stars from the head and forelegs of Ursa Major; pucwan (ပုဇွန်) is a general term for a crustacean, such as prawn, shrimp, crab, lobster, etc.
In Javanese, it is known as "lintang jong", which means "the jong constellation". Likewise, in Malay it is called "bintang jong".
Esoteric lore
In Theosophy, it is believed that the Seven Stars of the Pleiades focus the spiritual energy of the seven rays from the Galactic Logos to the Seven Stars of the Great Bear, then to Sirius, then to the Sun, then to the god of Earth (Sanat Kumara), and finally through the seven Masters of the Seven Rays to the human race.
Graphic visualisation
In European star charts, the constellation was visualized with the 'square' of the Big Dipper forming the bear's body and the chain of stars forming the Dipper's "handle" as a long tail. However, bears do not have long tails, and Jewish astronomers considered Alioth, Mizar, and Alkaid instead to be three cubs following their mother, while the Native Americans saw them as three hunters.
Noted children's book author H. A. Rey, in his 1952 book The Stars: A New Way to See Them, had a different asterism in mind for Ursa Major: the "bear" image of the constellation is oriented with Alkaid as the tip of the bear's nose, and the "handle" of the Big Dipper forms the outline of the top of the bear's head and neck, rearwards to the shoulder, potentially giving it the longer head and neck of a polar bear.
Ursa Major is also pictured as the Starry Plough, an Irish labour flag adopted by James Connolly's Irish Citizen Army in 1916, which shows the constellation on a blue background; on the state flag of Alaska; and on the House of Bernadotte's variation of the coat of arms of Sweden. The seven stars on a red background of the flag of the Community of Madrid, Spain, may be the stars of the Plough asterism (or of Ursa Minor). The same can be said of the seven stars pictured in the bordure azure of the coat of arms of Madrid, capital of that country.
| Physical sciences | Constellations | null |
31773 | https://en.wikipedia.org/wiki/Ursa%20Minor | Ursa Minor | Ursa Minor (, contrasting with Ursa Major), also known as the Little Bear, is a constellation located in the far northern sky. As with the Great Bear, the tail of the Little Bear may also be seen as the handle of a ladle, hence the North American name, Little Dipper: seven stars with four in its bowl like its partner the Big Dipper. Ursa Minor was one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and remains one of the 88 modern constellations. Ursa Minor has traditionally been important for navigation, particularly by mariners, because of Polaris being the north pole star.
Polaris, the brightest star in the constellation, is a yellow-white supergiant and the brightest Cepheid variable star in the night sky, ranging in apparent magnitude from 1.97 to 2.00. Beta Ursae Minoris, also known as Kochab, is an aging star that has swollen and cooled to become an orange giant with an apparent magnitude of 2.08, only slightly fainter than Polaris. Kochab and 3rd-magnitude Gamma Ursae Minoris have been called the "guardians of the pole star" or "Guardians of The Pole". Planets have been detected orbiting four of the stars, including Kochab. The constellation also contains an isolated neutron star—Calvera—and H1504+65, the hottest white dwarf yet discovered, with a surface temperature of 200,000 K.
History and mythology
In the Babylonian star catalogues, Ursa Minor was known as the "Wagon of Heaven" (, also associated with the goddess Damkina). It is listed in the MUL.APIN catalogue, compiled around 1000 BC, among the "Stars of Enlil"—that is, the northern sky.
According to Diogenes Laërtius, citing Callimachus, Thales of Miletus "measured the stars of the Wagon by which the Phoenicians sail". Diogenes identifies these as the constellation of Ursa Minor, which, for its reported use by the Phoenicians for navigation at sea, was also named Phoinikē.
The tradition of naming the northern constellations "bears" appears to be genuinely Greek, although Homer refers to just a single "bear".
The original "bear" is thus Ursa Major, and Ursa Minor was admitted as the second, or "Phoenician Bear" (Ursa Phoenicia, hence Φοινίκη, Phoenice)
only later, according to Strabo (I.1.6, C3), due to a suggestion by Thales, who proposed it as a navigation aid to the Greeks, who had until then steered by Ursa Major. In classical antiquity, the celestial pole was somewhat closer to Beta Ursae Minoris than to Alpha Ursae Minoris, and the entire constellation was taken to indicate the northern direction. Since the medieval period, it has become convenient to use Alpha Ursae Minoris (or "Polaris") as the North Star, even though in the medieval period it was still several degrees away from the celestial pole. Now, Polaris is within 1° of the north celestial pole and remains the current pole star. Its Neo-Latin name of stella polaris was coined only in the early modern period.
The ancient name of the constellation is Cynosura (Greek Κυνοσούρα "dog's tail").
The origin of this name is unclear (Ursa Minor being a "dog's tail" would imply that another constellation nearby is "the dog", but no such constellation is known).
Instead, the mythographic tradition of Catasterismi makes Cynosura the name of an Oread nymph described as a nurse of Zeus, honoured by the god with a place in the sky.
There are various proposed explanations for the name Cynosura. One suggestion connects it to the myth of Callisto, with her son Arcas replaced by her dog being placed in the sky by Zeus.
Others have suggested that an archaic interpretation of Ursa Major was that of a cow, forming a group with Boötes as herdsman, and Ursa Minor as a dog. George William Cox explained it as a variant of Λυκόσουρα, understood as "wolf's tail" but by him etymologized as "trail, or train, of light" (i.e. λύκος "wolf" vs. λύκ- "light"). Allen points to the Old Irish name of the constellation, drag-blod "fire trail", for comparison.
Brown (1899) suggested a non-Greek origin of the name (a loan from an Assyrian An‑nas-sur‑ra "high-rising").
An alternative myth tells of two bears that saved Zeus from his murderous father Cronus by hiding him on Mount Ida. Later Zeus set them in the sky, but their tails grew long from their being swung up into the sky by the god.
Because Ursa Minor consists of seven stars, the Latin word for "north" (i.e., where Polaris points) is septentrio, from septem (seven) and triones (oxen), from seven oxen driving a plough, which the seven stars also resemble. This name has also been attached to the main stars of Ursa Major.
In Inuit astronomy, the three brightest stars — Polaris, Kochab, and Pherkad — were known as Nuutuittut ("never moving"), though the term is more frequently used in the singular to refer to Polaris alone. The Pole Star is too high in the sky at far northern latitudes to be of use in navigation. In Chinese astronomy, the main stars of Ursa Minor are divided between two asterisms:
勾陳 Gòuchén (Curved Array) (including α UMi, δ UMi, ε UMi, ζ UMi, η UMi, θ UMi, λ UMi) and
北極 Běijí (Northern Pole) (including β UMi and γ UMi).
Characteristics
Ursa Minor is bordered by Camelopardalis to the west, Draco to the west, and Cepheus to the east. Covering 256 square degrees, it ranks 56th of the 88 constellations in size. Ursa Minor is colloquially known in the US as the Little Dipper because its seven brightest stars seem to form the shape of a dipper (ladle or scoop). The star at the end of the dipper handle is Polaris. Polaris can also be found by following a line through the two stars—Alpha and Beta Ursae Majoris, popularly called the Pointers—that form the end of the "bowl" of the Big Dipper, for 30 degrees (three upright fists at arms' length) across the night sky. The four stars constituting the bowl of the Little Dipper are of second, third, fourth, and fifth magnitudes, respectively, and provide an easy guide to determining what magnitude stars are visible, useful for city dwellers or testing one's eyesight.
The three-letter abbreviation for the constellation, as adopted by the IAU (International Astronomical Union) in 1922, is "UMi". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 22 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates range from the north celestial pole to 65.40° in the south. Its position in the far northern celestial hemisphere means that the whole constellation is visible only to observers in the northern hemisphere.
Features
Stars
The German cartographer Johann Bayer used the Greek letters alpha to theta to label the most prominent stars in the constellation, while his countryman Johann Elert Bode subsequently added iota through phi. Only lambda and pi remain in use, likely because of their proximity to the north celestial pole. Within the constellation's borders, there are 39 stars brighter than or equal to apparent magnitude 6.5.
The traditional names of the main seven in Johann Bayer's ordering are:
Polaris
Kochab
Pherkad
Yildun
Epsilon Ursae Minoris has no traditional name.
Zeta Ursae Minoris has no traditional name.
Eta Ursae Minoris has no traditional name.
Marking the Little Bear's tail, Polaris, or Alpha Ursae Minoris, is the brightest star in the constellation, varying between apparent magnitudes 1.97 and 2.00 over a period of 3.97 days. Located around 432 light-years away from Earth, it is a yellow-white supergiant that varies between spectral types F7Ib and F8Ib, and has around 6 times the Sun's mass, 2,500 times its luminosity, and 45 times its radius. Polaris is the brightest Cepheid variable star visible from Earth. It is a triple star system, the supergiant primary star having two yellow-white main-sequence star companions that are 17 and 2,400 astronomical units (AU) distant and take 29.6 and 42,000 years respectively to complete one orbit.
Traditionally called Kochab, Beta Ursae Minoris, at apparent magnitude 2.08, is slightly less bright than Polaris. Located around 131 light-years away from Earth, it is an orange giant—an evolved star that has used up the hydrogen in its core and moved off the main sequence—of spectral type K4III. Slightly variable over a period of 4.6 days, Kochab has had its mass estimated at 1.3 times that of the Sun via measurement of these oscillations. Kochab is 450 times more luminous than the Sun and has 42 times its diameter, with a surface temperature of approximately 4,130 K. Estimated to be around 2.95 ± 1 billion years old, Kochab has been announced to have a planetary companion around 6.1 times as massive as Jupiter in a 522-day orbit.
Traditionally known as Pherkad, Gamma Ursae Minoris has an apparent magnitude that varies between 3.04 and 3.09 roughly every 3.4 hours. It and Kochab have been termed the "guardians of the pole star". A white bright giant of spectral type A3II-III, with around 4.8 times the Sun's mass, 1,050 times its luminosity and 15 times its radius, it is 487±8 light-years distant from Earth. Pherkad belongs to a class of stars known as Delta Scuti variables—short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study asteroseismology. Also possibly a member of this class is Zeta Ursae Minoris, a white star of spectral type A3V, which has begun cooling, expanding and brightening. It is likely to have been a B3 main-sequence star and is now slightly variable. At magnitude 4.95, the dimmest of the seven stars of the Little Dipper is Eta Ursae Minoris. A yellow-white main-sequence star of spectral type F5V, it is 97 light-years distant. It is double the Sun's diameter, 1.4 times as massive, and shines with 7.4 times its luminosity. Near Zeta lies 5.00-magnitude Theta Ursae Minoris. Located 860 ± 80 light-years distant, it is an orange giant of spectral type K5III that has expanded and cooled off the main sequence, and has an estimated diameter around 4.8 times that of the Sun.
Making up the handle of the Little Dipper are Delta Ursae Minoris, or Yildun, and Epsilon Ursae Minoris. Just over 3.5 degrees from the north celestial pole, Delta is a white main-sequence star of spectral type A1V with an apparent magnitude of 4.35, located 172±1 light-years from Earth. It has around 2.8 times the diameter and 47 times the luminosity of the Sun. A triple star system, Epsilon Ursae Minoris shines with a combined average light of magnitude 4.22. A yellow giant of spectral type G5III, the primary is a RS Canum Venaticorum variable star. It is a spectroscopic binary, with a companion 0.36 AU distant, and a third star—an orange main-sequence star of spectral type K0—8100 AU distant.
Located close to Polaris is Lambda Ursae Minoris, a red giant of spectral type M1III. It is a semiregular variable varying between magnitudes 6.35 and 6.45. The northerly nature of the constellation means that the variable stars can be observed all year: The red giant R Ursae Minoris is a semiregular variable varying from magnitude 8.5 to 11.5 over 328 days, while S Ursae Minoris is a long-period variable that ranges between magnitudes 8.0 and 11 over 331 days. Located south of Kochab and Pherkad towards Draco is RR Ursae Minoris, a red giant of spectral type M5III that is also a semiregular variable ranging from magnitude 4.44 to 4.85 over a period of 43.3 days. T Ursae Minoris is another red-giant variable star that has undergone a dramatic change in status—from being a long-period (Mira) variable ranging from magnitude 7.8 to 15 over 310–315 days, to being a semiregular variable. The star is thought to have undergone a shell helium flash—a point where the shell of helium around the star's core reaches a critical mass and ignites—marked by its abrupt change in variability in 1979. Z Ursae Minoris is a faint variable star that suddenly dropped 6 magnitudes in 1992 and was identified as one of a rare class of stars—R Coronae Borealis variables.
Eclipsing variables are star systems that vary in brightness because of one star passing in front of the other rather than from any intrinsic change in luminosity. W Ursae Minoris is one such system, its magnitude ranging from 8.51 to 9.59 over 1.7 days. The combined spectrum of the system is A2V, but the masses of the two component stars are unknown. A slight change in the orbital period in 1973 suggests there is a third component of the multiple star system—most likely a red dwarf—with an orbital period of 62.2±3.9 years. RU Ursae Minoris is another example, ranging from 10 to 10.66 over 0.52 days. It is a semidetached system, as the secondary star is filling its Roche lobe and transferring matter to the primary.
RW Ursae Minoris is a cataclysmic variable star system that flared up as a nova in 1956, reaching magnitude 6. In 2003, it was still two magnitudes brighter than its baseline, and dimming at a rate of 0.02 magnitude a year. Its distance has been calculated as 5,000±800 parsecs (16,300 light-years), which puts its location in the galactic halo.
Taken from the villain in The Magnificent Seven, Calvera is the nickname given to an X-ray source known as 1RXS J141256.0+792204 in the ROSAT All-Sky Survey Bright Source Catalog (RASS/BSC). It has been identified as an isolated neutron star, one of the closest of its kind to Earth. Ursa Minor has two enigmatic white dwarfs. Documented on January 27, 2011, H1504+65 is a faint (magnitude 15.9) star with the hottest surface temperature—200,000 K—yet discovered for a white dwarf. Its atmosphere, composed of roughly half carbon, half oxygen and 2% neon, is devoid of hydrogen and helium—its composition unexplainable by current models of stellar evolution. WD 1337+705 is a cooler white dwarf that has magnesium and silicon in its spectrum, suggesting a companion or circumstellar disk, though no evidence for either has come to light. WISE 1506+7027 is a brown dwarf of spectral type T6 that is a mere light-years away from Earth. A faint object of magnitude 14, it was discovered by the Wide-field Infrared Survey Explorer (WISE) in 2011.
Kochab aside, three more stellar systems have been discovered to contain planets. 11 Ursae Minoris is an orange giant of spectral type K4III around 1.8 times as massive as the Sun. Around 1.5 billion years old, it has cooled and expanded since it was an A-type main-sequence star. Around 390 light-years distant, it shines with an apparent magnitude of 5.04. A planet around 11 times the mass of Jupiter was discovered in 2009 orbiting the star with a period of 516 days. HD 120084 is another evolved star, a yellow giant of spectral type G7III, around 2.4 times the mass of the Sun. It has a planet 4.5 times the mass of Jupiter, with one of the most eccentric planetary orbits (e = 0.66), discovered by precisely measuring the radial velocity of the star in 2013. HD 150706 is a sunlike star of spectral type G0V some 89 light-years distant from the Solar System. It was thought to have a planet as massive as Jupiter at a distance of 0.6 AU, but this was discounted in 2007. A further study published in 2012 showed that it has a companion around 2.7 times as massive as Jupiter that takes around 16 years to complete an orbit and is 6.8 AU distant from its star.
Deep-sky objects
Ursa Minor is rather devoid of deep-sky objects. The Ursa Minor Dwarf, a dwarf spheroidal galaxy, was discovered by Albert George Wilson of the Lowell Observatory in the Palomar Sky Survey in 1955. Its centre is around light-years distant from Earth. In 1999, Kenneth Mighell and Christopher Burke used the Hubble Space Telescope to confirm that the galaxy had had a single burst of star formation that took place around 14 billion years ago and lasted around 2 billion years, and that the galaxy was probably as old as the Milky Way itself.
NGC 3172 (also known as Polarissima Borealis) is a faint, magnitude-14.9 galaxy that happens to be the closest NGC object to the north celestial pole. It was discovered by John Herschel in 1831.
NGC 6217 is a barred spiral galaxy located some 67 million light-years away, which can be located with a or larger telescope as an 11th-magnitude object about 2.5° east-northeast of Zeta Ursae Minoris. It has been characterized as a starburst galaxy, which means it is undergoing a high rate of star formation compared with a typical galaxy.
NGC 6251 is an active supergiant elliptical radio galaxy more than 340 million light-years away from Earth. It has a Seyfert 2 active galactic nucleus, and is one of the most extreme examples of a Seyfert galaxy. This galaxy may be associated with gamma-ray source 3EG J1621+8203, which has high-energy gamma-ray emission. It is also noted for its one-sided radio jet—one of the brightest known—discovered in 1977.
Meteor showers
The Ursids, a prominent meteor shower that occurs in Ursa Minor, peaks between December 18 and 25. Its parent body is the comet 8P/Tuttle.
| Physical sciences | Constellations | null |
31780 | https://en.wikipedia.org/wiki/Ultrasound | Ultrasound | Ultrasound is sound with frequencies greater than 20 kilohertz. This frequency is the approximate upper audible limit of human hearing in healthy young adults. The physical principles of acoustic waves apply to any frequency range, including ultrasound. Ultrasonic devices operate with frequencies from 20 kHz up to several gigahertz.
Ultrasound is used in many different fields. Ultrasonic devices are used to detect objects and measure distances. Ultrasound imaging or sonography is often used in medicine. In the nondestructive testing of products and structures, ultrasound is used to detect invisible flaws. Industrially, ultrasound is used for cleaning, mixing, and accelerating chemical processes. Animals such as bats and porpoises use ultrasound for locating prey and obstacles.
History
Acoustics, the science of sound, starts as far back as Pythagoras in the 6th century BC, who wrote on the mathematical properties of stringed instruments. Echolocation in bats was discovered by Lazzaro Spallanzani in 1794, when he demonstrated that bats hunted and navigated by inaudible sound, not vision. Francis Galton in 1893 invented the Galton whistle, an adjustable whistle that produced ultrasound, which he used to measure the hearing range of humans and other animals, demonstrating that many animals could hear sounds above the hearing range of humans.
The first article on the history of ultrasound was written in 1948. According to its author,
during the First World War, a Russian engineer named Chilowski submitted an idea for submarine detection to the French Government. The latter invited Paul Langevin, then Director of the School of Physics and Chemistry in Paris, to evaluate it. Chilowski's proposal was to excite a cylindrical mica condenser with a high-frequency Poulsen arc at approximately 100 kHz and thus to generate an ultrasound beam for detecting submerged objects. The idea of locating underwater obstacles had been suggested earlier by L. F. Richardson, following the Titanic disaster. Richardson had proposed to position a high-frequency hydraulic whistle at the focus of a mirror and use the beam for locating submerged navigational hazards. A prototype was built by Sir Charles Parsons, the inventor of the steam turbine, but the device was found not to be suitable for this purpose.
Langevin's device made use of the piezoelectric effect, which he had been acquainted with whilst a student at the laboratory of Jacques and Pierre Curie. Langevin calculated and built an ultrasound transducer comprising a thin sheet of quartz sandwiched between two steel plates. Langevin was the first to report cavitation-related bioeffects from ultrasound.
Definition
Ultrasound is defined by the American National Standards Institute as "sound at frequencies greater than 20 kHz". In air at atmospheric pressure, ultrasonic waves have wavelengths of 1.9 cm or less.
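The wavelength figure follows from the basic relation between sound speed and frequency. As a worked check (not from the source; about 343 m/s is the speed of sound in air at 20 °C):

\[ \lambda = \frac{c}{f} \approx \frac{343\ \text{m/s}}{20\,000\ \text{Hz}} \approx 1.7\ \text{cm}. \]

Figures closer to the 1.9 cm quoted above correspond to taking a slightly lower audibility cutoff, around 18 kHz, since 343/18,000 ≈ 1.9 cm.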
Ultrasound can be generated at very high frequencies; ultrasound is used for sonochemistry at frequencies up to multiple hundreds of kilohertz. Medical imaging equipment uses frequencies in the MHz range. UHF ultrasound waves have been generated as high as the gigahertz range.
Characterizing extremely high-frequency ultrasound poses challenges, as such rapid movement causes waveforms to steepen and form shock waves.
Perception
Humans
The upper frequency limit in humans (approximately 20 kHz) is due to limitations of the middle ear. Auditory sensation can occur if high‐intensity ultrasound is fed directly into the human skull and reaches the cochlea through bone conduction, without passing through the middle ear.
Children can hear some high-pitched sounds that older adults cannot, because in humans the upper pitch limit of hearing tends to decrease with age. An American cell phone company has used this to create ring signals supposedly audible only to younger humans; many older people can hear the signals, however, which may be due to the considerable individual variation in age-related deterioration of the upper hearing threshold.
Animals
Bats use a variety of ultrasonic ranging (echolocation) techniques to detect their prey. They can detect frequencies beyond 100 kHz, possibly up to 200 kHz.
Many insects have good ultrasonic hearing, and most of these are nocturnal insects listening for echolocating bats. These include many groups of moths, beetles, praying mantises and lacewings. Upon hearing a bat, some insects will make evasive manoeuvres to escape being caught. Ultrasonic frequencies trigger a reflex action in the noctuid moth that causes it to drop slightly in its flight to evade attack. Tiger moths also emit clicks which may disturb bats' echolocation, and in other cases may advertise the fact that they are poisonous by emitting sound.
Dogs and cats' hearing range extends into the ultrasound; the top end of a dog's hearing range is about 45 kHz, while a cat's is 64 kHz. The wild ancestors of cats and dogs evolved this higher hearing range to hear high-frequency sounds made by their preferred prey, small rodents. A dog whistle is a whistle that emits ultrasound, used for training and calling dogs. The frequency of most dog whistles is within the range of 23 to 54 kHz.
Toothed whales, including dolphins, can hear ultrasound and use such sounds in their navigational system (biosonar) to orient and to capture prey. Porpoises have the highest known upper hearing limit at around 160 kHz. Several types of fish can detect ultrasound. In the order Clupeiformes, members of the subfamily Alosinae (shad) have been shown to be able to detect sounds up to 180 kHz, while the other subfamilies (e.g. herrings) can hear only up to 4 kHz.
No bird species have been reported to be sensitive to ultrasound.
Commercial ultrasonic systems have been sold for supposed indoors electronic pest control and outdoors ultrasonic algae control. However, no scientific evidence exists on the success of such devices for these purposes.
Detection and ranging
Non-contact sensor
An ultrasonic level or sensing system requires no contact with the target. For many processes in the medical, pharmaceutical, military and general industries this is an advantage over inline sensors that may contaminate the liquids inside a vessel or tube or that may be clogged by the product.
Both continuous wave and pulsed systems are used. The principle behind a pulsed-ultrasonic technology is that the transmit signal consists of short bursts of ultrasonic energy. After each burst, the electronics looks for a return signal within a small window of time corresponding to the time it takes for the energy to pass through the vessel. Only a signal received during this window will qualify for additional signal processing.
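A minimal sketch of that gating logic in Python (illustrative only; the function names, ranges, and the assumed sound speed in air are hypothetical, not taken from any particular product):

def echo_window_s(min_range_m, max_range_m, sound_speed_m_s=343.0):
    # Round-trip travel times bounding the region of interest.
    return (2.0 * min_range_m / sound_speed_m_s,
            2.0 * max_range_m / sound_speed_m_s)

def accept_echo(echo_time_s, window):
    # Only echoes arriving inside the window qualify for further processing.
    t_min, t_max = window
    return t_min <= echo_time_s <= t_max

window = echo_window_s(0.2, 2.0)   # contents expected 0.2 m to 2 m away
print(accept_echo(0.004, window))  # echo from about 0.69 m -> True
print(accept_echo(0.030, window))  # echo from beyond 2 m -> False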
A popular consumer application of ultrasonic ranging was the Polaroid SX-70 camera, which included a lightweight transducer system to focus the camera automatically. Polaroid later licensed this ultrasound technology and it became the basis of a variety of ultrasonic products.
Motion sensors and flow measurement
A common ultrasound application is an automatic door opener, where an ultrasonic sensor detects a person's approach and opens the door. Ultrasonic sensors are also used to detect intruders; the ultrasound can cover a wide area from a single point. The flow in pipes or open channels can be measured by ultrasonic flowmeters, which measure the average velocity of flowing liquid. In rheology, an acoustic rheometer relies on the principle of ultrasound. In fluid mechanics, fluid flow can be measured using an ultrasonic flow meter.
Nondestructive testing
Ultrasonic testing is a type of nondestructive testing commonly used to find flaws in materials and to measure the thickness of objects. Frequencies of 2 to 10 MHz are common, but for special purposes other frequencies are used. Inspection may be manual or automated and is an essential part of modern manufacturing processes. Most metals can be inspected as well as plastics and aerospace composites. Lower frequency ultrasound (50–500 kHz) can also be used to inspect less dense materials such as wood, concrete and cement.
Ultrasound inspection of welded joints has been an alternative to radiography for nondestructive testing since the 1960s. Ultrasonic inspection eliminates the use of ionizing radiation, with safety and cost benefits. Ultrasound can also provide additional information such as the depth of flaws in a welded joint. Ultrasonic inspection has progressed from manual methods to computerized systems that automate much of the process. An ultrasonic test of a joint can identify the existence of flaws, measure their size, and identify their location. Not all welded materials are equally amenable to ultrasonic inspection; some materials have a large grain size that produces a high level of background noise in measurements.
Ultrasonic thickness measurement is one technique used to monitor quality of welds.
Ultrasonic range finding
A common use of ultrasound is in underwater range finding; this use is also called sonar. An ultrasonic pulse is generated in a particular direction. If there is an object in the path of this pulse, part or all of the pulse will be reflected back to the transmitter as an echo and can be detected through the receiver path. By measuring the difference in time between the pulse being transmitted and the echo being received, it is possible to determine the distance.
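The arithmetic is a one-liner: the pulse travels out and back, so the one-way distance is half the speed of sound times the round-trip time. A minimal Python sketch (the 1,500 m/s default is a typical speed of sound in seawater, an assumption rather than a constant of the method):

def echo_distance_m(round_trip_s, sound_speed_m_s=1500.0):
    # One-way distance recovered from a pulse-echo round-trip time.
    return sound_speed_m_s * round_trip_s / 2.0

print(echo_distance_m(0.040))  # an echo after 40 ms in seawater -> 30.0 m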
The measured travel time of sonar pulses in water is strongly dependent on the temperature and the salinity of the water. Ultrasonic ranging is also applied for measurement in air and for short distances. For example, hand-held ultrasonic measuring tools can rapidly measure the layout of rooms.
Although range finding underwater is performed at both sub-audible and audible frequencies over great distances (one to several kilometers), ultrasonic range finding is used when distances are shorter and finer accuracy of the distance measurement is desired. Ultrasonic measurements may be limited by barrier layers with large salinity, temperature or vortex differentials. Ranging in water varies from hundreds to thousands of meters, but can be performed with centimeters-to-meters accuracy.
Ultrasound Identification (USID)
Ultrasound Identification (USID) is a Real-Time Locating System (RTLS) or Indoor Positioning System (IPS) technology used to automatically track and identify the location of objects in real time. Simple, inexpensive nodes (badges or tags) are attached to or embedded in objects and devices, and transmit an ultrasound signal to communicate their location to microphone sensors.
Imaging
The potential for ultrasonic imaging of objects, in which a 3 GHz sound wave could produce resolution comparable to an optical image, was recognized by Sergei Sokolov in 1939. Such frequencies were not possible at the time, and what technology did exist produced relatively low-contrast images with poor sensitivity.
Ultrasonic imaging uses frequencies of 2 megahertz and higher; the shorter wavelength allows resolution of small internal details in structures and tissues. The power density is generally less than 1 watt per square centimetre to avoid heating and cavitation effects in the object under examination. Ultrasonic imaging applications include industrial nondestructive testing, quality control and medical uses.
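The role of frequency is easy to quantify. As a worked example (not from the source; 1,540 m/s is a commonly used average speed of sound in soft tissue):

\[ \lambda = \frac{c}{f} \approx \frac{1540\ \text{m/s}}{2\times10^{6}\ \text{Hz}} \approx 0.77\ \text{mm}, \]

so structures much smaller than a millimetre cannot be resolved at 2 MHz; higher frequencies give finer resolution at the cost of penetration depth.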
Acoustic microscopy
Acoustic microscopy is the technique of using sound waves to visualize structures too small to be resolved by the human eye. High and ultra high frequencies up to several gigahertz are used in acoustic microscopes. The reflection and diffraction of sound waves from microscopic structures can yield information not available with light.
Human medicine
Medical ultrasound is an ultrasound-based diagnostic medical imaging technique used to visualize muscles, tendons, and many internal organs, capturing their size, structure and any pathological lesions with real-time tomographic images. Ultrasound has been used by radiologists and sonographers to image the human body for at least 50 years and has become a widely used diagnostic tool. The technology is relatively inexpensive and portable, especially when compared with other techniques, such as magnetic resonance imaging (MRI) and computed tomography (CT). Ultrasound is also used to visualize fetuses during routine and emergency prenatal care. Such diagnostic applications used during pregnancy are referred to as obstetric sonography. As currently applied in the medical field, properly performed ultrasound poses no known risks to the patient. Sonography does not use ionizing radiation, and the power levels used for imaging are too low to cause adverse heating or pressure effects in tissue. Although the long-term effects of ultrasound exposure at diagnostic intensity are still unknown, most doctors currently feel that the benefits to patients outweigh the risks. The ALARA (As Low As Reasonably Achievable) principle has been advocated for ultrasound examinations: keeping scanning time and power settings as low as possible while remaining consistent with diagnostic imaging. By that principle, nonmedical uses, which by definition are not necessary, are actively discouraged.
Ultrasound is also increasingly being used in trauma and first aid cases, with emergency ultrasound being used by some EMT response teams. Furthermore, ultrasound is used in remote diagnosis cases where teleconsultation is required, such as scientific experiments in space or mobile sports team diagnosis.
According to RadiologyInfo, ultrasounds are useful in the detection of pelvic abnormalities and can involve techniques known as abdominal (transabdominal) ultrasound, vaginal (transvaginal or endovaginal) ultrasound in women, and also rectal (transrectal) ultrasound in men.
Veterinary medicine
Diagnostic ultrasound is used externally in horses for evaluation of soft tissue and tendon injuries, and internally in particular for reproductive work: evaluation of the reproductive tract of the mare and pregnancy detection. It may also be used externally in stallions for evaluation of testicular condition and diameter, as well as internally for reproductive evaluation (deferent duct etc.).
By 2005, ultrasound technology began to be used by the beef cattle industry to improve animal health and the yield of cattle operations. Ultrasound is used to evaluate fat thickness, rib eye area, and intramuscular fat in living animals. It is also used to evaluate the health and characteristics of unborn calves.
Ultrasound technology provides a means for cattle producers to obtain information that can be used to improve the breeding and husbandry of cattle. The technology can be expensive, and it requires a substantial time commitment for continuous data collection and operator training. Nevertheless, this technology has proven useful in managing and running a cattle breeding operation.
Processing and power
High-power applications of ultrasound often use frequencies between 20 kHz and a few hundred kilohertz. Intensities can be very high; above 10 watts per square centimeter, cavitation can be induced in liquid media, and some applications use up to 1,000 watts per square centimeter. Such high intensities can induce chemical changes or produce significant effects by direct mechanical action, and can inactivate harmful microorganisms.
Physical therapy
Ultrasound has been used since the 1940s by physical and occupational therapists for treating connective tissue: ligaments, tendons, and fascia (and also scar tissue). Conditions for which ultrasound may be used for treatment include the following: ligament sprains, muscle strains, tendonitis, joint inflammation, plantar fasciitis, metatarsalgia, facet irritation, impingement syndrome, bursitis, rheumatoid arthritis, osteoarthritis, and scar tissue adhesion.
Relatively high power ultrasound can break up stony deposits or tissue, increase skin permeability, accelerate the effect of drugs in a targeted area, assist in the measurement of the elastic properties of tissue, and can be used to sort cells or small particles for research.
Ultrasonic impact treatment
Ultrasonic impact treatment (UIT) uses ultrasound to enhance the mechanical and physical properties of metals. It is a metallurgical processing technique in which ultrasonic energy is applied to a metal object. Ultrasonic treatment can result in controlled residual compressive stress, grain refinement and grain size reduction. Resistance to low- and high-cycle fatigue is enhanced, with documented increases of up to ten times over non-UIT specimens. Additionally, UIT has proven effective in addressing stress corrosion cracking, corrosion fatigue and related issues.
When the UIT tool, made up of the ultrasonic transducer, pins and other components, comes into contact with the workpiece, it acoustically couples with it, creating harmonic resonance. This harmonic resonance is performed at a carefully calibrated frequency, to which metals respond very favorably.
Depending on the desired effects of treatment a combination of different frequencies and displacement amplitude is applied. These frequencies range between 25 and 55 kHz, with the displacement amplitude of the resonant body of between 22 and 50 μm (0.00087 and 0.0020 in).
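As a rough illustration of what these settings imply, the peak vibration velocity of the resonant body follows from the frequency f and displacement amplitude a as v = 2πfa; taking mid-range values from above (f = 40 kHz, a = 30 μm, a choice made here purely for illustration):

\[ v_{\max} = 2\pi f a = 2\pi \,(4.0\times10^{4}\ \mathrm{Hz})\,(3.0\times10^{-5}\ \mathrm{m}) \approx 7.5\ \mathrm{m/s} \]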
UIT devices rely on magnetostrictive transducers.
Processing
Ultrasonication offers great potential in the processing of liquids and slurries, by improving the mixing and chemical reactions in various applications and industries. Ultrasonication generates alternating low-pressure and high-pressure waves in liquids, leading to the formation and violent collapse of small vacuum bubbles. This phenomenon, termed cavitation, causes high-speed impinging liquid jets and strong hydrodynamic shear forces. These effects are used for the deagglomeration and milling of micrometre- and nanometre-size materials, as well as for the disintegration of cells or the mixing of reactants. In this respect, ultrasonication is an alternative to high-speed mixers and agitator bead mills. Ultrasonic foils under the moving wire in a paper machine use the shock waves from the imploding bubbles to distribute the cellulose fibres more uniformly in the produced paper web, which makes a stronger paper with more even surfaces. Furthermore, chemical reactions benefit from the free radicals created by the cavitation as well as from the energy input and the material transfer through boundary layers. For many processes, this sonochemical (see sonochemistry) effect leads to a substantial reduction in the reaction time, as in the transesterification of oil into biodiesel.
Substantial ultrasonic intensity and high ultrasonic vibration amplitudes are required for many processing applications, such as nano-crystallization, nano-emulsification, deagglomeration, extraction, cell disruption, as well as many others. Commonly, a process is first tested on a laboratory scale to prove feasibility and establish some of the required ultrasonic exposure parameters. After this phase is complete, the process is transferred to a pilot (bench) scale for flow-through pre-production optimization and then to an industrial scale for continuous production. During these scale-up steps, it is essential to make sure that all local exposure conditions (ultrasonic amplitude, cavitation intensity, time spent in the active cavitation zone, etc.) stay the same. If this condition is met, the quality of the final product remains at the optimized level, while the productivity is increased by a predictable "scale-up factor". The productivity increase results from the fact that laboratory, bench and industrial-scale ultrasonic processor systems incorporate progressively larger ultrasonic horns, able to generate progressively larger high-intensity cavitation zones and, therefore, to process more material per unit of time. This is called "direct scalability". It is important to point out that increasing the power of the ultrasonic processor alone does not result in direct scalability, since it may be (and frequently is) accompanied by a reduction in the ultrasonic amplitude and cavitation intensity. During direct scale-up, all processing conditions must be maintained, while the power rating of the equipment is increased in order to enable the operation of a larger ultrasonic horn.
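A minimal sketch of the bookkeeping behind "direct scalability", under the assumption that the specific energy delivered per millilitre (and with it every local exposure condition) is held fixed while throughput grows; the function name and all numbers below are illustrative, not taken from any particular equipment line:

```python
# Direct scale-up at constant local exposure: if the specific energy
# (J/mL) delivered in the cavitation zone is held fixed, the required
# power rating grows in proportion to the throughput.

def scaled_power(lab_power_w: float,
                 lab_flow_ml_per_min: float,
                 target_flow_ml_per_min: float) -> float:
    """Power needed at the target throughput, keeping J/mL constant."""
    specific_energy = lab_power_w * 60.0 / lab_flow_ml_per_min  # J/mL
    return specific_energy * target_flow_ml_per_min / 60.0      # W

# Laboratory feasibility run: 500 W at 100 mL/min.
# Industrial target: 50 L/min, i.e. a scale-up factor of 500.
print(scaled_power(500.0, 100.0, 50_000.0))  # 250000.0 W, i.e. 250 kW
```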
Ultrasonic manipulation and characterization of particles
A researcher at the Industrial Materials Research Institute, Alessandro Malutta, devised an experiment that demonstrated the trapping action of ultrasonic standing waves on wood pulp fibers diluted in water and their parallel orienting into the equidistant pressure planes. The time to orient the fibers in equidistant planes is measured with a laser and an electro-optical sensor. This could provide the paper industry a quick on-line fiber size measurement system. A somewhat different implementation was demonstrated at Pennsylvania State University using a microchip that generated a pair of perpendicular standing surface acoustic waves, allowing particles to be positioned equidistantly from one another on a grid. This technique, called acoustic tweezers, can be used for applications in material sciences, biology, physics, chemistry and nanotechnology.
Ultrasonic cleaning
Ultrasonic cleaners, sometimes mistakenly called supersonic cleaners, are used at frequencies from 20 to 40 kHz for jewellery, lenses and other optical parts, watches, dental instruments, surgical instruments, diving regulators and industrial parts. An ultrasonic cleaner works mostly by energy released from the collapse of millions of microscopic cavitation bubbles near the dirty surface. The collapsing bubbles form tiny shockwaves that break up and disperse contaminants on the object's surface.
Ultrasonic disintegration
Similar to ultrasonic cleaning, biological cells including bacteria can be disintegrated. High power ultrasound produces cavitation that facilitates particle disintegration or reactions. This has uses in biological science for analytical or chemical purposes (sonication and sonoporation) and in killing bacteria in sewage. High power ultrasound can disintegrate corn slurry and enhance liquefaction and saccharification for higher ethanol yield in dry corn milling plants.
Ultrasonic humidifier
The ultrasonic humidifier, one type of nebulizer (a device that creates a very fine spray), is a popular type of humidifier. It works by vibrating a metal plate at ultrasonic frequencies to nebulize (sometimes incorrectly called "atomize") the water. Because the water is not heated for evaporation, it produces a cool mist. The ultrasonic pressure waves nebulize not only the water but also materials in the water, including calcium, other minerals, viruses, fungi, bacteria, and other impurities. Illness caused by impurities that reside in a humidifier's reservoir falls under the heading of "Humidifier Fever".
Ultrasonic humidifiers are frequently used in aeroponics, where they are generally referred to as foggers.
Ultrasonic welding
In ultrasonic welding of plastics, high frequency (15 kHz to 40 kHz) low amplitude vibration is used to create heat by way of friction between the materials to be joined. The interface of the two parts is specially designed to concentrate the energy for maximum weld strength.
Sonochemistry
Power ultrasound in the 20–100 kHz range is used in chemistry. The ultrasound does not interact directly with molecules to induce the chemical change, as its typical wavelength (in the millimeter range) is too long compared to the molecules. Instead, the energy causes cavitation, which generates extremes of temperature and pressure in the liquid where the reaction happens. Ultrasound also breaks up solids and removes passivating layers of inert material to give a larger surface area for the reaction to occur over. Both of these effects make the reaction faster. In 2008, Atul Kumar reported synthesis of Hantzsch esters and polyhydroquinoline derivatives via a multi-component reaction protocol in aqueous micelles using ultrasound.
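The wavelength claim can be checked directly from λ = c/f, assuming a sound speed of roughly 1480 m/s in water:

\[ \lambda = \frac{c}{f} = \frac{1480\ \mathrm{m/s}}{100\ \mathrm{kHz}} \approx 15\ \mathrm{mm}, \qquad \lambda(20\ \mathrm{kHz}) \approx 74\ \mathrm{mm} \]

Either value is many orders of magnitude larger than molecular dimensions, which is why the energy must act indirectly, through cavitation, rather than by coupling to the molecules themselves.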
Ultrasound is used in extraction, using different frequencies.
Other uses
When applied in specific configurations, ultrasound can produce short bursts of light in a phenomenon known as sonoluminescence.
Ultrasound is used when characterizing particulates through the technique of ultrasound attenuation spectroscopy or by observing electroacoustic phenomena or by transcranial pulsed ultrasound.
Wireless communication
Audio can be propagated by modulated ultrasound.
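A minimal sketch of one such scheme, amplitude-modulating an audio tone onto an ultrasonic carrier; the carrier frequency, modulation depth, and tone below are illustrative choices, and practical systems such as parametric loudspeakers add further signal processing:

```python
# Amplitude-modulate a 1 kHz audio tone onto a 40 kHz ultrasonic
# carrier. Parameters are illustrative; real systems preprocess the
# audio to reduce distortion on demodulation in air.
import numpy as np

FS = 192_000          # sample rate, Hz (must exceed 2x the carrier)
F_CARRIER = 40_000    # ultrasonic carrier, Hz
F_AUDIO = 1_000       # audio tone, Hz
DEPTH = 0.5           # modulation depth, 0..1

t = np.arange(0, 0.01, 1 / FS)                 # 10 ms of signal
audio = np.sin(2 * np.pi * F_AUDIO * t)        # baseband audio
carrier = np.sin(2 * np.pi * F_CARRIER * t)    # ultrasonic carrier
modulated = (1 + DEPTH * audio) * carrier      # classic AM

print(modulated.shape)  # (1920,) samples ready for an ultrasonic emitter
```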
A formerly popular consumer application of ultrasound was in television remote controls for adjusting volume and changing channels. Introduced by Zenith in the late 1950s, the system used a hand-held remote control containing short rod resonators struck by small hammers, and a microphone on the set. Filters and detectors discriminated between the various operations. The principal advantages were that no battery was needed in the hand-held control box and, unlike radio waves, the ultrasound was unlikely to affect neighboring sets. Ultrasound remained in use until displaced by infrared systems starting in the late 1980s.
In July 2015, The Economist reported that researchers at the University of California, Berkeley had conducted ultrasound studies using graphene diaphragms. The thinness and low weight of graphene, combined with its strength, make it an effective material for ultrasound communications. One suggested application of the technology would be underwater communications, where radio waves typically do not travel well.
Ultrasonic signals have been used in "audio beacons" for cross-device tracking of Internet users.
Safety
Occupational exposure to ultrasound in excess of 120 dB may lead to hearing loss. Exposure in excess of 155 dB may produce heating effects that are harmful to the human body, and it has been calculated that exposures above 180 dB may lead to death. The UK's independent Advisory Group on Non-ionising Radiation (AGNIR) produced a report in 2010, which was published by the UK Health Protection Agency (HPA). This report recommended an exposure limit for the general public to airborne ultrasound sound pressure levels (SPL) of 70 dB (at 20 kHz), and 100 dB (at 25 kHz and above).
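For scale, these sound pressure levels convert to pressure amplitudes via the standard airborne reference pressure p₀ = 20 μPa, so the 120 dB figure corresponds to:

\[ p = p_{0}\,10^{L/20}, \qquad p(120\ \mathrm{dB}) = 20\ \mu\mathrm{Pa} \times 10^{6} = 20\ \mathrm{Pa} \]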
In medical ultrasound, guidelines exist to prevent inertial cavitation from happening. The risk of inertial cavitation damage is expressed by the mechanical index.
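The mechanical index is the derated peak negative (rarefactional) pressure, in MPa, divided by the square root of the centre frequency, in MHz. For an illustrative 3 MHz probe operating at a 1.2 MPa peak negative pressure (values chosen here only as an example):

\[ \mathrm{MI} = \frac{p_{\mathrm{neg}}}{\sqrt{f_{c}}} = \frac{1.2}{\sqrt{3}} \approx 0.69 \]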
| Physical sciences | Waves | null |
31863 | https://en.wikipedia.org/wiki/Uterus | Uterus | The uterus (from Latin uterus; plural: uteri or uteruses) or womb is the organ in the reproductive system of most female mammals, including humans, that accommodates the embryonic and fetal development of one or more fertilized eggs until birth. The uterus is a hormone-responsive sex organ that contains glands in its lining that secrete uterine milk for embryonic nourishment. (The term uterus is also applied to analogous structures in some non-mammalian animals.)
In the human, the lower end of the uterus is a narrow part known as the isthmus that connects to the cervix, the anterior gateway leading to the vagina. The upper end, the body of the uterus, is connected to the fallopian tubes at the uterine horns; the rounded part, the fundus, is above the openings to the fallopian tubes. The connection of the uterine cavity with a fallopian tube is called the uterotubal junction. The fertilized egg is carried to the uterus along the fallopian tube. It will have divided on its journey to form a blastocyst that will implant itself into the lining of the uterus – the endometrium, where it will receive nutrients and develop into the embryo proper, and later fetus, for the duration of the pregnancy.
In the human embryo, the uterus develops from the paramesonephric ducts, which fuse into the single organ known as a simplex uterus. The uterus has different forms in many other animals and in some it exists as two separate uteri known as a duplex uterus.
In medicine and related professions, the term uterus is consistently used, while the Germanic-derived term womb is commonly used in everyday contexts. Events occurring within the uterus are described with the term in utero.
Structure
In humans, the uterus is located within the pelvic region immediately behind and almost overlying the bladder, and in front of the sigmoid colon. The human uterus is pear-shaped, about 7.6 cm long, 4.5 cm broad (side to side), and 3.0 cm thick. A typical adult uterus weighs about 60 grams. The uterus can be divided anatomically into four regions: the fundus – the uppermost rounded portion of the uterus above the openings of the fallopian tubes, the body, the cervix, and the cervical canal. The cervix protrudes into the vagina. The uterus is held in position within the pelvis by ligaments, which are part of the endopelvic fascia. These ligaments include the pubocervical ligaments, the cardinal ligaments, and the uterosacral ligaments. It is covered by a sheet-like fold of peritoneum, the broad ligament.
Layers
The uterus has three layers, which together form the uterine wall. From innermost to outermost, these layers are the endometrium, myometrium, and perimetrium.
The endometrium is the inner epithelial layer, along with its mucous membrane, of the mammalian uterus. It has a basal layer and a functional layer; the functional layer thickens and then is shed during the menstrual cycle or estrous cycle. During pregnancy, the uterine glands and blood vessels in the endometrium further increase in size and number and form the decidua. Vascular spaces fuse and become interconnected, forming the placenta, which supplies oxygen and nutrition to the embryo and fetus.
The myometrium of the uterus mostly consists of smooth muscle. The innermost layer of myometrium is known as the junctional zone, which becomes thickened in adenomyosis.
The perimetrium is a serous layer of visceral peritoneum. It covers the outer surface of the uterus.
Surrounding the uterus is a layer or band of fibrous and fatty connective tissue called the parametrium that connects the uterus to other tissues of the pelvis.
Commensal and mutualistic organisms are present in the uterus and form the uterine microbiome.
Support
The uterus is primarily supported by the pelvic diaphragm, perineal body, and urogenital diaphragm. Secondarily, it is supported by ligaments, including the peritoneal ligament and the broad ligament of uterus.
Major ligaments
The uterus is held in place by several peritoneal ligaments; the most important of these occur in pairs, among them the uterosacral ligaments and the cardinal ligaments.
Axis
Normally, the human uterus lies in anteversion and anteflexion. In most women, the long axis of the uterus is bent forward on the long axis of the vagina, against the urinary bladder. This position is referred to as anteversion of the uterus. Furthermore, the long axis of the body of the uterus is bent forward at the level of the internal os with the long axis of the cervix. This position is termed anteflexion of the uterus. The uterus assumes an anteverted position in 50% of women, a retroverted position in 25% of women, and a midposed position in the remaining 25% of women.
Position
The uterus is located in the middle of the pelvic cavity, in the frontal plane (due to the broad ligament of the uterus). The fundus does not extend above the linea terminalis, while the vaginal part of the cervix does not extend below the interspinal line. The uterus is mobile and moves posteriorly under the pressure of a full bladder, or anteriorly under the pressure of a full rectum. If both are full, it moves upwards. Increased intra-abdominal pressure pushes it downwards. The mobility is conferred to it by a musculo-fibrous apparatus that consists of suspensory and sustentacular parts. Under normal circumstances, the suspensory part keeps the uterus in anteflexion and anteversion (in 90% of women) and keeps it "floating" in the pelvis. The meanings of these terms are described below:
The sustentacular part supports the pelvic organs and comprises the larger pelvic diaphragm in the back and the smaller urogenital diaphragm in the front.
The pathological changes of the position of the uterus are:
retroversion/retroflexion, if it is fixed
hyperanteflexion – tipped too forward; most commonly congenital, but may be caused by tumors
anteposition, retroposition, lateroposition – the whole uterus is moved; caused by parametritis or tumors
elevation, descensus, prolapse
rotation (the whole uterus rotates around its longitudinal axis), torsion (only the body of the uterus rotates around)
inversion
In cases where the uterus is "tipped", also known as retroverted uterus, the woman may have symptoms of pain during sexual intercourse, pelvic pain during menstruation, minor incontinence, urinary tract infections, fertility difficulties, and difficulty using tampons. A pelvic examination by a doctor can determine if a uterus is tipped.
Blood, lymph, and nerve supply
The human uterus is supplied by arterial blood both from the uterine artery and the ovarian artery. Another anastomotic branch may also supply the uterus from anastomosis of these two arteries.
Afferent nerves supplying the uterus are T11 and T12. Sympathetic supply is from the hypogastric plexus and the ovarian plexus. Parasympathetic supply is from the S2, S3 and S4 nerves.
Development
Bilateral Müllerian ducts form during early human fetal life. In males, anti-Müllerian hormone (AMH) secreted from the testes leads to the ducts' regression. In females, these ducts give rise to the fallopian tubes and the uterus. In humans, the lower segments of the two ducts fuse to form a single uterus; in cases of uterine malformations this fusion may be disturbed. The different uterine morphologies among the mammals are due to varying degrees of fusion of the Müllerian ducts.
Various congenital conditions of the uterus can develop in utero. Though uncommon, some of these are didelphic uterus, bicornuate uterus and others.
| Biology and health sciences | Reproductive system | null |
31880 | https://en.wikipedia.org/wiki/Universe | Universe | The universe is all of space and time and their contents. It comprises all of existence, any fundamental interaction, physical process and physical constant, and therefore all forms of matter and energy, and the structures they form, from sub-atomic particles to entire galactic filaments. Since the early 20th century, the field of cosmology has established that space and time emerged together at the Big Bang about 13.8 billion years ago and that the universe has been expanding since then. The portion of the universe that can be seen by humans is approximately 93 billion light-years in diameter at present, but the total size of the universe is not known.
Some of the earliest cosmological models of the universe were developed by ancient Greek and Indian philosophers and were geocentric, placing Earth at the center. Over the centuries, more precise astronomical observations led Nicolaus Copernicus to develop the heliocentric model with the Sun at the center of the Solar System. In developing the law of universal gravitation, Isaac Newton built upon Copernicus's work as well as Johannes Kepler's laws of planetary motion and observations by Tycho Brahe.
Further observational improvements led to the realization that the Sun is one of a few hundred billion stars in the Milky Way, which is one of a few hundred billion galaxies in the observable universe. Many of the stars in a galaxy have planets. At the largest scale, galaxies are distributed uniformly and the same in all directions, meaning that the universe has neither an edge nor a center. At smaller scales, galaxies are distributed in clusters and superclusters which form immense filaments and voids in space, creating a vast foam-like structure. Discoveries in the early 20th century have suggested that the universe had a beginning and has been expanding since then.
According to the Big Bang theory, the energy and matter initially present have become less dense as the universe expanded. After an initial accelerated expansion called the inflationary epoch at around 10⁻³² seconds, and the separation of the four known fundamental forces, the universe gradually cooled and continued to expand, allowing the first subatomic particles and simple atoms to form. Giant clouds of hydrogen and helium were gradually drawn to the places where matter was most dense, forming the first galaxies, stars, and everything else seen today.
From studying the effects of gravity on both matter and light, it has been discovered that the universe contains much more matter than is accounted for by visible objects; stars, galaxies, nebulas and interstellar gas. This unseen matter is known as dark matter. In the widely accepted ΛCDM cosmological model, dark matter accounts for about 27% of the mass and energy in the universe, while about 68% is dark energy, a mysterious form of energy responsible for the acceleration of the expansion of the universe. Ordinary ('baryonic') matter therefore composes only about 5% of the universe. Stars, planets, and visible gas clouds only form about 6% of this ordinary matter.
There are many competing hypotheses about the ultimate fate of the universe and about what, if anything, preceded the Big Bang, while other physicists and philosophers refuse to speculate, doubting that information about prior states will ever be accessible. Some physicists have suggested various multiverse hypotheses, in which the universe might be one among many.
Definition
The physical universe is defined as all of space and time (collectively referred to as spacetime) and their contents. Such contents comprise all of energy in its various forms, including electromagnetic radiation and matter, and therefore planets, moons, stars, galaxies, and the contents of intergalactic space. The universe also includes the physical laws that influence energy and matter, such as conservation laws, classical mechanics, and relativity.
The universe is often defined as "the totality of existence", or everything that exists, everything that has existed, and everything that will exist. In fact, some philosophers and scientists support the inclusion of ideas and abstract concepts—such as mathematics and logic—in the definition of the universe. The word universe may also refer to concepts such as the cosmos, the world, and nature.
Etymology
The word universe derives from the Old French word univers, which in turn derives from the Latin word universum, meaning 'combined into one'. The Latin word universum was used by Cicero and later Latin authors in many of the same senses as the modern English word is used.
Synonyms
A term for universe among the ancient Greek philosophers from Pythagoras onwards was τὸ πᾶν (tò pân) 'the all', defined as all matter and all space, and τὸ ὅλον (tò hólon) 'all things', which did not necessarily include the void. Another synonym was ὁ κόσμος (ho kósmos) meaning 'the world, the cosmos'. Synonyms are also found in Latin authors (totum, mundus, natura) and survive in modern languages, e.g., the German words Das All, Weltall, and Natur for universe. The same synonyms are found in English, such as everything (as in the theory of everything), the cosmos (as in cosmology), the world (as in the many-worlds interpretation), and nature (as in natural laws or natural philosophy).
Chronology and the Big Bang
The prevailing model for the evolution of the universe is the Big Bang theory. The Big Bang model states that the earliest state of the universe was an extremely hot and dense one, and that the universe subsequently expanded and cooled. The model is based on general relativity and on simplifying assumptions such as the homogeneity and isotropy of space. A version of the model with a cosmological constant (Lambda) and cold dark matter, known as the Lambda-CDM model, is the simplest model that provides a reasonably good account of various observations about the universe.
The initial hot, dense state is called the Planck epoch, a brief period extending from time zero to one Planck time unit of approximately 10⁻⁴³ seconds. During the Planck epoch, all types of matter and all types of energy were concentrated into a dense state, and gravity—currently the weakest by far of the four known forces—is believed to have been as strong as the other fundamental forces, and all the forces may have been unified. The physics controlling this very early period (including quantum gravity in the Planck epoch) is not understood, so we cannot say what, if anything, happened before time zero. Since the Planck epoch, the universe has been expanding to its present scale, with a very short but intense period of cosmic inflation speculated to have occurred within the first 10⁻³² seconds. This initial period of inflation would explain why space appears to be very flat.
Within the first fraction of a second of the universe's existence, the four fundamental forces had separated. As the universe continued to cool from its inconceivably hot state, various types of subatomic particles were able to form in short periods of time known as the quark epoch, the hadron epoch, and the lepton epoch. Together, these epochs encompassed less than 10 seconds of time following the Big Bang. These elementary particles associated stably into ever larger combinations, including stable protons and neutrons, which then formed more complex atomic nuclei through nuclear fusion.
This process, known as Big Bang nucleosynthesis, lasted for about 17 minutes and ended about 20 minutes after the Big Bang, so only the fastest and simplest reactions occurred. About 25% of the protons and all the neutrons in the universe, by mass, were converted to helium, with small amounts of deuterium (a form of hydrogen) and traces of lithium. Any other element was only formed in very tiny quantities. The other 75% of the protons remained unaffected, as hydrogen nuclei.
After nucleosynthesis ended, the universe entered a period known as the photon epoch. During this period, the universe was still far too hot for matter to form neutral atoms, so it contained a hot, dense, foggy plasma of negatively charged electrons, neutral neutrinos and positive nuclei. After about 377,000 years, the universe had cooled enough that electrons and nuclei could form the first stable atoms. This is known as recombination for historical reasons; electrons and nuclei were combining for the first time. Unlike plasma, neutral atoms are transparent to many wavelengths of light, so for the first time the universe also became transparent. The photons released ("decoupled") when these atoms formed can still be seen today; they form the cosmic microwave background (CMB).
As the universe expands, the energy density of electromagnetic radiation decreases more quickly than does that of matter because the energy of each photon decreases as it is cosmologically redshifted. At around 47,000 years, the energy density of matter became larger than that of photons and neutrinos, and began to dominate the large scale behavior of the universe. This marked the end of the radiation-dominated era and the start of the matter-dominated era.
In the earliest stages of the universe, tiny fluctuations within the universe's density led to concentrations of dark matter gradually forming. Ordinary matter, attracted to these by gravity, formed large gas clouds and eventually stars and galaxies where the dark matter was most dense, and voids where it was least dense. After around 100–300 million years, the first stars formed, known as Population III stars. These were probably very massive, luminous, non-metallic and short-lived. They were responsible for the gradual reionization of the universe between about 200–500 million years and 1 billion years, and also for seeding the universe with elements heavier than helium, through stellar nucleosynthesis.
The universe also contains a mysterious energy—possibly a scalar field—called dark energy, the density of which does not change over time. After about 9.8 billion years, the universe had expanded sufficiently so that the density of matter was less than the density of dark energy, marking the beginning of the present dark-energy-dominated era. In this era, the expansion of the universe is accelerating due to dark energy.
Physical properties
Of the four fundamental interactions, gravitation is the dominant at astronomical length scales. Gravity's effects are cumulative; by contrast, the effects of positive and negative charges tend to cancel one another, making electromagnetism relatively insignificant on astronomical length scales. The remaining two interactions, the weak and strong nuclear forces, decline very rapidly with distance; their effects are confined mainly to sub-atomic length scales.
The universe appears to have much more matter than antimatter, an asymmetry possibly related to CP violation. This imbalance between matter and antimatter is partially responsible for the existence of all matter that exists today, since matter and antimatter, if equally produced at the Big Bang, would have completely annihilated each other and left only photons as a result of their interaction. The universe also appears to have neither net electric charge nor net momentum; the absence of net charge and momentum would follow from accepted physical laws (Gauss's law and the non-divergence of the stress–energy–momentum pseudotensor, respectively) if the universe were finite.
Size and regions
Due to the finite speed of light, there is a limit (known as the particle horizon) to how far light can travel over the age of the universe.
The spatial region from which we can receive light is called the observable universe. The proper distance (measured at a fixed time) between Earth and the edge of the observable universe is 46 billion light-years (14 billion parsecs), making the diameter of the observable universe about 93 billion light-years (28 billion parsecs). Although the distance traveled by light from the edge of the observable universe is close to the age of the universe times the speed of light, about 13.8 billion light-years, the proper distance is larger because the edge of the observable universe and the Earth have since moved further apart.
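For illustration, the 46-billion-light-year figure can be reproduced by integrating the flat ΛCDM expansion history numerically. The sketch below assumes rounded Planck-like parameters (H₀ = 67.7 km/s/Mpc, Ωₘ = 0.31, Ω_Λ = 0.69), neglects radiation, and uses helper names chosen here for clarity:

```python
# Comoving distance to high redshift in a flat Lambda-CDM universe,
# D_C = c * integral_0^z dz' / H(z'). Parameter values are rounded
# Planck-like figures, for illustration only.
from scipy.integrate import quad

C_KM_S = 299_792.458        # speed of light, km/s
H0 = 67.7                   # Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_L = 0.31, 0.69

def hubble(z: float) -> float:
    """H(z) in km/s/Mpc for flat Lambda-CDM (radiation neglected)."""
    return H0 * (OMEGA_M * (1 + z) ** 3 + OMEGA_L) ** 0.5

def comoving_distance_gly(z: float) -> float:
    """Comoving distance to redshift z in billions of light-years."""
    mpc, _ = quad(lambda zp: C_KM_S / hubble(zp), 0, z)
    return mpc * 3.2616e-3  # 1 Mpc = 3.2616e6 ly = 3.2616e-3 Gly

print(comoving_distance_gly(1100))  # ~46 Gly, edge of the observable universe
```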
For comparison, the diameter of a typical galaxy is 30,000 light-years (9,198 parsecs), and the typical distance between two neighboring galaxies is 3 million light-years (919.8 kiloparsecs). As an example, the Milky Way is roughly 100,000–180,000 light-years in diameter, and the nearest sister galaxy to the Milky Way, the Andromeda Galaxy, is located roughly 2.5 million light-years away.
Because humans cannot observe space beyond the edge of the observable universe, it is unknown whether the size of the universe in its totality is finite or infinite. Estimates suggest that the whole universe, if finite, must be more than 250 times larger than a Hubble sphere. Some disputed estimates for the total size of the universe, if finite, reach as high as 10^(10^(10^122)) megaparsecs, as implied by a suggested resolution of the No-Boundary Proposal.
Age and expansion
Assuming that the Lambda-CDM model is correct, the measurements of the parameters using a variety of techniques by numerous experiments yield a best value of the age of the universe at 13.799 ± 0.021 billion years, as of 2015.
Over time, the universe and its contents have evolved. For example, the relative population of quasars and galaxies has changed and the universe has expanded. This expansion is inferred from the observation that the light from distant galaxies has been redshifted, which implies that the galaxies are receding from us. Analyses of Type Ia supernovae indicate that the expansion is accelerating.
The more matter there is in the universe, the stronger the mutual gravitational pull of the matter. If the universe were too dense then it would re-collapse into a gravitational singularity. However, if the universe contained too little matter then the self-gravity would be too weak for astronomical structures, like galaxies or planets, to form. Since the Big Bang, the universe has expanded monotonically. Perhaps unsurprisingly, our universe has just the right mass–energy density, equivalent to about 5 protons per cubic meter, which has allowed it to expand for the last 13.8 billion years, giving time to form the universe as observed today.
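The quoted figure follows from the critical density of a flat universe; assuming H₀ ≈ 67.7 km/s/Mpc ≈ 2.19 × 10⁻¹⁸ s⁻¹ (a rounded value used here for illustration):

\[ \rho_{c} = \frac{3H_{0}^{2}}{8\pi G} \approx \frac{3\,(2.19\times10^{-18}\ \mathrm{s^{-1}})^{2}}{8\pi\,(6.674\times10^{-11}\ \mathrm{m^{3}\,kg^{-1}\,s^{-2}})} \approx 8.6\times10^{-27}\ \mathrm{kg/m^{3}} \]

Dividing by the proton mass, 1.67 × 10⁻²⁷ kg, gives roughly five protons per cubic meter, as stated.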
There are dynamical forces acting on the particles in the universe which affect the expansion rate. Before 1998, it was expected that the expansion rate would be decreasing as time went on due to the influence of gravitational interactions in the universe; and thus there is an additional observable quantity in the universe called the deceleration parameter, which most cosmologists expected to be positive and related to the matter density of the universe. In 1998, the deceleration parameter was measured by two different groups to be negative, approximately −0.55, which technically implies that the second derivative of the cosmic scale factor has been positive in the last 5–6 billion years.
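The measured value can be checked against the composition quoted later in this article: for a flat universe of pressureless matter plus a cosmological constant, the present-day deceleration parameter reduces to (taking Ωₘ ≈ 0.31 and Ω_Λ ≈ 0.69)

\[ q_{0} = \frac{\Omega_{m}}{2} - \Omega_{\Lambda} \approx \frac{0.31}{2} - 0.69 \approx -0.54, \]

in good agreement with the measured −0.55.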
Spacetime
Modern physics regards events as being organized into spacetime. This idea originated with the special theory of relativity, which predicts that if one observer sees two events happening in different places at the same time, a second observer who is moving relative to the first will see those events happening at different times. The two observers will disagree on the time between the events, and they will disagree about the distance separating the events, but they will agree on the speed of light c, and they will measure the same value for the combination c²Δt² − Δx² − Δy² − Δz². The square root of the absolute value of this quantity is called the interval between the two events. The interval expresses how widely separated events are, not just in space or in time, but in the combined setting of spacetime.
The special theory of relativity cannot account for gravity. Its successor, the general theory of relativity, explains gravity by recognizing that spacetime is not fixed but instead dynamical. In general relativity, gravitational force is reimagined as curvature of spacetime. A curved path like an orbit is not the result of a force deflecting a body from an ideal straight-line path, but rather the body's attempt to fall freely through a background that is itself curved by the presence of other masses. A remark by John Archibald Wheeler that has become proverbial among physicists summarizes the theory: "Spacetime tells matter how to move; matter tells spacetime how to curve", and therefore there is no point in considering one without the other. The Newtonian theory of gravity is a good approximation to the predictions of general relativity when gravitational effects are weak and objects are moving slowly compared to the speed of light.
The relation between matter distribution and spacetime curvature is given by the Einstein field equations, which require tensor calculus to express. The universe appears to be a smooth spacetime continuum consisting of three spatial dimensions and one temporal (time) dimension. Therefore, an event in the spacetime of the physical universe can be identified by a set of four coordinates: (x, y, z, t). On average, space is observed to be very nearly flat (with a curvature close to zero), meaning that Euclidean geometry is empirically true with high accuracy throughout most of the universe. Spacetime also appears to have a simply connected topology, in analogy with a sphere, at least on the length scale of the observable universe. However, present observations cannot exclude the possibilities that the universe has more dimensions (which is postulated by theories such as string theory) and that its spacetime may have a multiply connected global topology, in analogy with the cylindrical or toroidal topologies of two-dimensional spaces.
Shape
General relativity describes how spacetime is curved and bent by mass and energy (gravity). The topology or geometry of the universe includes both local geometry in the observable universe and global geometry. Cosmologists often work with a given space-like slice of spacetime called the comoving coordinates. The section of spacetime which can be observed is the backward light cone, which delimits the cosmological horizon. The cosmological horizon, also called the particle horizon or the light horizon, is the maximum distance from which particles can have traveled to the observer in the age of the universe. This horizon represents the boundary between the observable and the unobservable regions of the universe.
An important parameter determining the future evolution of the universe is the density parameter, Omega (Ω), defined as the average matter density of the universe divided by a critical value of that density. This selects one of three possible geometries, depending on whether Ω is equal to, less than, or greater than 1. These are called, respectively, the flat, open and closed universes.
Observations, including the Cosmic Background Explorer (COBE), Wilkinson Microwave Anisotropy Probe (WMAP), and Planck maps of the CMB, suggest that the universe is infinite in extent with a finite age, as described by the Friedmann–Lemaître–Robertson–Walker (FLRW) models. These FLRW models thus support inflationary models and the standard model of cosmology, describing a flat, homogeneous universe presently dominated by dark matter and dark energy.
Support of life
The fine-tuned universe hypothesis is the proposition that the conditions that allow the existence of observable life in the universe can only occur when certain universal fundamental physical constants lie within a very narrow range of values. According to this hypothesis, if any of several fundamental constants were only slightly different, the universe would have been unlikely to be conducive to the establishment and development of matter, astronomical structures, elemental diversity, or life as it is understood. Whether this is true, and whether that question is even logically meaningful to ask, are subjects of much debate. The proposition is discussed among philosophers, scientists, theologians, and proponents of creationism.
Composition
The universe is composed almost completely of dark energy, dark matter, and ordinary matter. Other contents are electromagnetic radiation (estimated to constitute from 0.005% to close to 0.01% of the total mass–energy of the universe) and antimatter.
The proportions of all types of matter and energy have changed over the history of the universe. The total amount of electromagnetic radiation generated within the universe has decreased by 1/2 in the past 2 billion years. Today, ordinary matter, which includes atoms, stars, galaxies, and life, accounts for only 4.9% of the contents of the universe. The present overall density of this type of matter is very low, roughly 4.5 × 10⁻³¹ grams per cubic centimeter, corresponding to a density of the order of only one proton for every four cubic meters of volume. The nature of both dark energy and dark matter is unknown. Dark matter, a mysterious form of matter that has not yet been identified, accounts for 26.8% of the cosmic contents. Dark energy, which is the energy of empty space and is causing the expansion of the universe to accelerate, accounts for the remaining 68.3% of the contents.
Matter, dark matter, and dark energy are distributed homogeneously throughout the universe over length scales longer than 300 million light-years (ly) or so. However, over shorter length-scales, matter tends to clump hierarchically; many atoms are condensed into stars, most stars into galaxies, most galaxies into clusters, superclusters and, finally, large-scale galactic filaments. The observable universe contains as many as an estimated 2 trillion galaxies and, overall, as many as an estimated 10²⁴ stars – more stars (and earth-like planets) than all the grains of beach sand on planet Earth; but less than the total number of atoms estimated in the universe as 10⁸²; and the estimated total number of stars in an inflationary universe (observed and unobserved), as 10¹⁰⁰. Typical galaxies range from dwarfs with as few as ten million (10⁷) stars up to giants with one trillion (10¹²) stars. Between the larger structures are voids, which are typically 10–150 Mpc (33 million–490 million ly) in diameter. The Milky Way is in the Local Group of galaxies, which in turn is in the Laniakea Supercluster. This supercluster spans over 500 million light-years, while the Local Group spans over 10 million light-years. The universe also has vast regions of relative emptiness; the largest known void measures 1.8 billion ly (550 Mpc) across.
The observable universe is isotropic on scales significantly larger than superclusters, meaning that the statistical properties of the universe are the same in all directions as observed from Earth. The universe is bathed in highly isotropic microwave radiation that corresponds to a thermal equilibrium blackbody spectrum of roughly 2.72548 kelvins. The hypothesis that the large-scale universe is homogeneous and isotropic is known as the cosmological principle. A universe that is both homogeneous and isotropic looks the same from all vantage points and has no center.
Dark energy
An explanation for why the expansion of the universe is accelerating remains elusive. It is often attributed to the gravitational influence of "dark energy", an unknown form of energy that is hypothesized to permeate space. On a mass–energy equivalence basis, the density of dark energy (~7 × 10⁻³⁰ g/cm³) is much less than the density of ordinary matter or dark matter within galaxies. However, in the present dark-energy era, it dominates the mass–energy of the universe because it is uniform across space.
Two proposed forms for dark energy are the cosmological constant, a constant energy density filling space homogeneously, and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space while still permeating them enough to cause the observed rate of expansion. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to vacuum energy.
Dark matter
Dark matter is a hypothetical kind of matter that is invisible to the entire electromagnetic spectrum, but which accounts for most of the matter in the universe. The existence and properties of dark matter are inferred from its gravitational effects on visible matter, radiation, and the large-scale structure of the universe. Other than neutrinos, a form of hot dark matter, dark matter has not been detected directly, making it one of the greatest mysteries in modern astrophysics. Dark matter neither emits nor absorbs light or any other electromagnetic radiation at any significant level. Dark matter is estimated to constitute 26.8% of the total mass–energy and 84.5% of the total matter in the universe.
Ordinary matter
The remaining 4.9% of the mass–energy of the universe is ordinary matter, that is, atoms, ions, electrons and the objects they form. This matter includes stars, which produce nearly all of the light we see from galaxies, as well as interstellar gas in the interstellar and intergalactic media, planets, and all the objects from everyday life that we can bump into, touch or squeeze. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10 percent of the ordinary matter contribution to the mass–energy density of the universe.
Ordinary matter commonly exists in four states (or phases): solid, liquid, gas, and plasma. However, advances in experimental techniques have revealed other previously theoretical phases, such as Bose–Einstein condensates and fermionic condensates. Ordinary matter is composed of two types of elementary particles: quarks and leptons. For example, the proton is formed of two up quarks and one down quark; the neutron is formed of two down quarks and one up quark; and the electron is a kind of lepton. An atom consists of an atomic nucleus, made up of protons and neutrons (both of which are baryons), and electrons that orbit the nucleus.
Soon after the Big Bang, primordial protons and neutrons formed from the quark–gluon plasma of the early universe as it cooled below two trillion degrees. A few minutes later, in a process known as Big Bang nucleosynthesis, nuclei formed from the primordial protons and neutrons. This nucleosynthesis formed lighter elements, those with small atomic numbers up to lithium and beryllium, but the abundance of heavier elements dropped off sharply with increasing atomic number. Some boron may have been formed at this time, but the next heavier element, carbon, was not formed in significant amounts. Big Bang nucleosynthesis shut down after about 20 minutes due to the rapid drop in temperature and density of the expanding universe. Subsequent formation of heavier elements resulted from stellar nucleosynthesis and supernova nucleosynthesis.
Particles
Ordinary matter and the forces that act on matter can be described in terms of elementary particles. These particles are sometimes described as being fundamental, since they have an unknown substructure, and it is unknown whether or not they are composed of smaller and even more fundamental particles. In most contemporary models they are thought of as points in space. All elementary particles are currently best explained by quantum mechanics and exhibit wave–particle duality: their behavior has both particle-like and wave-like aspects, with different features dominating under different circumstances.
Of central importance is the Standard Model, a theory that is concerned with electromagnetic interactions and the weak and strong nuclear interactions. The Standard Model is supported by the experimental confirmation of the existence of particles that compose matter: quarks and leptons, and their corresponding "antimatter" duals, as well as the force particles that mediate interactions: the photon, the W and Z bosons, and the gluon. The Standard Model predicted the existence of the recently discovered Higgs boson, a particle that is a manifestation of a field within the universe that can endow particles with mass. Because of its success in explaining a wide variety of experimental results, the Standard Model is sometimes regarded as a "theory of almost everything". The Standard Model does not, however, accommodate gravity. A true force–particle "theory of everything" has not been attained.
Hadrons
A hadron is a composite particle made of quarks held together by the strong force. Hadrons are categorized into two families: baryons (such as protons and neutrons) made of three quarks, and mesons (such as pions) made of one quark and one antiquark. Of the hadrons, protons are stable, and neutrons bound within atomic nuclei are stable. Other hadrons are unstable under ordinary conditions and are thus insignificant constituents of the modern universe.
From approximately 10⁻⁶ seconds after the Big Bang, during a period known as the hadron epoch, the temperature of the universe had fallen sufficiently to allow quarks to bind together into hadrons, and the mass of the universe was dominated by hadrons. Initially, the temperature was high enough to allow the formation of hadron–anti-hadron pairs, which kept matter and antimatter in thermal equilibrium. However, as the temperature of the universe continued to fall, hadron–anti-hadron pairs were no longer produced. Most of the hadrons and anti-hadrons were then eliminated in particle–antiparticle annihilation reactions, leaving a small residual of hadrons by the time the universe was about one second old.
Leptons
A lepton is an elementary, half-integer spin particle that does not undergo strong interactions but is subject to the Pauli exclusion principle; no two leptons of the same species can be in exactly the same state at the same time. Two main classes of leptons exist: charged leptons (also known as the electron-like leptons), and neutral leptons (better known as neutrinos). Electrons are stable and the most common charged lepton in the universe, whereas muons and taus are unstable particles that quickly decay after being produced in high energy collisions, such as those involving cosmic rays or carried out in particle accelerators. Charged leptons can combine with other particles to form various composite particles such as atoms and positronium. The electron governs nearly all of chemistry, as it is found in atoms and is directly tied to all chemical properties. Neutrinos rarely interact with anything, and are consequently rarely observed. Neutrinos stream throughout the universe but rarely interact with normal matter.
The lepton epoch was the period in the evolution of the early universe in which the leptons dominated the mass of the universe. It started roughly 1 second after the Big Bang, after the majority of hadrons and anti-hadrons annihilated each other at the end of the hadron epoch. During the lepton epoch the temperature of the universe was still high enough to create lepton–anti-lepton pairs, so leptons and anti-leptons were in thermal equilibrium. Approximately 10 seconds after the Big Bang, the temperature of the universe had fallen to the point where lepton–anti-lepton pairs were no longer created. Most leptons and anti-leptons were then eliminated in annihilation reactions, leaving a small residue of leptons. The mass of the universe was then dominated by photons as it entered the following photon epoch.
Photons
A photon is the quantum of light and all other forms of electromagnetic radiation. It is the carrier for the electromagnetic force. The effects of this force are easily observable at the microscopic and at the macroscopic level because the photon has zero rest mass; this allows long distance interactions.
The photon epoch started after most leptons and anti-leptons were annihilated at the end of the lepton epoch, about 10 seconds after the Big Bang. Atomic nuclei were created in the process of nucleosynthesis which occurred during the first few minutes of the photon epoch. For the remainder of the photon epoch the universe contained a hot dense plasma of nuclei, electrons and photons. About 380,000 years after the Big Bang, the temperature of the universe fell to the point where nuclei could combine with electrons to create neutral atoms. As a result, photons no longer interacted frequently with matter and the universe became transparent. The highly redshifted photons from this period form the cosmic microwave background. Tiny variations in the temperature of the CMB correspond to variations in the density of the universe that were the early "seeds" from which all subsequent structure formation took place.
Habitability
The prevalence of life in the universe has been a recurring point of investigation in astronomy and astrobiology. It is the subject of the Drake equation and of differing views ranging from the Fermi paradox, the situation of not having found any signs of extraterrestrial life, to arguments for a biophysical cosmology, a view of life as inherent to the physical cosmology of the universe.
Cosmological models
Model of the universe based on general relativity
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. It is the basis of current cosmological models of the universe. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present.
The relation is specified by the Einstein field equations, a system of partial differential equations. In general relativity, the distribution of matter and energy determines the geometry of spacetime, which in turn describes the acceleration of matter. Therefore, solutions of the Einstein field equations describe the evolution of the universe. Combined with measurements of the amount, type, and distribution of matter in the universe, the equations of general relativity describe the evolution of the universe over time.
With the assumption of the cosmological principle that the universe is homogeneous and isotropic everywhere, a specific solution of the field equations that describes the universe is the metric tensor called the Friedmann–Lemaître–Robertson–Walker metric,

\[ ds^{2} = -c^{2}\,dt^{2} + R(t)^{2}\left[\frac{dr^{2}}{1-kr^{2}} + r^{2}\left(d\theta^{2} + \sin^{2}\theta\,d\varphi^{2}\right)\right], \]
where (r, θ, φ) correspond to a spherical coordinate system. This metric has only two undetermined parameters. An overall dimensionless length scale factor R describes the size scale of the universe as a function of time (an increase in R is the expansion of the universe), and a curvature index k describes the geometry. The index k is defined so that it can take only one of three values: 0, corresponding to flat Euclidean geometry; 1, corresponding to a space of positive curvature; or −1, corresponding to a space of negative curvature. The value of R as a function of time t depends upon k and the cosmological constant Λ. The cosmological constant represents the energy density of the vacuum of space and could be related to dark energy. The equation describing how R varies with time is known as the Friedmann equation after its inventor, Alexander Friedmann.
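In the notation above, with ρ the total mass density and the cosmological constant written out separately, the Friedmann equation takes the form:

\[ \left(\frac{\dot{R}}{R}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{kc^{2}}{R^{2}} + \frac{\Lambda c^{2}}{3} \]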
The solutions for R(t) depend on k and Λ, but some qualitative features of such solutions are general. First and most importantly, the length scale R of the universe can remain constant only if the universe is perfectly isotropic with positive curvature (k = 1) and has one precise value of density everywhere, as first noted by Albert Einstein.
Second, all solutions suggest that there was a gravitational singularity in the past, when R went to zero and matter and energy were infinitely dense. It may seem that this conclusion is uncertain because it is based on the questionable assumptions of perfect homogeneity and isotropy (the cosmological principle) and that only the gravitational interaction is significant. However, the Penrose–Hawking singularity theorems show that a singularity should exist for very general conditions. Hence, according to Einstein's field equations, R grew rapidly from an unimaginably hot, dense state that existed immediately following this singularity (when R had a small, finite value); this is the essence of the Big Bang model of the universe. Understanding the singularity of the Big Bang likely requires a quantum theory of gravity, which has not yet been formulated.
Third, the curvature index k determines the sign of the curvature of constant-time spatial surfaces averaged over sufficiently large length scales (greater than about a billion light-years). If k = 1, the curvature is positive and the universe has a finite volume. A universe with positive curvature is often visualized as a three-dimensional sphere embedded in a four-dimensional space. Conversely, if k is zero or negative, the universe has an infinite volume. It may seem counter-intuitive that an infinite and yet infinitely dense universe could be created in a single instant when R = 0, but exactly that is predicted mathematically when k is nonpositive and the cosmological principle is satisfied. By analogy, an infinite plane has zero curvature but infinite area, whereas an infinite cylinder is finite in one direction and a torus is finite in both.
The ultimate fate of the universe is still unknown because it depends critically on the curvature index k and the cosmological constant Λ. If the universe were sufficiently dense, k would equal +1, meaning that its average curvature throughout is positive and the universe will eventually recollapse in a Big Crunch, possibly starting a new universe in a Big Bounce. Conversely, if the universe were insufficiently dense, k would equal 0 or −1 and the universe would expand forever, cooling off and eventually reaching the Big Freeze and the heat death of the universe. Modern data suggests that the expansion of the universe is accelerating; if this acceleration is sufficiently rapid, the universe may eventually reach a Big Rip. Observationally, the universe appears to be flat (k = 0), with an overall density that is very close to the critical value between recollapse and eternal expansion.
Multiverse hypotheses
Some speculative theories have proposed that our universe is but one of a set of disconnected universes, collectively denoted as the multiverse, challenging or enhancing more limited definitions of the universe. Max Tegmark developed a four-part classification scheme for the different types of multiverses that scientists have suggested in response to various problems in physics. An example of such multiverses is the one resulting from the chaotic inflation model of the early universe.
Another is the multiverse resulting from the many-worlds interpretation of quantum mechanics. In this interpretation, parallel worlds are generated in a manner similar to quantum superposition and decoherence, with all states of the wave functions being realized in separate worlds. Effectively, in the many-worlds interpretation the multiverse evolves as a universal wavefunction. If the Big Bang that created our multiverse created an ensemble of multiverses, the wave function of the ensemble would be entangled in this sense. Whether scientifically meaningful probabilities can be extracted from this picture has been and continues to be a topic of much debate, and multiple versions of the many-worlds interpretation exist. The subject of the interpretation of quantum mechanics is in general marked by disagreement.
The least controversial, but still highly disputed, category of multiverse in Tegmark's scheme is Level I. The multiverses of this level are composed of distant spacetime events "in our own universe". Tegmark and others have argued that, if space is infinite, or sufficiently large and uniform, identical instances of the history of Earth's entire Hubble volume occur every so often, simply by chance. Tegmark calculated that our nearest so-called doppelgänger is 10^(10^115) metres away from us (a double exponential function larger than a googolplex). However, the arguments used are speculative.
It is possible to conceive of disconnected spacetimes, each existing but unable to interact with one another. An easily visualized metaphor of this concept is a group of separate soap bubbles, in which observers living on one soap bubble cannot interact with those on other soap bubbles, even in principle. According to one common terminology, each "soap bubble" of spacetime is denoted as a universe, whereas humans' particular spacetime is denoted as the universe, just as humans call Earth's moon the Moon. The entire collection of these separate spacetimes is denoted as the multiverse.
With this terminology, different universes are not causally connected to each other. In principle, the other unconnected universes may have different dimensionalities and topologies of spacetime, different forms of matter and energy, and different physical laws and physical constants, although such possibilities are purely speculative. Others consider each of several bubbles created as part of chaotic inflation to be separate universes, though in this model these universes all share a causal origin.
Historical conceptions
Historically, there have been many ideas of the cosmos (cosmologies) and its origin (cosmogonies). Theories of an impersonal universe governed by physical laws were first proposed by the Greeks and Indians. Ancient Chinese philosophy encompassed the notion of the universe including both all of space and all of time. Over the centuries, improvements in astronomical observations and theories of motion and gravitation led to ever more accurate descriptions of the universe. The modern era of cosmology began with Albert Einstein's 1915 general theory of relativity, which made it possible to quantitatively predict the origin, evolution, and conclusion of the universe as a whole. Most modern, accepted theories of cosmology are based on general relativity and, more specifically, the predicted Big Bang.
Mythologies
Many cultures have stories describing the origin of the world and universe. Cultures generally regard these stories as having some truth. There are, however, many differing beliefs in how these stories apply amongst those believing in a supernatural origin, ranging from a god directly creating the universe as it is now to a god just setting the "wheels in motion" (for example, via mechanisms such as the Big Bang and evolution).
Ethnologists and anthropologists who study myths have developed various classification schemes for the various themes that appear in creation stories. For example, in one type of story, the world is born from a world egg; such stories include the Finnish epic poem Kalevala, the Chinese story of Pangu or the Indian Brahmanda Purana. In related stories, the universe is created by a single entity emanating or producing something by him- or herself, as in the Tibetan Buddhism concept of Adi-Buddha, the ancient Greek story of Gaia (Mother Earth), the Aztec goddess Coatlicue myth, the ancient Egyptian god Atum story, and the Judeo-Christian Genesis creation narrative in which the Abrahamic God created the universe. In another type of story, the universe is created from the union of male and female deities, as in the Maori story of Rangi and Papa. In other stories, the universe is created by crafting it from pre-existing materials, such as the corpse of a dead god—as from Tiamat in the Babylonian epic Enuma Elish or from the giant Ymir in Norse mythology—or from chaotic materials, as in Izanagi and Izanami in Japanese mythology. In other stories, the universe emanates from fundamental principles, such as Brahman and Prakrti, and the creation myth of the Serers.
Philosophical models
The pre-Socratic Greek philosophers and Indian philosophers developed some of the earliest philosophical concepts of the universe. The earliest Greek philosophers noted that appearances can be deceiving, and sought to understand the underlying reality behind the appearances. In particular, they noted the ability of matter to change forms (e.g., ice to water to steam) and several philosophers proposed that all the physical materials in the world are different forms of a single primordial material, or arche. The first to do so was Thales, who proposed this material to be water. Thales' student, Anaximander, proposed that everything came from the limitless apeiron. Anaximenes proposed the primordial material to be air on account of its perceived attractive and repulsive qualities that cause the arche to condense or dissociate into different forms. Anaxagoras proposed the principle of Nous (Mind), while Heraclitus proposed fire (and spoke of logos). Empedocles proposed the elements to be earth, water, air and fire. His four-element model became very popular. Like Pythagoras, Plato believed that all things were composed of number, with Empedocles' elements taking the form of the Platonic solids. Leucippus and his student Democritus, and later philosophers such as Epicurus, proposed that the universe is composed of indivisible atoms moving through a void (vacuum), although Aristotle did not believe that to be feasible because air, like water, offers resistance to motion. Air will immediately rush in to fill a void, and moreover, without resistance, it would do so indefinitely fast.
Although Heraclitus argued for eternal change, his contemporary Parmenides emphasized changelessness. Parmenides' poem On Nature has been read as saying that all change is an illusion, that the true underlying reality is eternally unchanging and of a single nature, or at least that the essential feature of each thing that exists must exist eternally, without origin, change, or end. His student Zeno of Elea challenged everyday ideas about motion with several famous paradoxes. Aristotle responded to these paradoxes by developing the notion of a potential countable infinity, as well as the infinitely divisible continuum.
The Indian philosopher Kanada, founder of the Vaisheshika school, developed a notion of atomism and proposed that light and heat were varieties of the same substance. In the 5th century AD, the Buddhist atomist philosopher Dignāga proposed atoms to be point-sized, durationless, and made of energy. These atomists denied the existence of substantial matter and proposed that movement consisted of momentary flashes of a stream of energy.
The notion of temporal finitism was inspired by the doctrine of creation shared by the three Abrahamic religions: Judaism, Christianity and Islam. The Christian philosopher John Philoponus presented philosophical arguments against the ancient Greek notion of an infinite past and future. Philoponus' arguments against an infinite past were used by the early Muslim philosopher Al-Kindi (Alkindus), the Jewish philosopher Saadia Gaon (Saadia ben Joseph), and the Muslim theologian Al-Ghazali (Algazel).
Pantheism is the philosophical religious belief that the universe itself is identical to divinity and a supreme being or entity. The physical universe is thus understood as an all-encompassing, immanent deity. The term 'pantheist' designates one who holds both that everything constitutes a unity and that this unity is divine, consisting of an all-encompassing, manifested god or goddess.
Astronomical concepts
The earliest written records of identifiable predecessors to modern astronomy come from Ancient Egypt and Mesopotamia from around 3000 to 1200 BCE. Babylonian astronomers of the 7th century BCE viewed the world as a flat disk surrounded by the ocean.
Later Greek philosophers, observing the motions of the heavenly bodies, were concerned with developing models of the universe based more profoundly on empirical evidence. The first coherent model was proposed by Eudoxus of Cnidus, a student of Plato who followed Plato's idea that heavenly motions had to be circular. In order to account for the known complications of the planets' motions, particularly retrograde movement, Eudoxus' model included 27 different celestial spheres: four for each of the planets visible to the naked eye, three each for the Sun and the Moon, and one for the stars. All of these spheres were centered on the Earth, which remained motionless while they rotated eternally. Aristotle elaborated upon this model, increasing the number of spheres to 55 in order to account for further details of planetary motion. For Aristotle, normal matter was entirely contained within the terrestrial sphere, and it obeyed fundamentally different rules from heavenly material.
The post-Aristotle treatise De Mundo (of uncertain authorship and date) stated, "Five elements, situated in spheres in five regions, the less being in each case surrounded by the greater—namely, earth surrounded by water, water by air, air by fire, and fire by ether—make up the whole universe". This model was also refined by Callippus, and after concentric spheres were abandoned, it was brought into nearly perfect agreement with astronomical observations by Ptolemy. The success of such a model is largely due to the mathematical fact that any sufficiently regular function (such as the position of a planet over time) can be decomposed into a sum of circular functions (the Fourier modes). Other Greek scientists, such as the Pythagorean philosopher Philolaus, postulated (according to Stobaeus' account) that at the center of the universe was a "central fire" around which the Earth, Sun, Moon and planets revolved in uniform circular motion.
The Greek astronomer Aristarchus of Samos was the first known individual to propose a heliocentric model of the universe. Though the original text has been lost, a reference in Archimedes' book The Sand Reckoner describes Aristarchus's heliocentric model. Archimedes wrote:
You, King Gelon, are aware the universe is the name given by most astronomers to the sphere the center of which is the center of the Earth, while its radius is equal to the straight line between the center of the Sun and the center of the Earth. This is the common account as you have heard from astronomers. But Aristarchus has brought out a book consisting of certain hypotheses, wherein it appears, as a consequence of the assumptions made, that the universe is many times greater than the universe just mentioned. His hypotheses are that the fixed stars and the Sun remain unmoved, that the Earth revolves about the Sun on the circumference of a circle, the Sun lying in the middle of the orbit, and that the sphere of fixed stars, situated about the same center as the Sun, is so great that the circle in which he supposes the Earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface.
Aristarchus thus believed the stars to be very far away, and saw this as the reason why stellar parallax had not been observed, that is, the stars had not been observed to move relative to each other as the Earth moved around the Sun. The stars are in fact much farther away than the distance that was generally assumed in ancient times, which is why stellar parallax is only detectable with precision instruments. The geocentric model, consistent with planetary parallax, was assumed to be the explanation for the unobservability of stellar parallax.
The only other astronomer from antiquity known by name who supported Aristarchus's heliocentric model was Seleucus of Seleucia, a Hellenistic astronomer who lived a century after Aristarchus. According to Plutarch, Seleucus was the first to prove the heliocentric system through reasoning, but it is not known what arguments he used. Seleucus' arguments for a heliocentric cosmology were probably related to the phenomenon of tides. According to Strabo (1.1.9), Seleucus was the first to state that the tides are due to the attraction of the Moon, and that the height of the tides depends on the Moon's position relative to the Sun. Alternatively, he may have proved heliocentricity by determining the constants of a geometric model for it, and by developing methods to compute planetary positions using this model, similar to Nicolaus Copernicus in the 16th century. During the Middle Ages, heliocentric models were also proposed by the Persian astronomers Albumasar and Al-Sijzi.
The Aristotelian model was accepted in the Western world for roughly two millennia, until Copernicus revived Aristarchus's perspective that the astronomical data could be explained more plausibly if the Earth rotated on its axis and if the Sun were placed at the center of the universe.
As noted by Copernicus, the notion that the Earth rotates is very old, dating at least to Philolaus, Heraclides Ponticus and Ecphantus the Pythagorean. Roughly a century before Copernicus, the Christian scholar Nicholas of Cusa also proposed that the Earth rotates on its axis in his book, On Learned Ignorance (1440). Al-Sijzi also proposed that the Earth rotates on its axis. Empirical evidence for the Earth's rotation on its axis, using the phenomenon of comets, was given by Tusi (1201–1274) and Ali Qushji (1403–1474).
This cosmology was accepted by Isaac Newton, Christiaan Huygens and later scientists. Newton demonstrated that the same laws of motion and gravity apply to earthly and to celestial matter, making Aristotle's division between the two obsolete. Edmond Halley (1720) and Jean-Philippe de Chéseaux (1744) noted independently that the assumption of an infinite space filled uniformly with stars would lead to the prediction that the nighttime sky would be as bright as the Sun itself; this became known as Olbers' paradox in the 19th century. Newton believed that an infinite space uniformly filled with matter would cause infinite forces and instabilities causing the matter to be crushed inwards under its own gravity. This instability was clarified in 1902 by the Jeans instability criterion. One solution to these paradoxes is the Charlier universe, in which the matter is arranged hierarchically (systems of orbiting bodies that are themselves orbiting in a larger system, ad infinitum) in a fractal way such that the universe has a negligibly small overall density; such a cosmological model had also been proposed earlier in 1761 by Johann Heinrich Lambert.
Deep space astronomy
During the 18th century, Immanuel Kant speculated that nebulae could be entire galaxies separate from the Milky Way, and in 1850, Alexander von Humboldt called these separate galaxies Weltinseln, or "world islands", a term that later developed into "island universes". In 1919, when the Hooker Telescope was completed, the prevailing view was that the universe consisted entirely of the Milky Way Galaxy. Using the Hooker Telescope, Edwin Hubble identified Cepheid variables in several spiral nebulae and in 1922–1923 proved conclusively that the Andromeda Nebula and Triangulum, among others, were entire galaxies outside our own, thus proving that the universe consists of a multitude of galaxies. With this work Hubble formulated the Hubble constant, which for the first time allowed a calculation of the age of the universe and the size of the observable universe. These estimates became increasingly precise with better measurements, starting at 2 billion years and 280 million light-years, until data from the Hubble Space Telescope in 2006 allowed a very accurate calculation of the age of the universe and the size of the observable universe.
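As a rough illustration of how a Hubble constant translates into an expansion age, the naive estimate is t ≈ 1/H0. The sketch below only performs the unit conversion; the two H0 values are illustrative of Hubble's original scale of estimate and a modern-scale one.

```python
# Naive expansion age t ~ 1/H0; the work is entirely in the unit conversion.
MPC_KM = 3.0857e19      # kilometres per megaparsec
YEAR_S = 3.156e7        # seconds per year

def hubble_age_gyr(H0_km_s_mpc):
    H0_per_s = H0_km_s_mpc / MPC_KM       # convert H0 to 1/s
    return 1.0 / H0_per_s / YEAR_S / 1e9  # age in billions of years

print(hubble_age_gyr(500.0))   # ~2 Gyr, the scale of the earliest estimates
print(hubble_age_gyr(70.0))    # ~14 Gyr for a modern-scale value
```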
The modern era of physical cosmology began in 1917, when Albert Einstein first applied his general theory of relativity to model the structure and dynamics of the universe. The discoveries of this era, and the questions that remain unanswered, are outlined in the sections above.
| Physical sciences | Science and medicine | null |
31883 | https://en.wikipedia.org/wiki/Uncertainty%20principle | Uncertainty principle | The uncertainty principle, also known as Heisenberg's indeterminacy principle, is a fundamental concept in quantum mechanics. It states that there is a limit to the precision with which certain pairs of physical properties, such as position and momentum, can be simultaneously known. In other words, the more accurately one property is measured, the less accurately the other property can be known.
More formally, the uncertainty principle is any of a variety of mathematical inequalities asserting a fundamental limit to the product of the accuracy of certain related pairs of measurements on a quantum system, such as position, x, and momentum, p. Such paired-variables are known as complementary variables or canonically conjugate variables.
First introduced in 1927 by German physicist Werner Heisenberg, the formal inequality relating the standard deviation of position σ_x and the standard deviation of momentum σ_p was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928:

σ_x σ_p ≥ ħ/2,

where ħ is the reduced Planck constant.
The quintessentially quantum mechanical uncertainty principle comes in many forms other than position–momentum. The energy–time relationship is widely used to relate quantum state lifetime to measured energy widths but its formal derivation is fraught with confusing issues about the nature of time. The basic principle has been extended in numerous directions; it must be considered in many kinds of fundamental physical measurements.
Position–momentum
It is vital to illustrate how the principle applies to relatively intelligible physical situations since it is indiscernible on the macroscopic scales that humans experience. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily.
Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). A nonzero function and its Fourier transform cannot both be sharply localized at the same time. A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: A pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation p = ħk, where k is the wavenumber.
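This Fourier tradeoff is easy to verify numerically. The following minimal sketch (in units where ħ = 1) builds Gaussian wave packets of several widths and checks that σ_x·σ_p stays at the Kennard bound of 1/2, which a Gaussian saturates; the grid sizes and widths are illustrative choices.

```python
import numpy as np

# Sketch (hbar = 1): Gaussian wave packets and their FFTs. Narrower in x
# means wider in p, with sigma_x * sigma_p pinned at the Kennard bound 1/2
# (a Gaussian saturates it), up to discretization error.
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
p = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # momentum grid (p = k for hbar = 1)

def spread(grid, weights):
    w = weights / weights.sum()               # normalized probability weights
    mean = (grid * w).sum()
    return np.sqrt(((grid - mean) ** 2 * w).sum())

for s in [0.5, 1.0, 2.0]:
    psi = np.exp(-x**2 / (4 * s**2))          # position-space Gaussian
    phi = np.fft.fft(psi)                     # momentum-space amplitudes
    sx = spread(x, np.abs(psi) ** 2)
    sp = spread(p, np.abs(phi) ** 2)
    print(f"sigma_x={sx:.3f}, sigma_p={sp:.3f}, product={sx * sp:.3f}")
```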
In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable A is performed, then the system is in a particular eigenstate of that observable. However, the particular eigenstate of the observable A need not be an eigenstate of another observable B: if it is not, then it does not have a unique associated measurement value for B, as the system is not in an eigenstate of that observable.
Visualization
The uncertainty principle can be visualized using the position- and momentum-space wavefunctions for one spinless particle with mass in one dimension.
The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized so the possible momentum components the particle could have are more widespread. Conversely, the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread. These wavefunctions are Fourier transforms of each other: mathematically, the uncertainty principle expresses the relationship between conjugate variables in the transform.
Wave mechanics interpretation
According to the de Broglie hypothesis, every object in the universe is associated with a wave. Thus every object, from an elementary particle to atoms, molecules and on up to planets and beyond, is subject to the uncertainty principle.
The time-independent wave function of a single-mode plane wave of wavenumber k0 or momentum p0 is

ψ(x) ∝ e^{i k0 x} = e^{i p0 x / ħ}.
The Born rule states that this should be interpreted as a probability density amplitude function in the sense that the probability of finding the particle between a and b is

P[a ≤ X ≤ b] = ∫ from a to b of |ψ(x)|² dx.
In the case of the single-mode plane wave, |ψ(x)|² is uniform over the wave's extent. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet.
On the other hand, consider a wave function that is a sum of many waves, which we may write as

ψ(x) ∝ Σ_n A_n e^{i p_n x / ħ},

where A_n represents the relative contribution of the mode p_n to the overall total. With the addition of many plane waves, the wave packet can become more localized. We may take this a step further to the continuum limit, where the wave function is an integral over all possible modes

ψ(x) = (1 / √(2πħ)) ∫ φ(p) e^{i p x / ħ} dp,

with φ(p) representing the amplitude of these modes; φ(p) is called the wave function in momentum space. In mathematical terms, we say that φ(p) is the Fourier transform of ψ(x) and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta.
One way to quantify the precision of the position and momentum is the standard deviation σ. Since |ψ(x)|² is a probability density function for position, we calculate its standard deviation.
The precision of the position is improved, i.e. reduced σ_x, by using many plane waves, thereby weakening the precision of the momentum, i.e. increased σ_p. Another way of stating this is that σ_x and σ_p have an inverse relationship or are at least bounded from below. This is the uncertainty principle, the exact limit of which is the Kennard bound.
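The buildup of localization from discrete momentum modes can also be checked constructively. A minimal sketch, again with ħ = 1 and with Gaussian mode amplitudes A_n as an illustrative choice (the grid and mode counts are arbitrary):

```python
import numpy as np

# Sketch (hbar = 1): superpose plane-wave modes with Gaussian amplitudes A_n.
# Widening the spread of momenta sharpens the packet in x; numerically the
# product sigma_x * sigma_p lands near the Kennard bound of 1/2.
x = np.linspace(-60.0, 60.0, 12001)
dx = x[1] - x[0]

def sigma_x_of_packet(sigma_p, n_modes=201, span=6.0):
    ps = np.linspace(-span * sigma_p, span * sigma_p, n_modes)
    amps = np.exp(-ps**2 / (4 * sigma_p**2))     # Gaussian mode amplitudes A_n
    psi = np.zeros_like(x, dtype=complex)
    for p_n, a_n in zip(ps, amps):               # explicit sum over modes
        psi += a_n * np.exp(1j * p_n * x)
    prob = np.abs(psi) ** 2
    prob /= prob.sum() * dx                      # normalize |psi|^2
    return np.sqrt((x**2 * prob).sum() * dx)     # packet is centred at x = 0

for sp in [0.1, 0.3, 1.0]:
    sx = sigma_x_of_packet(sp)
    print(f"sigma_p={sp:.1f}: sigma_x={sx:.2f}, product={sx * sp:.2f}")
```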
Proof of the Kennard inequality using wave mechanics
We are interested in the variances of position and momentum, defined as

σ_x² = ⟨x²⟩ − ⟨x⟩² and σ_p² = ⟨p²⟩ − ⟨p⟩².

Without loss of generality, we will assume that the means vanish, which just amounts to a shift of the origin of our coordinates. (A more general proof that does not make this assumption is given below.) This gives us the simpler form

σ_x² = ⟨x²⟩ and σ_p² = ⟨p²⟩.
The function ψ(x) can be interpreted as a vector in a function space. We can define an inner product for a pair of functions u(x) and v(x) in this vector space:

⟨u | v⟩ = ∫ u*(x) v(x) dx,

where the asterisk denotes the complex conjugate.
With this inner product defined, we note that the variance for position can be written as

σ_x² = ∫ x² |ψ(x)|² dx = ⟨xψ | xψ⟩.
We can repeat this for momentum by interpreting the function φ(p) as a vector, but we can also take advantage of the fact that ψ(x) and φ(p) are Fourier transforms of each other. We evaluate the inverse Fourier transform through integration by parts:
where in the integration by parts, the cancelled term vanishes because the wave function vanishes at both infinities, and the last step uses the Dirac delta function, which is valid because the cancelled term does not depend on p.
The term −iħ ∂/∂x is called the momentum operator in position space. Applying Plancherel's theorem, we see that the variance for momentum can be written as

σ_p² = ⟨g | g⟩, where g(x) = −iħ dψ(x)/dx.
The Cauchy–Schwarz inequality asserts that

σ_x² σ_p² = ⟨f | f⟩ ⟨g | g⟩ ≥ |⟨f | g⟩|², where f(x) = x ψ(x).

The modulus squared of any complex number z can be expressed as

|z|² = (Re z)² + (Im z)² ≥ (Im z)² = ((z − z*) / 2i)²;

we let z = ⟨f | g⟩ and z* = ⟨g | f⟩ and substitute these into the equation above to get

|⟨f | g⟩|² ≥ ((⟨f | g⟩ − ⟨g | f⟩) / 2i)².
All that remains is to evaluate these inner products:

⟨f | g⟩ − ⟨g | f⟩ = iħ ∫ |ψ(x)|² dx = iħ.

Plugging this into the above inequalities, we get

σ_x² σ_p² ≥ ħ²/4,

and taking the square root

σ_x σ_p ≥ ħ/2,
with equality if and only if p and x are linearly dependent. Note that the only physics involved in this proof was that ψ(x) and φ(p) are wave functions for position and momentum, which are Fourier transforms of each other. A similar result would hold for any pair of conjugate variables.
Matrix mechanics interpretation
In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators. When considering pairs of observables, an important quantity is the commutator. For a pair of operators Â and B̂, one defines their commutator as

[Â, B̂] = ÂB̂ − B̂Â.

In the case of position and momentum, the commutator is the canonical commutation relation

[x̂, p̂] = iħ.
The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let |ψ⟩ be a right eigenstate of position with a constant eigenvalue x0. By definition, this means that x̂|ψ⟩ = x0|ψ⟩. Applying the commutator to |ψ⟩ yields

[x̂, p̂] |ψ⟩ = (x̂p̂ − p̂x̂) |ψ⟩ = (x̂ − x0 Î) p̂ |ψ⟩ = iħ |ψ⟩,

where Î is the identity operator.
Suppose, for the sake of proof by contradiction, that |ψ⟩ is also a right eigenstate of momentum, with constant eigenvalue p0. If this were true, then one could write

(x̂ − x0 Î) p̂ |ψ⟩ = (x̂ − x0 Î) p0 |ψ⟩ = (x0 Î − x0 Î) p0 |ψ⟩ = 0.

On the other hand, the above canonical commutation relation requires that

[x̂, p̂] |ψ⟩ = iħ |ψ⟩ ≠ 0.
This implies that no quantum state can simultaneously be both a position and a momentum eigenstate.
When a state is measured, it is projected onto an eigenstate in the basis of the relevant observable. For example, if a particle's position is measured, then the state becomes a position eigenstate. This means that the state is not a momentum eigenstate, however, but rather it can be represented as a sum of multiple momentum basis eigenstates. In other words, the momentum must be less precise. This precision may be quantified by the standard deviations σ_x and σ_p.
As in the wave mechanics interpretation above, one sees a tradeoff between the respective precisions of the two, quantified by the uncertainty principle.
Quantum harmonic oscillator stationary states
Consider a one-dimensional quantum harmonic oscillator. It is possible to express the position and momentum operators in terms of the creation and annihilation operators:

x̂ = √(ħ/2mω) (a + a†), p̂ = i√(ħmω/2) (a† − a).

Using the standard rules for creation and annihilation operators on the energy eigenstates,

a† |n⟩ = √(n+1) |n+1⟩, a |n⟩ = √n |n−1⟩,

the variances may be computed directly,

σ_x² = (ħ/2mω)(2n + 1), σ_p² = (ħmω/2)(2n + 1).

The product of these standard deviations is then

σ_x σ_p = ħ (n + 1/2) ≥ ħ/2.

In particular, the above Kennard bound is saturated for the ground state n = 0, for which the probability density is just the normal distribution.
Quantum harmonic oscillators with Gaussian initial condition
In a quantum harmonic oscillator of characteristic angular frequency ω, place a state that is offset from the bottom of the potential by some displacement x0 as
where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the time-dependent solution. After many cancelations, the probability densities reduce to
where we have used the notation to denote a normal distribution of mean μ and variance σ2. Copying the variances above and applying trigonometric identities, we can write the product of the standard deviations as
From the relations
we can conclude the following (the rightmost equality holds only when Ω = ω):
Coherent states
A coherent state is a right eigenstate of the annihilation operator,

â |α⟩ = α |α⟩,

which may be represented in terms of Fock states as

|α⟩ = e^{−|α|²/2} Σ_{n=0}^{∞} (αⁿ / √(n!)) |n⟩.
In the picture where the coherent state is a massive particle in a quantum harmonic oscillator, the position and momentum operators may be expressed in terms of the annihilation operators in the same formulas above and used to calculate the variances,
Therefore, every coherent state saturates the Kennard bound

σ_x σ_p = ħ/2,

with position and momentum each contributing an amount √(ħ/2) in a "balanced" way. Moreover, every squeezed coherent state also saturates the Kennard bound, although the individual contributions of position and momentum need not be balanced in general.
Particle in a box
Consider a particle in a one-dimensional box of length L. The eigenfunction in position space is

ψ_n(x) = √(2/L) sin(k_n x) for 0 ≤ x ≤ L (and zero elsewhere),

and the eigenfunction in momentum space is its Fourier transform φ_n(p), where p_n = ħ k_n = nπħ/L and we have used the de Broglie relation p = ħk. The variances of x and p can be calculated explicitly:

σ_x² = (L²/12)(1 − 6/(n²π²)), σ_p² = (nπħ/L)².

The product of the standard deviations is therefore

σ_x σ_p = (ħ/2) √(n²π²/3 − 2).

For all n, the quantity √(n²π²/3 − 2) is greater than 1, so the uncertainty principle is never violated. For numerical concreteness, the smallest value occurs when n = 1, in which case

σ_x σ_p = (ħ/2) √(π²/3 − 2) ≈ 0.568 ħ.
Constant momentum
Assume a particle initially has a momentum space wave function described by a normal distribution around some constant momentum p0 according to
where we have introduced a reference scale , with describing the width of the distribution—cf. nondimensionalization. If the state is allowed to evolve in free space, then the time-dependent momentum and position space wave functions are
Since the momentum spread σ_p remains constant while ⟨p⟩ = p0, this can be interpreted as a particle moving along with constant momentum at arbitrarily high precision. On the other hand, the standard deviation of the position is
such that the uncertainty product can only increase with time as
Mathematical formalism
Starting with Kennard's derivation of position–momentum uncertainty, Howard Percy Robertson developed a formulation for arbitrary Hermitian operators Â,
expressed in terms of their standard deviation

σ_A = √(⟨Â²⟩ − ⟨Â⟩²),

where the brackets ⟨ ⟩ indicate an expectation value of the observable represented by operator Â. For a pair of operators Â and B̂, define their commutator as

[Â, B̂] = ÂB̂ − B̂Â,

and the Robertson uncertainty relation is given by

σ_A σ_B ≥ |(1/2i) ⟨[Â, B̂]⟩| = (1/2) |⟨[Â, B̂]⟩|.
Erwin Schrödinger showed how to allow for correlation between the operators, giving a stronger inequality, known as the Robertson–Schrödinger uncertainty relation,

σ_A² σ_B² ≥ |(1/2)⟨{Â, B̂}⟩ − ⟨Â⟩⟨B̂⟩|² + |(1/2i)⟨[Â, B̂]⟩|²,

where the anticommutator, {Â, B̂} = ÂB̂ + B̂Â, is used.
Phase space
In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function with star product ★ and a function f, the following is generally true:
Choosing f = a + bx + cp, we arrive at
Since this positivity condition is true for all a, b, and c, it follows that all the eigenvalues of the matrix are non-negative.
The non-negative eigenvalues then imply a corresponding non-negativity condition on the determinant,
or, explicitly, after algebraic manipulation,
Examples
Since the Robertson and Schrödinger relations are for general operators, the relations can be applied to any two observables to obtain specific uncertainty relations. A few of the most common relations found in the literature are given below.
Position–linear momentum uncertainty relation: for the position and linear momentum operators, the canonical commutation relation [x̂, p̂] = iħ implies the Kennard inequality from above:

σ_x σ_p ≥ ħ/2.
Angular momentum uncertainty relation: For two orthogonal components of the total angular momentum operator of an object:

σ_{J_i} σ_{J_j} ≥ (ħ/2) |⟨J_k⟩|,

where i, j, k are distinct, and J_i denotes angular momentum along the x_i axis. This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. Moreover, the choice Â = J_x, B̂ = J_y in angular momentum multiplets, ψ = |j, m⟩, bounds the Casimir invariant (angular momentum squared, J_x² + J_y² + J_z²) from below and thus yields useful constraints such as j(j + 1) ≥ m(m + 1), and hence j ≥ m, among others.
For the number of electrons in a superconductor and the phase of its Ginzburg–Landau order parameter,

ΔN Δφ ≥ 1.
Limitations
The derivation of the Robertson inequality for operators Â and B̂ requires Âψ and B̂ψ to be defined. There are quantum systems where these conditions are not valid.
One example is a quantum particle on a ring, where the wave function depends on an angular variable θ in the interval [0, 2π]. Define "position" and "momentum" operators Â and B̂ by

Âψ(θ) = θ ψ(θ)

and

B̂ψ(θ) = −iħ dψ/dθ,

with periodic boundary conditions on B̂. The definition of Â depends on choosing the range of θ to run from 0 to 2π. These operators satisfy the usual commutation relations for position and momentum operators, [Â, B̂] = iħ. More precisely, ÂB̂ψ − B̂Âψ = iħψ whenever both ÂB̂ψ and B̂Âψ are defined, and the space of such ψ is a dense subspace of the quantum Hilbert space.
Now let ψ be any of the eigenstates of B̂, which are given by ψ(θ) = e^{inθ}. These states are normalizable, unlike the eigenstates of the momentum operator on the line. Also the operator Â is bounded, since θ ranges over a bounded interval. Thus, in the state ψ, the uncertainty of B̂ is zero and the uncertainty of Â is finite, so that

σ_A σ_B = 0 < ħ/2.
The Robertson uncertainty principle does not apply in this case: Âψ is not in the domain of the operator B̂, since multiplication by θ disrupts the periodic boundary conditions imposed on B̂.
For the usual position and momentum operators x̂ and p̂ on the real line, no such counterexamples can occur. As long as σ_x and σ_p are defined in the state ψ, the Heisenberg uncertainty principle holds, even if ψ fails to be in the domain of x̂p̂ or of p̂x̂.
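A short numerical sketch of the ring counterexample (ħ = 1): for the momentum eigenstate e^{inθ}, the momentum uncertainty vanishes exactly, while the angle uncertainty is finite, so the naive product falls below ħ/2. Grid size and the mode number n are illustrative.

```python
import numpy as np

# Particle on a ring (hbar = 1): psi(theta) = exp(i*n*theta) is a momentum
# eigenstate, so sigma_p = 0 while sigma_theta is finite (uniform on [0, 2pi)).
M = 2048
theta = np.linspace(0.0, 2 * np.pi, M, endpoint=False)
dtheta = theta[1] - theta[0]
n = 3
psi = np.exp(1j * n * theta) / np.sqrt(2 * np.pi)

prob = np.abs(psi) ** 2                      # uniform density on the ring
mean = (theta * prob).sum() * dtheta
sigma_theta = np.sqrt(((theta - mean) ** 2 * prob).sum() * dtheta)

sigma_p = 0.0   # exact: psi is an eigenstate of -i d/dtheta with eigenvalue n
print(sigma_theta, sigma_theta * sigma_p)    # ~1.814 (= 2*pi/sqrt(12)), 0.0
```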
Mixed states
The Robertson–Schrödinger uncertainty can be improved noting that it must hold for all components in any decomposition of the density matrix given as
Here, the probabilities satisfy p_k ≥ 0 and Σ_k p_k = 1. Then, using the relation
for ,
it follows that
where the function in the bound is defined
The above relation very often has a bound larger than that of the original Robertson–Schrödinger uncertainty relation. Thus, we need to calculate the bound of the Robertson–Schrödinger uncertainty for the mixed components of the quantum state rather than for the quantum state, and compute an average of their square roots. The following expression is stronger than the Robertson–Schrödinger uncertainty relation
where on the right-hand side there is a concave roof over the decompositions of the density matrix.
The improved relation above is saturated by all single-qubit quantum states.
With similar arguments, one can derive a relation with a convex roof on the right-hand side
where denotes the quantum Fisher information and the density matrix is decomposed to pure states as
The derivation takes advantage of the fact that the quantum Fisher information is the convex roof of the variance times four.
A simpler inequality follows without a convex roof
which is stronger than the Heisenberg uncertainty relation, since for the quantum Fisher information we have
while for pure states the equality holds.
The Maccone–Pati uncertainty relations
The Robertson–Schrödinger uncertainty relation can be trivial if the state of the system is chosen to be an eigenstate of one of the observables. The stronger uncertainty relations proved by Lorenzo Maccone and Arun K. Pati give non-trivial bounds on the sum of the variances for two incompatible observables. (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., work due to Yichen Huang.) For two non-commuting observables Â and B̂ the first stronger uncertainty relation is given by
where , , is a normalized vector that is orthogonal to the state of the system and one should choose the sign of to make this real quantity a positive number.
The second stronger uncertainty relation is given by
where is a state orthogonal to .
The form of implies that the right-hand side of the new uncertainty relation is nonzero unless is an eigenstate of . One may note that can be an eigenstate of without being an eigenstate of either or . However, when is an eigenstate of one of the two observables the Heisenberg–Schrödinger uncertainty relation becomes trivial. But the lower bound in the new relation is nonzero unless is an eigenstate of both.
Energy–time
An energy–time uncertainty relation like

ΔE Δt ≥ ħ/2

has a long, controversial history; the meaning of ΔE and Δt varies, and different formulations have different arenas of validity. However, one well-known application is both well established and experimentally verified: the connection between the mean lifetime τ of a resonance state and its energy width Γ:

Γ τ = ħ.

In particle physics, widths from experimental fits to the Breit–Wigner energy distribution are used to characterize the lifetime of quasi-stable or decaying states.
An informal, heuristic meaning of the principle is the following: A state that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must be defined accurately, and this requires the state to hang around for many cycles, the reciprocal of the required accuracy. For example, in spectroscopy, excited states have a finite lifetime. By the time–energy uncertainty principle, they do not have a definite energy, and, each time they decay, the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow-decaying states have a narrow linewidth. The same linewidth effect also makes it difficult to specify the rest mass of unstable, fast-decaying particles in particle physics. The faster the particle decays (the shorter its lifetime), the less certain is its mass (the larger the particle's width).
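The lifetime/linewidth trade-off is just the arithmetic Γ ≈ ħ/τ. A back-of-envelope sketch; the 1.6 ns value below is illustrative of the order of a typical atomic excited-state lifetime, not a measured figure for any particular transition.

```python
# Back-of-envelope sketch of the lifetime/linewidth trade-off, Gamma ~ hbar/tau.
HBAR_EV_S = 6.582e-16           # reduced Planck constant in eV*s

def natural_linewidth_eV(tau_s):
    return HBAR_EV_S / tau_s    # energy width of a state with lifetime tau

print(natural_linewidth_eV(1.6e-9))   # ~4e-7 eV: slow decay, narrow line
print(natural_linewidth_eV(1e-23))    # ~66 MeV: fast decay, broad width
```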
Time in quantum mechanics
The concept of "time" in quantum mechanics offers many challenges. There is no quantum theory of time measurement; relativity is both fundamental to time and difficult to include in quantum mechanics. While position and momentum are associated with a single particle, time is a system property: it has no operator needed for the Robertson–Schrödinger relation. The mathematical treatment of stable and unstable quantum systems differ. These factors combine to make energy–time uncertainty principles controversial.
Three notions of "time" can be distinguished: external, intrinsic, and observable. External or laboratory time is seen by the experimenter; intrinsic time is inferred by changes in dynamic variables, like the hands of a clock or the motion of a free particle; observable time concerns time as an observable, the measurement of time-separated events.
An external-time energy–time uncertainty principle might say that measuring the energy of a quantum system to an accuracy requires a time interval . However, Yakir Aharonov and David Bohm have shown that, in some quantum systems, energy can be measured accurately within an arbitrarily short time: external-time uncertainty principles are not universal.
Intrinsic time is the basis for several formulations of energy–time uncertainty relations, including the Mandelstam–Tamm relation discussed in the next section. A physical system with an intrinsic time closely matching the external laboratory time is called a "clock".
Observable time, measuring time between two events, remains a challenge for quantum theories; some progress has been made using positive operator-valued measure concepts.
Mandelstam–Tamm
In 1945, Leonid Mandelstam and Igor Tamm derived a non-relativistic time–energy uncertainty relation as follows. From Heisenberg mechanics, the generalized Ehrenfest theorem for an observable B without explicit time dependence, represented by a self-adjoint operator B̂, relates the time dependence of the average value of B̂ to the average of its commutator with the Hamiltonian:

d⟨B̂⟩/dt = (i/ħ) ⟨[Ĥ, B̂]⟩.
The value of ⟨[Ĥ, B̂]⟩ is then substituted in the Robertson uncertainty relation for the energy operator Ĥ and B̂:

σ_H σ_B ≥ (1/2) |⟨[Ĥ, B̂]⟩|,

giving

σ_H (σ_B / |d⟨B̂⟩/dt|) ≥ ħ/2

(whenever the denominator is nonzero).
While this is a universal result, it depends upon the observable chosen and on the fact that the deviations σ_H and σ_B are computed for a particular state.
Identifying ΔE ≡ σ_H and the characteristic time

τ_B ≡ σ_B / |d⟨B̂⟩/dt|

gives an energy–time relationship

ΔE τ_B ≥ ħ/2.

Although τ_B has the dimension of time, it is different from the time parameter t that enters the Schrödinger equation. This τ_B can be interpreted as the time for which the expectation value of the observable, ⟨B̂⟩, changes by an amount equal to one standard deviation.
Examples:
The time a free quantum particle passes a point in space is more uncertain as the energy of the state is more precisely controlled: Since the time spread is related to the particle position spread and the energy spread is related to the momentum spread, this relation is directly related to position–momentum uncertainty.
A Delta particle, a quasistable composite of quarks related to protons and neutrons, has a lifetime of 10^−23 s, so its measured mass equivalent to energy, 1232 MeV/c², varies by ±120 MeV/c²; this variation is intrinsic and not caused by measurement errors.
Two energy states with energies E1 and E2 are superimposed to create a composite state
The probability amplitude of this state has a time-dependent interference term:
The oscillation period varies inversely with the energy difference: T = 2πħ/(E2 − E1).
Each example has a different meaning for the time uncertainty, according to the observable and state used.
Quantum field theory
Some formulations of quantum field theory use temporary electron–positron pairs in their calculations, called virtual particles. The mass-energy and lifetime of these particles are related by the energy–time uncertainty relation. The energy of a quantum system is not known with enough precision to limit its behavior to a single, simple history. Thus the influence of all histories must be incorporated into quantum calculations, including those with much greater or much less energy than the mean of the measured/calculated energy distribution.
The energy–time uncertainty principle does not temporarily violate conservation of energy; it does not imply that energy can be "borrowed" from the universe as long as it is "returned" within a short amount of time. The energy of the universe is not an exactly known parameter at all times. When events transpire at very short time intervals, there is uncertainty in the energy of these events.
Harmonic analysis
In the context of harmonic analysis the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform. To wit, the following inequality holds:

( ∫ x² |f(x)|² dx ) ( ∫ ξ² |f̂(ξ)|² dξ ) ≥ ‖f‖₂⁴ / (16π²).

Further mathematical uncertainty inequalities, including the above entropic uncertainty, hold between a function f and its Fourier transform f̂.
Signal processing
In the context of time–frequency analysis uncertainty principles are referred to as the Gabor limit, after Dennis Gabor, or sometimes the Heisenberg–Gabor limit. The basic result, which follows from "Benedicks's theorem", below, is that a function cannot be both time limited and band limited (a function and its Fourier transform cannot both have bounded domain); see bandlimited versus timelimited. More accurately, the time-bandwidth or duration-bandwidth product satisfies

σ_t σ_f ≥ 1/(4π),

where σ_t and σ_f are the standard deviations of the time and frequency energy concentrations respectively. The minimum is attained for a Gaussian-shaped pulse (Gabor wavelet); for the un-squared Gaussian (i.e. the signal amplitude) and its un-squared Fourier transform magnitude, the spreads are larger, since squaring divides each σ by √2. Another common measure is the product of the time and frequency full width at half maximum (of the power/energy), which for the Gaussian equals 2 ln 2 / π ≈ 0.44 (see bandwidth-limited pulse).
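The Gabor limit can be checked numerically with an FFT: for a Gaussian pulse, the product of the energy spreads in time and frequency should come out at 1/(4π) up to discretization error. The grid parameters and pulse width below are illustrative.

```python
import numpy as np

# Sketch: time-bandwidth product of a Gaussian pulse via the FFT, checking
# the Gabor limit sigma_t * sigma_f >= 1/(4*pi), which a Gaussian attains.
N, T = 2**14, 200.0
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
f = np.fft.fftfreq(N, d=T / N)                  # frequency grid in Hz

def sigma(grid, energy):
    w = energy / energy.sum()                   # normalized energy weights
    mean = (grid * w).sum()
    return np.sqrt(((grid - mean) ** 2 * w).sum())

pulse = np.exp(-t**2 / (2 * 3.0**2))            # Gaussian amplitude, width 3
spec = np.fft.fft(pulse)
st = sigma(t, np.abs(pulse) ** 2)               # spread of the signal energy
sf = sigma(f, np.abs(spec) ** 2)                # spread of the spectral energy
print(st * sf, 1 / (4 * np.pi))                 # ~0.0796 vs 0.0796
```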
Stated differently, one cannot simultaneously sharply localize a signal in both the time domain and frequency domain.
When applied to filters, the result implies that one cannot simultaneously achieve a high temporal resolution and high frequency resolution at the same time; a concrete example are the resolution issues of the short-time Fourier transform—if one uses a wide window, one achieves good frequency resolution at the cost of temporal resolution, while a narrow window has the opposite trade-off.
Alternate theorems give more precise quantitative results, and, in time–frequency analysis, rather than interpreting the (1-dimensional) time and frequency domains separately, one instead interprets the limit as a lower limit on the support of a function in the (2-dimensional) time–frequency plane. In practice, the Gabor limit limits the simultaneous time–frequency resolution one can achieve without interference; it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other.
As a result, in order to analyze signals where the transients are important, the wavelet transform is often used instead of the Fourier.
Discrete Fourier transform
Let x = (x_n) be a sequence of N complex numbers and X = (X_k) be its discrete Fourier transform.
Denote by ‖x‖₀ the number of non-zero elements in the time sequence and by ‖X‖₀ the number of non-zero elements in the frequency sequence. Then

‖x‖₀ · ‖X‖₀ ≥ N.
This inequality is sharp, with equality achieved when x or X is a Dirac mass, or more generally when x is a nonzero multiple of a Dirac comb supported on a subgroup of the integers modulo N (in which case X is also a Dirac comb supported on a complementary subgroup, and vice versa).
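A small numerical sketch of the discrete relation: a Dirac comb on a subgroup of the integers modulo N achieves equality, while a generic sparse signal satisfies the inequality strictly. N and the comb spacing below are illustrative.

```python
import numpy as np

# Discrete uncertainty principle: for nonzero x in C^N,
# (nonzeros of x) * (nonzeros of DFT(x)) >= N, with equality for a
# Dirac comb supported on a subgroup of the integers modulo N.
N = 12

def support_sizes(x):
    X = np.fft.fft(x)
    return (np.count_nonzero(~np.isclose(x, 0)),
            np.count_nonzero(~np.isclose(X, 0)))

comb = np.zeros(N); comb[::4] = 1          # comb on {0,4,8}, a subgroup of Z_12
n_t, n_f = support_sizes(comb)
print(n_t, n_f, n_t * n_f)                 # 3 * 4 = 12 = N  (equality)

rng = np.random.default_rng(1)
x = np.zeros(N); x[rng.choice(N, 3, replace=False)] = rng.normal(size=3)
n_t, n_f = support_sizes(x)
print(n_t, n_f, n_t * n_f >= N)            # generic signal: strict inequality
```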
More generally, if T and W are subsets of the integers modulo N, let P_T denote the time-limiting operator and P_W the band-limiting operator, respectively. Then
where the norm is the operator norm of operators on the Hilbert space of functions on the integers modulo N. This inequality has implications for signal reconstruction.
When N is a prime number, a stronger inequality holds:

‖x‖₀ + ‖X‖₀ ≥ N + 1.
Discovered by Terence Tao, this inequality is also sharp.
Benedicks's theorem
Amrein–Berthier and Benedicks's theorem intuitively says that the set of points where f is non-zero and the set of points where f̂ is non-zero cannot both be small.
Specifically, it is impossible for a function f in L²(R) and its Fourier transform f̂ to both be supported on sets of finite Lebesgue measure. A more quantitative version is
One expects that the factor Ce^{C|S||Σ|} may be replaced by Ce^{C(|S||Σ|)^{1/d}}, which is only known if either S or Σ is convex.
Hardy's uncertainty principle
The mathematician G. H. Hardy formulated the following uncertainty principle: it is not possible for f and f̂ to both be "very rapidly decreasing". Specifically, if f in L²(R) is such that

|f(x)| ≤ C (1 + |x|)^N e^{−aπx²}

and

|f̂(ξ)| ≤ C (1 + |ξ|)^N e^{−bπξ²}

(C > 0, N an integer),

then, if ab > 1, f = 0, while if ab = 1, there is a polynomial P of degree at most N such that

f(x) = P(x) e^{−aπx²}.
This was later improved as follows: if is such that
then
where P is a polynomial of degree (N − d)/2 and A is a real positive definite matrix.
This result was stated in Beurling's complete works without proof and proved in Hörmander (the one-dimensional case) and Bonami, Demange, and Jaming for the general case. Note that Hörmander–Beurling's version implies the case ab > 1 in Hardy's Theorem while the version by Bonami–Demange–Jaming covers the full strength of Hardy's Theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in a subsequent reference.
A full description of the case ab < 1 as well as the following extension to Schwartz class distributions appears in a subsequent reference.
Additional uncertainty relations
Heisenberg limit
In quantum metrology, and especially interferometry, the Heisenberg limit is the optimal rate at which the accuracy of a measurement can scale with the energy used in the measurement. Typically, this is the measurement of a phase (applied to one arm of a beam-splitter) and the energy is given by the number of photons used in an interferometer. Although some claim to have broken the Heisenberg limit, this reflects disagreement on the definition of the scaling resource. Suitably defined, the Heisenberg limit is a consequence of the basic principles of quantum mechanics and cannot be beaten, although the weak Heisenberg limit can be beaten.
Systematic and statistical errors
The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation σ. Heisenberg's original version, however, dealt with the systematic error: a disturbance of the quantum system produced by the measuring apparatus, i.e., an observer effect.
If we let ε_A represent the error (i.e., inaccuracy) of a measurement of an observable A and η_B the disturbance produced on a subsequent measurement of the conjugate variable B by the former measurement of A, then the inequality proposed by Masanao Ozawa, encompassing both systematic and statistical errors, holds:

ε_A η_B + ε_A σ_B + σ_A η_B ≥ (1/2) |⟨[Â, B̂]⟩|.
Heisenberg's uncertainty principle, as originally described in the 1927 formulation, mentions only the first term of the Ozawa inequality, regarding the systematic error. Using the notation above to describe the error/disturbance effect of sequential measurements (first A, then B), it could be written as

ε_A η_B ≥ (1/2) |⟨[Â, B̂]⟩|.
The formal derivation of the Heisenberg relation is possible but far from intuitive. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years.
Also, it must be stressed that the Heisenberg formulation does not take into account the intrinsic statistical errors σ_A and σ_B. There is increasing experimental evidence that the total quantum uncertainty cannot be described by the Heisenberg term alone, but requires the presence of all three terms of the Ozawa inequality.
Using the same formalism, it is also possible to introduce another kind of physical situation, often confused with the previous one, namely the case of simultaneous measurements (A and B at the same time):
The two simultaneous measurements on A and B are necessarily unsharp or weak.
It is also possible to derive an uncertainty relation that, like Ozawa's, combines both the statistical and systematic error components, but keeps a form very close to the Heisenberg original inequality. By adding the Robertson relation

σ_A σ_B ≥ (1/2) |⟨[Â, B̂]⟩|

and the Ozawa relation, we obtain

ε_A η_B + ε_A σ_B + σ_A η_B + σ_A σ_B ≥ |⟨[Â, B̂]⟩|.

The four terms can be written as:

(ε_A + σ_A)(η_B + σ_B) ≥ |⟨[Â, B̂]⟩|.
Defining

ε̄_A ≡ ε_A + σ_A

as the inaccuracy in the measured values of the variable A and

η̄_B ≡ η_B + σ_B

as the resulting fluctuation in the conjugate variable B, Kazuo Fujikawa established an uncertainty relation similar to the Heisenberg original one, but valid both for systematic and statistical errors:

ε̄_A η̄_B ≥ |⟨[Â, B̂]⟩|.
Quantum entropic uncertainty principle
For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle have little physical meaning for fluctuations larger than one period. Other examples include highly bimodal distributions, or unimodal distributions with divergent variance.
A solution that overcomes these issues is an uncertainty relation based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic certainty. This conjecture, also studied by I. I. Hirschman and proven in 1975 by W. Beckner and by Iwo Bialynicki-Birula and Jerzy Mycielski, is that, for two normalized, dimensionless Fourier transform pairs f(a) and g(b), where g is the Fourier transform of f,
the Shannon information entropies

H_a = −∫ |f(a)|² log |f(a)|² da

and

H_b = −∫ |g(b)|² log |g(b)|² db

are subject to the following constraint,

H_a + H_b ≥ log(e/2),
where the logarithms may be in any base.
The probability distribution functions associated with the position wave function ψ(x) and the momentum wave function φ(p) have dimensions of inverse length and inverse momentum respectively, but the entropies may be rendered dimensionless by

H_x = −∫ |ψ(x)|² log( x0 |ψ(x)|² ) dx and H_p = −∫ |φ(p)|² log( p0 |φ(p)|² ) dp,

where x0 and p0 are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function ψ(x) and the momentum wave function φ(p), the above constraint can be written for the corresponding entropies as

H_x + H_p ≥ log( e h / (2 x0 p0) ),

where h is the Planck constant.
Depending on one's choice of the x0 p0 product, the expression may be written in many ways. If x0 p0 is chosen to be h, then

H_x + H_p ≥ log(e/2).
If, instead, x0 p0 is chosen to be ħ, then

H_x + H_p ≥ log(eπ).
If x0 and p0 are chosen to be unity in whatever system of units are being used, then

H_x + H_p ≥ log( e h / 2 ),

where h is interpreted as a dimensionless number equal to the value of the Planck constant in the chosen system of units. Note that these inequalities can be extended to multimode quantum states, or wavefunctions in more than one spatial dimension.
The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle. From the inverse logarithmic Sobolev inequalities
(equivalently, from the fact that normal distributions maximize the entropy of all distributions with a given variance), it readily follows that this entropic uncertainty principle is stronger than the one based on standard deviations, because

σ_x σ_p ≥ (1/(2πe)) e^{H_x + H_p} ≥ ħ/2.

In other words, the Heisenberg uncertainty principle is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities. First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall the Shannon entropy has been used, not the quantum von Neumann entropy. Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance.
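A numerical sketch of the entropic bound (ħ = 1, entropies in nats, so the saturating value is log(eπ)): for a Gaussian state, the differential entropies of position and momentum should sum to exactly that value; the discretized grid below only approximates the integrals, and its parameters are illustrative.

```python
import numpy as np

# Sketch (hbar = 1): differential entropies of |psi(x)|^2 and |phi(p)|^2 for
# a Gaussian state, checking H_x + H_p >= log(e*pi). A Gaussian saturates it.
N, L = 4096, 100.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dp = 2 * np.pi / L

def entropy(density, step):                     # -integral of rho*ln(rho)
    rho = density / (density.sum() * step)      # normalize to unit area
    mask = rho > 0
    return -(rho[mask] * np.log(rho[mask])).sum() * step

psi = np.exp(-x**2 / (4 * 1.5**2))              # Gaussian with sigma_x = 1.5
phi = np.fft.fft(psi)                           # momentum-space amplitudes
Hx = entropy(np.abs(psi) ** 2, dx)
Hp = entropy(np.abs(phi) ** 2, dp)
print(Hx + Hp, np.log(np.e * np.pi))            # ~2.1447 vs 2.1447
```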
A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let δx be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset c. The probability of lying within the jth interval of width δx is

P[x_j] = ∫ from (j − 1/2)δx + c to (j + 1/2)δx + c of |ψ(x)|² dx.
To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as

H_x = −Σ_j P[x_j] ln P[x_j].
Under the above definition, the entropic uncertainty relation is

H_x + H_p ≥ ln( e h / (2 δx δp) ).
Here we note that δx δp / h is a typical infinitesimal phase space volume used in the calculation of a partition function. The inequality is also strict and not saturated. Efforts to improve this bound are an active area of research.
Uncertainty relation with three angular momentum components
For a particle of total angular momentum j the following uncertainty relation holds

σ²(J_x) + σ²(J_y) + σ²(J_z) ≥ j ħ²,

where J_x, J_y, J_z are angular momentum components. The relation can be derived from

⟨J_x² + J_y² + J_z²⟩ = j(j + 1) ħ²

and

⟨J_x⟩² + ⟨J_y⟩² + ⟨J_z⟩² ≤ j² ħ².
The relation can be strengthened as
where F_Q denotes the quantum Fisher information.
History
In 1925 Heisenberg published the Umdeutung (reinterpretation) paper where he showed that a central aspect of quantum theory was non-commutativity: the theory implied that the relative order of position and momentum measurements was significant. Working with Max Born and Pascual Jordan, he continued to develop matrix mechanics, which would become the first modern quantum mechanics formulation.
In March 1926, working in Bohr's institute, Heisenberg realized that the non-commutativity implies the uncertainty principle. Writing to Wolfgang Pauli in February 1927, he worked out the basic concepts.
In his celebrated 1927 paper "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement, but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. His paper gave an analysis in terms of a microscope that Bohr showed was incorrect; Heisenberg included an addendum to the publication.
In his 1930 Chicago lecture he refined his principle:
Later work broadened the concept. Any two variables that do not commute cannot be measured simultaneously—the more precisely one is known, the less precisely the other can be known. Heisenberg wrote:

It can be expressed in its simplest form as follows: One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant.
Kennard in 1927 first proved the modern inequality:

σ_x σ_p ≥ ħ/2,

where ħ = h/2π, and σ_x, σ_p are the standard deviations of position and momentum. (Heisenberg only proved this relation for the special case of Gaussian states.) In 1929 Robertson generalized the inequality to all observables, and in 1930 Schrödinger extended the form to allow non-zero covariance of the operators; this result is referred to as the Robertson–Schrödinger inequality.
Terminology and translation
Throughout the main body of his original 1927 paper, written in German, Heisenberg used the word "Ungenauigkeit" ("imprecision") to describe the basic theoretical principle. Only in the endnote did he switch to the word "Unsicherheit" ("uncertainty"). Later on, he always used "Unbestimmtheit" ("indeterminacy"). When the English-language version of Heisenberg's textbook, The Physical Principles of the Quantum Theory, was published in 1930, however, only the English word "uncertainty" was used, and it became the term in the English language.
Heisenberg's microscope
The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurements designed to violate it were bound always to be unworkable. One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by using the observer effect of an imaginary microscope as a measuring device.
He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it.
Problem 1 – If the photon has a short wavelength, and therefore, a large momentum, the position can be measured accurately. But the photon scatters in a random direction, transferring a large and uncertain amount of momentum to the electron. If the photon has a long wavelength and low momentum, the collision does not disturb the electron's momentum very much, but the scattering will reveal its position only vaguely.
Problem 2 – If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon affects the electron's momentum along the beamline, and hence the new momentum of the electron is poorly resolved. If a small aperture is used, the accuracy of the two resolutions is reversed.
The combination of these trade-offs implies that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower limit, which is (up to a small numerical factor) equal to the Planck constant. Heisenberg did not care to formulate the uncertainty principle as an exact limit, preferring to use it as a heuristic quantitative statement, correct up to small numerical factors, which makes the radically new noncommutativity of quantum mechanics inevitable.
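A minimal sketch of this trade-off, using the standard textbook estimates (Rayleigh resolution $\Delta x \approx \lambda/(2\sin\theta)$ and photon recoil spread $\Delta p \approx 2h\sin\theta/\lambda$; the function name and sample values are mine):

```python
import math

H = 6.62607015e-34  # Planck constant (J s)

def microscope_tradeoff(wavelength, sin_theta):
    """Order-of-magnitude uncertainties for Heisenberg's microscope.

    wavelength: photon wavelength (m); sin_theta: numerical aperture
    (sine of the half-angle of the light cone entering the microscope).
    """
    dx = wavelength / (2 * sin_theta)        # Rayleigh-type resolution limit
    dp = 2 * H * sin_theta / wavelength      # transverse photon recoil spread
    return dx, dp

# The product dx*dp is always ~h, whatever wavelength/aperture we pick:
for lam, na in [(1e-10, 0.9), (5e-7, 0.1), (1e-12, 0.5)]:
    dx, dp = microscope_tradeoff(lam, na)
    print(f"lambda={lam:.0e} m  NA={na}:  dx*dp = {dx*dp:.3e} J s (h = {H:.3e})")
```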
Intrinsic quantum uncertainty
Historically, the uncertainty principle has been confused with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the system, that is, without changing something in a system. Heisenberg used such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty. It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems, and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology.
Critical reactions
The Copenhagen interpretation of quantum mechanics and Heisenberg's uncertainty principle were, in fact, initially seen as twin targets by detractors. According to the Copenhagen interpretation of quantum mechanics, there is no fundamental reality that the quantum state describes, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be.
Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years.
Ideal detached observer
Wolfgang Pauli called Einstein's fundamental objection to the uncertainty principle "the ideal of the detached observer" (phrase translated from the German):
Einstein's slit
The first of Einstein's thought experiments challenging the uncertainty principle went as follows:
Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy $\Delta p$, the momentum of the wall must be known to this accuracy before the particle passes through. This introduces an uncertainty in the position of the wall, and therefore of the slit, equal to $h/\Delta p$; if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement.
A similar analysis with particles diffracting through multiple slits is given by Richard Feynman.
Einstein's box
Bohr was present when Einstein proposed the thought experiment which has become known as Einstein's box. Einstein argued that "Heisenberg's uncertainty equation implied that the uncertainty in time was related to the uncertainty in energy, the product of the two being related to the Planck constant." Consider, he said, an ideal box, lined with mirrors so that it can contain light indefinitely. The box could be weighed before a clockwork mechanism opened an ideal shutter at a chosen instant to allow one single photon to escape. "We now know, explained Einstein, precisely the time at which the photon left the box." "Now, weigh the box again. The change of mass tells the energy of the emitted light. In this manner, said Einstein, one could measure the energy emitted and the time it was released with any desired precision, in contradiction to the uncertainty principle."
Bohr spent a sleepless night considering this argument, and eventually realized that it was flawed. He pointed out that if the box were to be weighed, say by a spring and a pointer on a scale, "since the box must move vertically with a change in its weight, there will be uncertainty in its vertical velocity and therefore an uncertainty in its height above the table. ... Furthermore, the uncertainty about the elevation above the Earth's surface will result in an uncertainty in the rate of the clock", because of Einstein's own theory of gravity's effect on time. "Through this chain of uncertainties, Bohr showed that Einstein's light box experiment could not simultaneously measure exactly both the energy of the photon and the time of its escape."
EPR paradox for entangled particles
In 1935, Einstein, Boris Podolsky and Nathan Rosen published an analysis of spatially separated entangled particles (EPR paradox). According to EPR, one could measure the position of one of the entangled particles and the momentum of the second particle, and from those measurements deduce the position and momentum of both particles to any precision, violating the uncertainty principle. In order to avoid such possibility, the measurement of one particle must modify the probability distribution of the other particle instantaneously, possibly violating the principle of locality.
In 1964, John Stewart Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out EPR's basic assumption of local hidden variables.
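For concreteness, a small sketch of the CHSH form of Bell's argument: quantum mechanics predicts correlations $E(a,b) = -\cos(a-b)$ for the spin singlet, while local hidden-variable theories obey $|S| \le 2$. The angle choices below are the standard optimal ones:

```python
import math

# CHSH check for the singlet state. Local hidden-variable theories
# require |S| <= 2; quantum mechanics reaches 2*sqrt(2).
def E(a, b):
    return -math.cos(a - b)   # singlet-state correlation for settings a, b

# Standard optimal measurement angles (radians)
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))                 # ~2.828 = 2*sqrt(2), violating the classical bound 2
```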
Popper's criticism
Science philosopher Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist. He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations". In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory.
In 1934, Popper published Zur Kritik der Ungenauigkeitsrelationen ("Critique of the Uncertainty Relations") in Die Naturwissenschaften, and in the same year Logik der Forschung (translated and updated by the author as The Logic of Scientific Discovery in 1959), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum Theory and the Schism in Physics, writing:
Popper proposed an experiment to falsify the uncertainty relations, although he later withdrew his initial version after discussions with Carl Friedrich von Weizsäcker, Heisenberg, and Einstein; Popper sent his paper to Einstein and it may have influenced the formulation of the EPR paradox.
Free will
Some scientists, including Arthur Compton and Martin Heisenberg, have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature. Proponents of this theory commonly say that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells.
Thermodynamics
There is reason to believe that a violation of the uncertainty principle would also strongly imply a violation of the second law of thermodynamics; see the Gibbs paradox.
Rejection of the principle
Uncertainty principles relate quantum particles – electrons for example – to classical concepts – position and momentum. This presumes quantum particles have position and momentum. Edwin C. Kemble pointed out in 1937 that such properties cannot be experimentally verified and assuming they exist gives rise to many contradictions; similarly Rudolf Haag notes that position in quantum mechanics is an attribute of an interaction, say between an electron and a detector, not an intrinsic property. From this point of view the uncertainty principle is not a fundamental quantum property but a concept "carried over from the language of our ancestors", as Kemble says.
Applications
Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. All forms of spectroscopy, including particle physics, use the relationship to relate a measured energy line-width to the lifetime of quantum states. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting or quantum-optics systems. Applications that depend on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational-wave interferometers.
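A back-of-the-envelope sketch of the line-width/lifetime relationship mentioned above, $\tau \approx \hbar/\Gamma$; the numerical widths are illustrative values, not taken from the text:

```python
# A state with decay width Gamma (FWHM of its energy distribution)
# has mean lifetime tau = hbar / Gamma.
HBAR_EV_S = 6.582119569e-16   # hbar in eV*s

def lifetime_from_width(gamma_ev):
    """Mean lifetime (s) of a state with energy width gamma_ev (eV)."""
    return HBAR_EV_S / gamma_ev

print(lifetime_from_width(2.5e9))    # ~2.6e-25 s for a ~2.5 GeV particle resonance
print(lifetime_from_width(6.6e-8))   # ~1.0e-8 s for a narrow atomic line
```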
| Physical sciences | Quantum mechanics | null |
31888 | https://en.wikipedia.org/wiki/U-boat | U-boat | U-boats were naval submarines operated by Germany, particularly in the First and Second World Wars. The term is an anglicized version of the German word U-Boot, a shortening of Unterseeboot (under-sea boat), though the German term refers to any submarine. Austro-Hungarian Navy submarines were also known as U-boats.
U-boats are best known for their unrestricted submarine warfare in both world wars, attempting to disrupt merchant traffic towards the UK and force the UK out of the war. In World War I, Germany intermittently waged unrestricted submarine warfare against the UK: a first campaign in 1915 was abandoned after strong protests from the US, but the Germans, facing deadlock on the continent, saw no option other than to resume the campaign in February 1917. The renewed campaign failed to achieve its goal, mainly because of the introduction of convoys. Instead, it helped ensure Germany's final defeat, as it contributed to the entry of the US into the First World War.
In World War II, Karl Dönitz, supreme commander of the Kriegsmarine's U-boat arm, was convinced the UK and its convoys could be defeated by new tactics, and tried to focus on convoy battles. Though U-boat tactics initially saw success in the Battle of the Atlantic, greatly disrupting Allied shipping, improved convoy and anti-submarine tactics such as high-frequency direction finding and the Hedgehog anti-submarine system began to take a toll on the German U-boat force. This ultimately came to a head in May 1943, known as Black May, in which U-boat losses began to outpace their effect on shipping.
Early U-boats (1850–1914)
The first submarine built in Germany, the three-man Brandtaucher, sank to the bottom of Kiel Harbor on 1 February 1851 during a test dive.
Inventor and engineer Wilhelm Bauer had designed this vessel in 1850, and Schweffel and Howaldt constructed it in Kiel. Dredging operations in 1887 rediscovered Brandtaucher; she was later raised and put on historical display in Germany. The boats Nordenfelt I and Nordenfelt II, built to a Nordenfelt design, followed in 1890. In 1903, the Friedrich Krupp Germaniawerft dockyard in Kiel completed the first fully functional German-built submarine, Forelle, which Krupp sold to Russia during the Russo-Japanese War in April 1904.
At the beginning of the century, Alfred von Tirpitz, the commander of the German Navy, was building the High Seas Fleet, with which he intended to challenge the supremacy of the Royal Navy. He focused on expensive battleships, and there was no role for submarines in his fleet. Only when Krupp exported its submarines to Russia, Italy, Norway and Austria-Hungary did Tirpitz order one submarine. The resulting U-1 was a completely redesigned boat, and when the Imperial German Navy commissioned it on 14 December 1906, the German navy was the last major navy to adopt submarines. The U-1 had a double hull and a single torpedo tube. It used an electric motor powered by batteries for submerged propulsion and a Körting kerosene engine for charging the batteries and for propulsion on the surface. The 50%-larger U-2 (commissioned in 1908) had two torpedo tubes.
Because speed and range were severely limited underwater while running on battery power, U-boats were required to spend most of their time surfaced, running on fuel engines, diving only when attacked or for torpedo strikes. The more ship-like hull design reflects the fact that these were primarily surface vessels that could submerge when necessary. This contrasts with the cylindrical profile of modern nuclear submarines, which are more hydrodynamic under water (where they spend the majority of their time), but less stable on the surface. While U-boats were faster on the surface than submerged, the opposite is generally true of modern submarines.
Between 1908 and 1910 fourteen big boats with four torpedo tubes and two reload torpedoes were ordered. These boats used a kerosene engine which was safer than gasoline and more powerful than steam, but the white exhaust of the kerosene betrayed the presence of the U-boats, robbing them of their primary asset, their stealth. Diesel engines did not have that disadvantage, but a powerful and reliable diesel engine was still under development. Finally the U-19 class of 1912–13 had the first diesel engine installed in a German navy boat. Between 1910 and 1912 twenty-three diesel U-boats were ordered. At the start of World War I in 1914, Germany had 48 submarines of 13 classes in service or under construction. During that war, the Imperial German Navy used SM U-1 for training. Retired in 1919, she remains on display at the Deutsches Museum in Munich.
World War I (1914–1918)
Operations
During 1914, the U-boats operated against the British fleet: on 5 September 1914, the light cruiser HMS Pathfinder was sunk by SM U-21, the first ship to have been sunk by a submarine using a self-propelled torpedo. On 22 September, SM U-9 sank the armoured cruisers HMS Aboukir, HMS Hogue, and HMS Cressy. As a result, the British Grand Fleet had to withdraw to safer waters in Northern Ireland. Against merchant ships, U-boats observed the "prize rules", which meant they had to stop and inspect the ship and take the crew off before they could sink it. On 20 October 1914, SM U-17 sank the first merchant ship, the Glitra, off Norway. Only ten merchants were sunk in that way before the policy was changed on 18 February 1915. On the continent, German hopes for a quick victory were dashed and a stalemate had settled on the front. The Germans hoped to break the deadlock by starting an unrestricted submarine campaign against shipping in the waters around the British Isles. This was also cited as retaliation for British minefields and shipping blockades. Under the instructions given to U-boat captains, they could sink merchant ships, even neutral ones, without warning.
Only 29 U-boats were available for the campaign, and no more than seven were active around the British Isles at any time. The U-boats failed to enforce a blockade, but three sinkings of liners, with the loss of American lives, so outraged the US that the Kaiser had to stop the campaign in September 1915: on 7 May 1915 SM U-20 sank RMS Lusitania; on 19 August SM U-24 sank SS Arabic; and on 9 September SM U-20 sank RMS Hesperian. Most of the U-boats were sent to the Mediterranean. At the beginning of 1916, 54 U-boats were available, and the Kaiser again allowed operations around the British Isles, but with strict rules: no attacks on liners, and outside the war zone around the British Isles attacks were only allowed on armed merchant ships. But on 24 March, 25 Americans were killed in the torpedoing of the cross-Channel ferry Sussex, which was mistaken for a troopship by SM UB-29. The US threatened to sever diplomatic ties, which persuaded the Germans to fully reapply prize rules. By September 1916, 120 U-boats were in service, and again some were sent to the Mediterranean. Whilst prize rules were observed around the British Isles, a new unrestricted campaign was started in the Mediterranean. The renewed German campaign was effective, sinking 1.4 million tons of shipping between October 1916 and January 1917. Despite this, the deadlock on the continental frontlines demanded even greater results, and on 1 February 1917, Germany restarted the unrestricted submarine campaign around the British Isles. Germany took the gamble that the U-boat campaign would force the UK out of the war before the US could effectively enter. On 3 February the US severed diplomatic relations with Germany, and on 6 April the US declared war on Germany.
Unrestricted submarine warfare in 1917 was very successful, sinking more than 500,000 tons of shipping a month. With the introduction of convoys in August 1917, shipping losses declined to 300,000 tons a month on average, which was not sufficient to force the UK out of the war. With deteriorating conditions on the continent, all U-boats were recalled on 31 October 1918. An armistice became effective on 11 November 1918. Under the terms of the armistice, all U-boats were to surrender immediately. Those in home waters sailed to the British submarine base at Harwich, after which the vessels were studied, then scrapped or given to Allied navies. Stephen King-Hall wrote a detailed eyewitness account of the surrender.
Of the 373 German U-boats that had been built, 179 were operational or nearly operational at the end of the war. 178 were lost by enemy action. 512 officers and 4894 enlisted men were killed. Of the surviving German submarines, 14 U-boats were scuttled and 122 surrendered. They sank 10 pre-dreadnought battleships, 18 heavy and light cruisers, and several smaller naval vessels. They further destroyed 5,708 merchant and fishing vessels for a total of 11,108,865 tons and the loss of about 15,000 sailors.
The Pour le Mérite, the highest decoration for gallantry for officers, was awarded to 29 U-boat commanders. Twelve U-boat crewmen were decorated with the Goldenes Militär-Verdienst-Kreuz, the highest bravery award for noncommissioned officers and enlisted men. The most successful U-boat commanders of World War I were Lothar von Arnauld de la Perière (189 merchant vessels and two gunboats with 446,708 tons), followed by Walter Forstmann (149 ships with 391,607 tons), and Max Valentiner (144 ships with 299,482 tons). Their records have not been surpassed in any subsequent conflict.
Classes
Interwar years (1919–1939)
Construction
The Treaty of Versailles ending World War I, signed at the Paris Peace Conference in 1919, limited the surface navy of Germany's new Weimar Republic to only six battleships, six cruisers, twelve destroyers and twelve torpedo boats. The treaty also restricted the independent tonnage of ships and forbade the construction of submarines. In order to circumvent these restrictions, a submarine design office called Ingenieurskantoor voor Scheepsbouw (IVS) was set up in the Netherlands. The IVS was run by Krupp and made it possible to maintain a lead in submarine technology by designing and constructing submarines in the Netherlands for other nations. The IVS made designs for small 250-ton U-boats, medium 500-ton U-boats and large 750-ton U-boats.
The IVS constructed three 500-ton medium submarines in Finland between 1927 and 1931, known as the Vetehinen class. These ships were the prototypes for the subsequent German Type VII U-boat. In 1933 a small 250-ton submarine, the Vesikko, was built; it was nearly identical to the subsequent German Type II U-boat. A fifth, very small 100-ton submarine, the Saukko, was built in 1933 as well. In Spain a large 750-ton boat was built between 1929 and 1930; after the Spanish lost interest in the U-boat, it was sold to Turkey, where it entered service as the Gür. German sailors assisted in the trials of these submarines. These secret programs were exposed in the Lohmann Affair, and as a result the head of the navy, Hans Zenker, had to resign. His successor Erich Raeder continued the policy of secretly breaching the Versailles Treaty. On 15 November 1932 a plan was approved for an expansion of the German navy which included U-boats.
In 1935, Britain sought to control the increasingly apparent breaches of the Versailles Treaty and concluded the Anglo-German Naval Agreement. This officially ended the limitations of the Versailles Treaty and allowed Germany to build ships in a 35:100 tonnage ratio to the British fleet. For submarines the Germans obtained parity in tonnage, but promised to stay within a 45 percent limit unless special circumstances arose. This allowed 24,000 tons for U-boat building. Only one week after the signature of the agreement, the first of six Type II U-boats was commissioned into the German Navy, which changed its name from Reichsmarine to Kriegsmarine ("War Navy"). Within the year, the Germans commissioned a total of 36 U-boats for a total of 12,500 tons:
Twenty-four small 250-ton Type II U-boats
Ten medium 500-ton Type VII U-boats
Two large 750-ton Type I U-boats, based on the design of the Spanish-built submarine mentioned above
Karl Dönitz was appointed as head of the submarine section of the Kriegsmarine. He believed firmly that, in spite of the Anglo-German Naval Agreement and Hitler's policy of avoiding conflict with Britain, the next war would be with Britain. Based on these views he requested that the remaining 11,500 tons be used for building twenty-three medium submarines, which were in his opinion the ideal type for the commerce war against British convoys. Raeder, however, did not share these views and opted for a more balanced expansion of the submarine fleet:
Eight small 250-ton improved type II U-boats
Seven medium 500-ton U-boats. The Type VII had been designed with a single rudder, which had two drawbacks: because the rudder was not in the wash of the two propellers, rudder response was poor, and the stern torpedo tube had to be mounted externally because the rudder obstructed the exit of an internal tube; as a consequence, this tube could not be reloaded. Hence the Type VII was upgraded to the Type VIIB, with dual rudders to improve maneuverability and an internal stern tube with a reload.
Eight large 750-ton U-boats. The Type I had been found unsatisfactory: not only did it have the same single-rudder maneuverability problems as the Type VII, but it also had a very poor diving time. The boat's center of gravity was too far forward, so when surfaced the Type I had its propellers exposed when pitching; while submerged there were problems with depth-keeping and stability, as air bubbles in the fuel tanks wobbled back and forth. Hence a new Type IX design for a large U-boat was made.
Twenty-one of these twenty-three U-boats were commissioned before the start of World War II. In 1937, Britain announced it would expand its submarine fleet from 52,700 to 70,000 tons. Again, Raeder decided that the extra 7,785 tons would be divided between medium and large U-boats:
Seven medium 500-ton type VIIB U-boats
Five large 750-ton of the improved type IXB U-boats
During 1938, Hitler changed his attitude towards Britain. Whilst he still hoped that Britain would not interfere in his foreign policy, it became clear to him that he needed a Navy that could act as a deterrent. Hitler wanted to invoke the escape clause of the naval agreement and to have 70,000 tons of submarines. Between May 1938 and January 1939, Raeder ordered 52 more U-boats to be completed by 1942:
Twenty-one medium 500-ton type VIIB U-boats
Eleven large 750-ton type IXB U-boats
Three very large type XB minelaying U-Boats
Four huge type XI U-cruisers
In 1939, the ambitious Plan Z was launched. It called for the construction of a German Navy capable of challenging the Royal Navy. The plan included 249 U-boats for a total of 200,000 tons. But when World War II broke out only months after the plan was announced, only a handful of the planned U-boats ended up being built.
When World War II started, Germany had 56 U-boats commissioned, of which 46 were operational and only 22 had enough range for Atlantic operations, the other 24 were limited to operations on the North Sea.
Developments
Compared to their World War I equivalents, the German U-boat designs of World War II were greatly improved. By using a new steel alloy and by welding instead of riveting, they had stronger hulls and could dive deeper. The diving time was decreased to thirty seconds for a medium U-boat. The power of the diesel engines was increased, so U-boats had a greater surface speed. Range was increased by installing saddle fuel tanks, which were open to the sea at the bottom in order to balance pressure, with the diesel fuel floating freely on the seawater within the saddle tank. Also, a technique was developed for economical cruising in which only one of the two diesel engines would run, driving both propeller shafts through a coupling with the two electric motors.
Another vast improvement was the introduction of new torpedo types for the U-boats: the classic G7a torpedo propelled by compressed air had a much larger warhead than its WWI equivalent, but more important was the introduction of the electric G7e torpedo. This torpedo was slower and had less range but it left no telltale bubble wake and was, hence, ideally suited for daylight attacks. During WWI the Germans had briefly experimented with magnetic pistols and these were further developed now as the standard pistol for torpedoes. The classic contact pistol required a torpedo to detonate against the ship's hull, whilst a magnetic torpedo could detonate below a ship, resulting in a much more damaging explosion. Thus, it was hoped that one torpedo would suffice to break the back of a ship, and a U-boat could sink many more ships with its supply of torpedoes.
All U-boats were now also equipped with long- and short-wave transmitters, which enabled them to communicate with bases ashore and with fellow U-boats at sea. This allowed for better operational information and guidance.
U-Boat design and layout
From bow to stern, a typical U-boat design comprised these sections:
Bow torpedo room. The torpedo tubes were loaded but torpedoes needed maintenance so there was space to unload the tubes. Below the floor plates four spare torpedoes were stored. Two more spares were stored above the floorplates where they occupied much of the available space. The crew responsible for the torpedo maintenance and launching had their sleeping bunks in this compartment, along with the lowest ratings on board. As long as the two spare torpedoes above the floorplates were not launched, living conditions were very cramped here. Once launched, space for extra bunks became available but, anyway, there were not enough sleeping bunks for all the crew, and these were 'hot bunks' which switched occupants as they went on or off duty.
Crew quarters for officers and chief petty officers, with a battery compartment below decks. The captain had a curtained bunk which faced 2 small rooms: the radio room and the hydrophone room.
Control room. The main large periscope, for general use, was located here. The rudder, diving planes, ballast and trim tanks were operated here with valves and buttons. Below decks, there was space to retract the periscope and to store ammunition for the deck gun. A cylindrical tube with a ladder led to the conning tower.
Conning tower. This space protruded from the cylindrical hull but was still within the pressure hull. Here, the angle and depth settings for the torpedoes were calculated with an analogue data solver. During submerged attacks the captain was on station here, operating the second, smaller attack periscope, which generated less wake at the surface. Above the conning tower was the bridge.
Aft crew quarters for petty officers, with another battery compartment below decks. The galley and toilet were also located here.
Engine (diesel) room. The diesel engines needed air, which was supplied through a pipe outside the pressure hull from the bridge, as high as possible from sea level. There was no exhaust pipe; in order to reduce smoke the exhaust was mixed with sea water. The diesel engine could drive an air compressor in order to feed air tanks needed for venting the ballast tanks.
Electrical or motor room. The electric motors were driven by the batteries. Alternatively, when driven by the diesel engines, the motors acted as generators for recharging the batteries.
Aft torpedo room. Only bigger type IX U-boats had such a compartment. Smaller U-boats did not have aft torpedo tubes at all, or had a single torpedo tube installed in the motor room, with a spare torpedo stored below decks between the engines.
World War II (1939–1945)
Operations
During World War II, U-boat warfare was the major component of the Battle of the Atlantic, which began in 1939 and ended with Germany's surrender in 1945. British Prime Minister Winston Churchill later wrote "The only thing that really frightened me during the war was the U-boat peril." Cross-Atlantic trade in war supplies and food was extensive and critical for Britain's survival. The continuous action surrounding Allied shipping became known as the Battle of the Atlantic.
As convoying had been key to defeating German submarines during World War I, the British began organizing convoys at once in September 1939. The most common U-boat attack against convoys during the early years of the war was conducted on the surface and at night. During 1939 the Germans made a few attempts to attack convoys with their new "wolfpack" tactic, but these were not successful. The invasion of Norway in April 1940 temporarily halted all U-boat operations against merchant shipping. During the invasion many technical problems with the German torpedoes were exposed, and only in August 1940 could the campaign against convoys be revived. There were now fewer U-boats operational than at the beginning of the war, but thanks to the new bases in France and Norway, U-boats could reach their operating grounds far more easily. During the following months the U-boats put their "wolfpack" tactic against convoys into practice with spectacular results. This period, before the Allied forces developed truly effective antisubmarine warfare tactics, was referred to by German submariners as "Die Glückliche Zeit" or the First Happy Time.
In the beginning of 1941 British countermeasures began to take effect: in March 1941 the three leading U-boat aces were sunk during convoy battles. In May 1941 the British were able to break into German secret naval Enigma communications and could henceforth reroute convoys around U-boat concentrations. When American warships started to escort Atlantic convoys, the U-boats were restricted in their operations as Hitler wanted to avoid possible conflict with the US. The campaign against merchant shipping received further impediments when Hitler interfered on two occasions: first he insisted that a small force of U-boats be kept on station in the Arctic as a precaution against a possible Allied invasion in Norway and next he ordered a substantial force of U-boats to operate in the Mediterranean in order to support the Italians and Rommel's Afrika Korps.
When the US entered the war, the focus of U-boat operations shifted to the Atlantic coast of the United States and Canada, where no convoys were organized and anti-submarine measures were inadequate. There followed a Second Happy Time, when U-boats could extend their successful operations to the Gulf of Mexico and the Caribbean Sea. By mid-1942 an adequate defense was organized in these regions, and the U-boats returned to their original and crucial hunting grounds on the North Atlantic convoy lanes. The renewed offensive against convoys reached its climax in March 1943, when two-thirds of all ships sunk were sailing in convoys. But the Allies put effective countermeasures into effect, and only two months later, on 24 May, Dönitz had to stop the campaign due to heavy losses.
U-boats also operated off the southern African coasts and even as far east as the Arabian Sea and the Indian Ocean.
By the end of the war, almost 3,000 Allied ships (175 warships; 2,825 merchant ships) had been sunk by U-boat torpedoes. In total 1,131 U-boats entered service before the German surrender, of which 863 executed war patrols and 785 were lost. Of the 154 U-boats surrendered, 121 were scuttled in deep water off Lisahally, Northern Ireland, or Loch Ryan, Scotland, in late 1945 and early 1946 during Operation Deadlight.
Torpedo developments
The U-boats' main weapon was the torpedo, though mines and deck guns (while surfaced) were also used. Early German World War II torpedoes were fitted with one of two types of pistol triggers – impact, which detonated the warhead upon contact with a solid object, and magnetic, which detonated upon sensing a change in the magnetic field within a few meters. Initially, the depth-keeping equipment and the magnetic and contact exploders were notoriously unreliable. During the first eight months of the war, torpedoes often ran at an improper depth, detonated prematurely, or failed to explode altogether, sometimes bouncing harmlessly off the hull of the target ship. This was most evident in Operation Weserübung, the invasion of Norway, where various skilled U-boat commanders failed to inflict damage on British transports and warships because of faulty torpedoes. The faults were largely due to a lack of testing. The magnetic detonator was sensitive to mechanical oscillations during the torpedo run and to fluctuations in the Earth's magnetic field at high latitudes, so these early magnetic detonators were eventually phased out. The depth-keeping problem persisted; not until January 1942 was the last fault discovered, by accident: when ventilating the onboard torpedoes during maintenance, excess internal air pressure in the U-boat could offset the depth-setting mechanism in the torpedo's balance chamber.
In order to give U-boats better opportunities against well-defended convoys, several types of "pattern-running" torpedoes were developed. The FAT (Flächen-Absuch-Torpedo or Federapparat-Torpedo) and LUT (Lageunabhängiger Torpedo) were electric torpedoes which ran straight out to a preset distance, then traveled in either a circular or ladder-like pattern through the convoy lanes. This increased the probability of a hit. The torpedo had one setting to regulate the length of the pre-run, after which one of four other possible settings kicked in and made the torpedo zigzag towards either left or right on either short (1,200 m) or long (1,900 m) legs. When fired, the firing U-boat sent out a warning to the other U-boats in the vicinity so these could dive to avoid being hit by the randomly running torpedo. The FAT torpedo became available at the end of 1942 and was in regular use during the convoy battles of March 1943.
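As an illustration of the geometry described above, the following sketch generates waypoints for a simplified ladder run. This is not a historical fire-control algorithm: the 2,000 m pre-run and the 300 m advance between passes are hypothetical values chosen only for the example, while the 1,200 m leg length is the short-leg setting from the text.

```python
def fat_waypoints(prerun_m, leg_m, advance_m=300.0, n_legs=6):
    """Waypoints of a simplified FAT-style ladder run (illustrative only)."""
    pts = [(0.0, 0.0), (0.0, prerun_m)]   # straight pre-run, due "north"
    x, y, direction = 0.0, prerun_m, 1.0
    for _ in range(n_legs):
        x += direction * leg_m            # zigzag leg across the convoy lane
        pts.append((x, y))
        y += advance_m                    # hypothetical advance between passes
        pts.append((x, y))
        direction = -direction            # reverse for the return leg
    return pts

# Short-leg setting (1,200 m) after a hypothetical 2,000 m pre-run:
for px, py in fat_waypoints(2000.0, 1200.0):
    print(f"({px:7.1f}, {py:7.1f})")
```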
Germany also developed acoustic homing torpedoes. In February 1943 the first acoustic torpedo, the T4 "Falke", was tested on a small scale with moderate success, but this torpedo could only be used against large, slow ships. The acoustic torpedo ran straight to an arming distance of 1000 m and then turned toward the loudest noise detected. Its successor, the T5 "Zaunkönig", was designed to combat small and fast warships, and entered service in September 1943. The Allies countered acoustic torpedoes with noisemaker decoys such as Foxer, FXR, CAT, and Fanfare.
U-boat developments
In 1940 the Germans made successful tests with the V-80 experimental submarine featuring a new type of propulsion: on the surface it used classic diesel engines, but submerged it used a revolutionary hydrogen peroxide air-independent propulsion system designed by Hellmuth Walter. With this Walter turbine a U-boat could achieve underwater speeds of more than 20 knots, far above the 4-knot cruising and 6-knot maximum speed of battery-powered electric motors. Four more experimental Type XVIIA U-boats with Walter turbines were built and tested, but the Germans could not put this design to use in a big frontline U-boat. Unlike a classic U-boat, which could recharge its batteries with the diesel engines, once a Walter U-boat had consumed its hydrogen peroxide propellant it could no longer operate submerged. The Germans did not possess the resources and plants to produce sufficient hydrogen peroxide to operate a fleet of Walter submarines. Despite these limitations, 24 frontline Type XVIIB coastal submarines were ordered, but only three were built and none were operational before the end of the war.
The Walter U-boats had very large hulls in order to store the fuel for submerged propulsion. Once it became clear these Walter U-boats would not be operational in time, the Walter hull design was reused with a different approach: the space for the hydrogen peroxide tanks was used to store much larger batteries. With the greatly increased battery power, U-boats could reach much higher speeds and endurance when submerged. Based on the design of an Atlantic Walter U-boat, the Type XXI "Elektroboot" was designed to boost submerged performance. Smaller Type XXIII coastal boats were also taken into production. These were mass-produced, with prefabricated segments constructed at different sites and then assembled at the bigger shipyards.
After the German invasion of the Netherlands in 1940, the Germans captured some Dutch submarines equipped with a Schnorchel (snorkel), but saw no need for them until 1943. The Schnorchel was a retractable pipe that supplied air to the diesel engines while submerged at periscope depth, allowing the boats to cruise submerged on diesel engines and recharge their batteries. It was far from a perfect solution: problems occurred with the device's valve sticking shut or closing as it dunked in rough weather; since the system used the entire pressure hull as a buffer, the diesels would instantaneously suck huge volumes of air from the boat's compartments, and the crew often suffered painful ear injuries. Speed also had to be kept low, lest the device snap from stress. Whilst running submerged with the Schnorchel, the Gruppenhorchgerät (the boat's hydrophone array) was useless because of interference from the noisy diesel engines. But the Schnorchel allowed the old Type VII and IX U-boats to operate in waters which were previously denied to them. Finally, Allied radar eventually became sufficiently advanced that the Schnorchel mast could be detected.
Classes
Type I: the first design for a large 750-ton U-boat. Only two were built, as the design was not very successful.
Type II: small coastal submarines used mainly for training purposes. The latest subtype, the IID, had saddle tanks which gave it the range to operate in the Atlantic, which it did until 1941.
Type VII: the "workhorse" of the U-boats with 709 completed in World War II
Type IX: these long-range U-boats operated as far as the Indian Ocean with the Japanese (Monsun Gruppe), and the South Atlantic
Type X: long-range minelayers but mainly used to resupply other U-boats
Type XIV: unarmed U-boat used to resupply other U-boats; nicknamed the Milchkuh ("Milk Cow")
Type XVII: small experimental coastal submarines powered by experimental hydrogen peroxide propulsion systems, not put into service
Type XXI: known as the Elektroboot. The design was taken into mass production, but only two set out on a war patrol before the end of the war
Type XXIII: a smaller version of the XXI used for coastal operations; these operated on a small scale during 1945
Midget submarines, including Biber, Hai, Molch, and Seehund
Uncompleted U-boat projects
Countermeasures
Throughout the war, an arms race evolved between the Allies and the Kriegsmarine. Sonar (ASDIC in Britain) allowed Allied warships to detect submerged U-boats, but was not effective against a surfaced vessel; thus, early in the war, a U-boat at night or in bad weather was actually safer on the surface. Advancements in radar became deadly for the U-boat crews, especially once aircraft-mounted units were developed. As a countermeasure, U-boats were fitted with radar warning receivers, to give them ample time to dive before the enemy closed in, as well as more antiaircraft guns, but by early to mid-1943, the Allies switched to centimetric radar (unknown to Germany), which rendered the radar detectors ineffective. U-boat radar systems were also developed, but many captains chose not to use them for fear of broadcasting their position to the enemy. Against ASDIC the Germans developed Bold, a chemical bubble-making decoy.
Advances in convoy tactics, high-frequency direction finding (referred to as "Huff-Duff"), radar, sonar, depth charges, anti-submarine weapons such as "Hedgehog" and "FIDO", the intermittent cracking of the German naval Enigma code, the introduction of the Leigh Light, long-range patrol aircraft, escort carriers and the enormous US shipbuilding capacity all turned the tide against the U-boats. At the same time, the Allies targeted the U-boat shipyards and their bases with strategic bombing. In May 1941, code books, an Enigma machine and its settings were captured from U-110, which was boarded before she sank. A team including Alan Turing used special-purpose "Bombes" and early computers to break new German codes as they were introduced. The speedy decoding of messages allowed convoys to be rerouted around U-boat patrol lines. In February 1942 the naval Enigma machines were altered, and this advantage was lost until the new code was broken in October 1942, when U-559 was boarded as she was sinking and crucial code books were salvaged.
Post–World War II and Cold War (after 1945)
From 1955, West Germany was allowed to have a small navy, the Bundesmarine. Initially, two sunken Type XXIIIs and a Type XXI were raised and repaired. In the 1960s, the Federal Republic of Germany (West Germany) restarted building submarines. Because West Germany was initially restricted to a 450-tonne displacement limit, the Bundesmarine focused on small coastal submarines to protect against the Soviet threat in the Baltic Sea. The Germans sought to use advanced technologies to offset the small displacement, such as amagnetic steel to protect against naval mines and magnetic anomaly detectors.
The initial Type 201 was a failure because of hull cracking; the subsequent Type 205, first commissioned in 1967, was a success, so 12 were built for the German navy. To continue the U-boat tradition, the new boats received the classic "U" designation starting with the U-1.
With the Danish government's purchase of two Type 205 boats, the West German government realized the potential of the submarine as an export and developed a customized version, the Type 207. Small and agile submarines continued to be built during the Cold War to operate in the shallow Baltic Sea, resulting in the Type 206. Three of the improved Type 206 boats were later sold to the Israeli Navy, becoming the Type 540. The German Type 209 diesel-electric submarine was the most popular export-sales submarine in the world from the late 1960s into the first years of the 21st century. With a larger displacement of 1,000–1,500 tonnes, the class was very customizable and has seen service with 14 navies, with 51 examples built as of 2006. Germany continued to reap successes with derivations of the successful Type 209, such as the Type 800 sold to Israel and the TR-1700 sold to Argentina.
Germany continued to succeed as an exporter of submarines with the Klasse 210 sold to Norway, considered among the most silent and maneuverable submarines in the world, further demonstrating German design capability in the export market.
Germany has brought the U-boat name into the 21st century with the new Type 212, which features an air-independent propulsion system using hydrogen fuel cells. This system is safer than previous closed-cycle diesel engines and steam turbines, cheaper than a nuclear reactor, and quieter than either. While the Type 212 is also being purchased by Italy and Norway, the Type 214 was designed as the follow-on export model and has been sold to Greece, South Korea, and Turkey; the derived Type U 209PN has been sold to Portugal.
In recent years Germany introduced new models such as the Type 216 and the Type 218, the latter being sold to Singapore.
In 2016, Germany commissioned its newest U-boat, the U-36, a Type 212.
| Technology | Naval warfare | null |
31911 | https://en.wikipedia.org/wiki/Ultrafilter | Ultrafilter | In the mathematical field of order theory, an ultrafilter on a given partially ordered set (or "poset") $P$ is a certain subset of $P$, namely a maximal filter on $P$; that is, a proper filter on $P$ that cannot be enlarged to a bigger proper filter on $P$.
If $X$ is an arbitrary set, its power set $\mathcal{P}(X)$, ordered by set inclusion, is always a Boolean algebra and hence a poset, and ultrafilters on $\mathcal{P}(X)$ are usually called ultrafilters on $X$. An ultrafilter on a set $X$ may be considered as a finitely additive 0-1-valued measure on $\mathcal{P}(X)$. In this view, every subset of $X$ is either considered "almost everything" (has measure 1) or "almost nothing" (has measure 0), depending on whether it belongs to the given ultrafilter or not.
Ultrafilters have many applications in set theory, model theory, topology and combinatorics.
Ultrafilters on partial orders
In order theory, an ultrafilter is a subset of a partially ordered set that is maximal among all proper filters. This implies that any filter that properly contains an ultrafilter has to be equal to the whole poset.
Formally, if $P$ is a set, partially ordered by $\leq$, then
a subset $F$ of $P$ is called a filter on $P$ if
$F$ is nonempty,
for every $x, y \in F$, there exists some element $z \in F$ such that $z \leq x$ and $z \leq y$, and
for every $x \in F$ and $y \in P$, $x \leq y$ implies that $y$ is in $F$ too;
a proper subset $U$ of $P$ is called an ultrafilter on $P$ if
$U$ is a filter on $P$, and
there is no proper filter $F$ on $P$ that properly extends $U$ (that is, such that $U$ is a proper subset of $F$).
Every ultrafilter falls into exactly one of two categories: principal or free. A principal (or fixed, or trivial) ultrafilter is a filter containing a least element. Consequently, each principal ultrafilter is of the form $\{x \in P : a \leq x\}$ for some element $a$ of the given poset. In this case $a$ is called the principal element of the ultrafilter. Any ultrafilter that is not principal is called a free (or non-principal) ultrafilter. For an arbitrary element $a$, the set $\{x \in P : a \leq x\}$ is a filter, called the principal filter at $a$; it is a principal ultrafilter only if it is maximal.
For ultrafilters on a powerset $\mathcal{P}(S)$, a principal ultrafilter consists of all subsets of $S$ that contain a given element $s$. Each ultrafilter on $\mathcal{P}(S)$ that is also a principal filter is of this form. Therefore, an ultrafilter $U$ on $\mathcal{P}(S)$ is principal if and only if it contains a finite set. If $S$ is infinite, an ultrafilter $U$ on $\mathcal{P}(S)$ is hence non-principal if and only if it contains the Fréchet filter of cofinite subsets of $S$. If $S$ is finite, every ultrafilter is principal.
If $S$ is infinite, then the Fréchet filter is not an ultrafilter on the power set of $S$, but it is an ultrafilter on the finite–cofinite algebra of $S$.
Every filter on a Boolean algebra (or more generally, any subset with the finite intersection property) is contained in an ultrafilter (see ultrafilter lemma) and free ultrafilters therefore exist, but the proofs involve the axiom of choice (AC) in the form of Zorn's lemma. On the other hand, the statement that every filter is contained in an ultrafilter does not imply AC. Indeed, it is equivalent to the Boolean prime ideal theorem (BPIT), a well-known intermediate point between the axioms of Zermelo–Fraenkel set theory (ZF) and the ZF theory augmented by the axiom of choice (ZFC). In general, proofs involving the axiom of choice do not produce explicit examples of free ultrafilters, though it is possible to find explicit examples in some models of ZFC; for example, Gödel showed that this can be done in the constructible universe where one can write down an explicit global choice function. In ZF without the axiom of choice, it is possible that every ultrafilter is principal.
Ultrafilter on a Boolean algebra
An important special case of the concept occurs if the considered poset is a Boolean algebra. In this case, ultrafilters are characterized by containing, for each element $a$ of the Boolean algebra, exactly one of the elements $a$ and $\lnot a$ (the latter being the Boolean complement of $a$):
If $B$ is a Boolean algebra and $F$ is a proper filter on $B$, then the following statements are equivalent:
$F$ is an ultrafilter on $B$,
$F$ is a prime filter on $B$,
for each $a \in B$, either $a \in F$ or $\lnot a \in F$.
A proof that 1. and 2. are equivalent is also given in (Burris, Sankappanavar, 2012, Corollary 3.13, p.133).
Moreover, ultrafilters on a Boolean algebra can be related to maximal ideals and homomorphisms to the 2-element Boolean algebra {true, false} (also known as 2-valued morphisms) as follows:
Given a homomorphism of a Boolean algebra onto {true, false}, the inverse image of "true" is an ultrafilter, and the inverse image of "false" is a maximal ideal.
Given a maximal ideal of a Boolean algebra, its complement is an ultrafilter, and there is a unique homomorphism onto {true, false} taking the maximal ideal to "false".
Given an ultrafilter on a Boolean algebra, its complement is a maximal ideal, and there is a unique homomorphism onto {true, false} taking the ultrafilter to "true".
Ultrafilter on the power set of a set
Given an arbitrary set $X$, its power set $\mathcal{P}(X)$, ordered by set inclusion, is always a Boolean algebra; hence the results of the above section apply. An (ultra)filter on $\mathcal{P}(X)$ is often called just an "(ultra)filter on $X$". Given an arbitrary set $X$, an ultrafilter on $X$ is a set $U$ consisting of subsets of $X$ such that:
The empty set is not an element of $U$.
If $A$ is an element of $U$, then so is every superset $B$ with $A \subseteq B \subseteq X$.
If $A$ and $B$ are elements of $U$, then so is the intersection $A \cap B$.
If $A$ is a subset of $X$, then either $A$ or its complement $X \setminus A$ is an element of $U$.
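These four axioms can be checked mechanically on a small finite set. The following sketch (the set {0, 1, 2} and all helper names are mine, purely for illustration) brute-forces every family of subsets and confirms the earlier claim that on a finite set the only ultrafilters are the principal ones $U_x = \{A : x \in A\}$:

```python
from itertools import chain, combinations

def powerset(X):
    """All subsets of X, as frozensets."""
    xs = list(X)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def is_ultrafilter(U, X):
    """Test the four axioms above for a family U of subsets of X."""
    U, X = {frozenset(A) for A in U}, frozenset(X)
    subsets = powerset(X)
    if frozenset() in U:
        return False                                     # axiom 1: no empty set
    if any(A <= B and B not in U for A in U for B in subsets):
        return False                                     # axiom 2: supersets
    if any(A & B not in U for A in U for B in U):
        return False                                     # axiom 3: intersections
    return all(A in U or (X - A) in U for A in subsets)  # axiom 4: maximality

X = {0, 1, 2}
subsets = powerset(X)
families = chain.from_iterable(combinations(subsets, r)
                               for r in range(len(subsets) + 1))
ultras = [set(F) for F in families if is_ultrafilter(F, X)]

# Exactly the three principal ultrafilters U_x = {A : x in A} survive:
assert all({A for A in subsets if x in A} in ultras for x in X)
print(len(ultras))   # 3
```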
Equivalently, a family $U$ of subsets of $X$ is an ultrafilter if and only if for any finite collection of subsets of $X$, there is some $x \in X$ such that $U$ agrees with the principal ultrafilter $U_x = \{A \subseteq X : x \in A\}$ seeded by $x$ on that collection. In other words, an ultrafilter may be seen as a family of sets which "locally" resembles a principal ultrafilter.
An equivalent form of a given ultrafilter $U$ is a 2-valued morphism: a function $m$ on $\mathcal{P}(X)$ defined as $m(A) = 1$ if $A$ is an element of $U$ and $m(A) = 0$ otherwise. Then $m$ is finitely additive, and hence a content on $\mathcal{P}(X)$, and every property of elements of $X$ is either true almost everywhere or false almost everywhere. However, $m$ is usually not countably additive, and hence does not define a measure in the usual sense.
For a filter $F$ that is not an ultrafilter, one can define $m(A) = 1$ if $A \in F$ and $m(A) = 0$ if $X \setminus A \in F$, leaving $m$ undefined elsewhere.
Applications
Ultrafilters on power sets are useful in topology, especially in relation to compact Hausdorff spaces, and in model theory in the construction of ultraproducts and ultrapowers. Every ultrafilter on a compact Hausdorff space converges to exactly one point. Likewise, ultrafilters on Boolean algebras play a central role in Stone's representation theorem. In set theory ultrafilters are used to show that the axiom of constructibility is incompatible with the existence of a measurable cardinal $\kappa$. This is proved by taking the ultrapower of the set-theoretical universe modulo a $\kappa$-complete, non-principal ultrafilter.
The set of all ultrafilters of a poset $P$ can be topologized in a natural way that is in fact closely related to the above-mentioned representation theorem. For any element $a$ of $P$, let $D_a = \{U : U \text{ is an ultrafilter on } P \text{ and } a \in U\}$. This is most useful when $P$ is again a Boolean algebra, since in this situation the set of all $D_a$ is a base for a compact Hausdorff topology on the set of ultrafilters. Especially, when considering the ultrafilters on a powerset $\mathcal{P}(S)$, the resulting topological space is the Stone–Čech compactification of a discrete space of cardinality $|S|$.
The ultraproduct construction in model theory uses ultrafilters to produce a new model starting from a family of models indexed by some index set $I$; for example, the compactness theorem can be proved this way.
In the special case of ultrapowers, one gets elementary extensions of structures. For example, in nonstandard analysis, the hyperreal numbers can be constructed as an ultraproduct of the real numbers, extending the domain of discourse from real numbers to sequences of real numbers. This sequence space is regarded as a superset of the reals by identifying each real with the corresponding constant sequence. To extend the familiar functions and relations (e.g., + and <) from the reals to the hyperreals, the natural idea is to define them pointwise. But this would lose important logical properties of the reals; for example, pointwise < is not a total ordering. So instead the functions and relations are defined "pointwise modulo" $U$, where $U$ is an ultrafilter on the index set of the sequences; by Łoś' theorem, this preserves all properties of the reals that can be stated in first-order logic. If $U$ is nonprincipal, then the extension thereby obtained is nontrivial.
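To make "pointwise modulo $U$" concrete, the standard definitions of equality and order on classes of sequences read as follows (a sketch in the usual notation, with $[f]$ denoting the class of a sequence $f$ and $U$ an ultrafilter on $\mathbb{N}$):

```latex
\[
  [f] = [g] \iff \{\, n \in \mathbb{N} : f(n) = g(n) \,\} \in U,
\qquad
  [f] < [g] \iff \{\, n \in \mathbb{N} : f(n) < g(n) \,\} \in U.
\]
% Totality of < follows from the ultrafilter property: the three index
% sets where f<g, f=g, f>g partition N, and exactly one belongs to U.
```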
In geometric group theory, non-principal ultrafilters are used to define the asymptotic cone of a group. This construction yields a rigorous way to consider the group as seen from infinity, that is, the large-scale geometry of the group. Asymptotic cones are particular examples of ultralimits of metric spaces.
Gödel's ontological proof of God's existence uses as an axiom that the set of all "positive properties" is an ultrafilter.
In social choice theory, non-principal ultrafilters are used to define a rule (called a social welfare function) for aggregating the preferences of infinitely many individuals. Contrary to Arrow's impossibility theorem for finitely many individuals, such a rule satisfies the conditions (properties) that Arrow proposes (for example, Kirman and Sondermann, 1972). Mihara (1997, 1999) shows, however, such rules are practically of limited interest to social scientists, since they are non-algorithmic or non-computable.
| Mathematics | Order theory | null |
31990 | https://en.wikipedia.org/wiki/Ultraviolet | Ultraviolet | Ultraviolet radiation, also known as simply UV, is electromagnetic radiation of wavelengths of 10–400 nanometers, shorter than that of visible light, but longer than X-rays. UV radiation is present in sunlight, and constitutes about 10% of the total electromagnetic radiation output from the Sun. It is also produced by electric arcs, Cherenkov radiation, and specialized lights, such as mercury-vapor lamps, tanning lamps, and black lights.
The photons of ultraviolet have greater energy than those of visible light, from about 3.1 to 12 electron volts, around the minimum energy required to ionize atoms. Although long-wavelength ultraviolet is not considered an ionizing radiation because its photons lack sufficient energy, it can induce chemical reactions and cause many substances to glow or fluoresce. Many practical applications, including chemical and biological effects, are derived from the way that UV radiation can interact with organic molecules. These interactions can involve absorption or adjusting energy states in molecules, but do not necessarily involve heating. Short-wave ultraviolet light is ionizing radiation. Consequently, short-wave UV damages DNA and sterilizes surfaces with which it comes into contact.
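A quick sketch of the conversion behind these figures, using $E = hc/\lambda$ with $hc \approx 1239.84$ eV·nm (the wavelength samples are mine):

```python
# Photon energy across the UV band via E = h*c / lambda, in electron
# volts. The 3.1-12 eV range in the text corresponds to wavelengths
# from 400 nm down to roughly 100 nm.
HC_EV_NM = 1239.84            # h*c in eV*nm

def photon_energy_ev(wavelength_nm):
    return HC_EV_NM / wavelength_nm

for nm in (400, 300, 200, 100, 10):
    print(f"{nm:3d} nm -> {photon_energy_ev(nm):6.2f} eV")
# 400 nm -> 3.10 eV, 100 nm -> 12.40 eV, 10 nm -> 123.98 eV
```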
For humans, suntan and sunburn are familiar effects of exposure of the skin to UV, along with an increased risk of skin cancer. The amount of UV radiation produced by the Sun means that the Earth would not be able to sustain life on dry land if most of that light were not filtered out by the atmosphere. More energetic, shorter-wavelength "extreme" UV below 121 nm ionizes air so strongly that it is absorbed before it reaches the ground. However, UV (specifically, UVB) is also responsible for the formation of vitamin D in most land vertebrates, including humans. The UV spectrum, thus, has effects both beneficial and detrimental to life.
The lower wavelength limit of the visible spectrum is conventionally taken as 400 nm. Although ultraviolet rays are not generally visible to humans, 400 nm is not a sharp cutoff, with shorter and shorter wavelengths becoming less and less visible in this range. Insects, birds, and some mammals can see near-UV (NUV), i.e., somewhat shorter wavelengths than what humans can see.
Visibility
Ultraviolet rays are usually invisible to most humans. The lens of the human eye blocks most radiation in the wavelength range of 300–400 nm; shorter wavelengths are blocked by the cornea. Humans also lack color receptor adaptations for ultraviolet rays. Nevertheless, the photoreceptors of the retina are sensitive to near-UV, and people lacking a lens (a condition known as aphakia) perceive near-UV as whitish-blue or whitish-violet. Under some conditions, children and young adults can see ultraviolet down to wavelengths of around 310 nm. Near-UV radiation is visible to insects, some mammals, and some birds. Birds have a fourth color receptor for ultraviolet rays; this, coupled with eye structures that transmit more UV, gives smaller birds "true" UV vision.
History and discovery
"Ultraviolet" means "beyond violet" (from Latin ultra, "beyond"), violet being the color of the highest frequencies of visible light. Ultraviolet has a higher frequency (thus a shorter wavelength) than violet light.
UV radiation was discovered in February 1801 when the German physicist Johann Wilhelm Ritter observed that invisible rays just beyond the violet end of the visible spectrum darkened silver chloride-soaked paper more quickly than violet light itself. He announced the discovery in a very brief letter to the Annalen der Physik and later called them "(de-)oxidizing rays" to emphasize their chemical reactivity and to distinguish them from "heat rays", discovered the previous year at the other end of the visible spectrum. The simpler term "chemical rays" was adopted soon afterwards and remained popular throughout the 19th century, although some held that this radiation was entirely different from light (notably John William Draper, who named them "tithonic rays"). The terms "chemical rays" and "heat rays" were eventually dropped in favor of ultraviolet and infrared radiation, respectively. In 1878, the sterilizing effect of short-wavelength light on bacteria was discovered. By 1903, the most effective wavelengths were known to be around 250 nm. In 1960, the effect of ultraviolet radiation on DNA was established.
Ultraviolet radiation with wavelengths below 200 nm, named "vacuum ultraviolet" because it is strongly absorbed by the oxygen in air, was discovered in 1893 by the German physicist Victor Schumann.
Subtypes
The electromagnetic spectrum of ultraviolet radiation (UVR), defined most broadly as 10–400 nanometers, can be subdivided into a number of ranges recommended by the ISO standard ISO 21348, including UVA (315–400 nm), UVB (280–315 nm), UVC (100–280 nm), near ultraviolet (NUV), middle ultraviolet (MUV), far ultraviolet (FUV), vacuum ultraviolet (VUV, below 200 nm), and extreme ultraviolet (EUV, 10–121 nm).
Several solid-state and vacuum devices have been explored for use in different parts of the UV spectrum. Many approaches seek to adapt visible light-sensing devices, but these can suffer from unwanted response to visible light and various instabilities. Ultraviolet can be detected by suitable photodiodes and photocathodes, which can be tailored to be sensitive to different parts of the UV spectrum. Sensitive UV photomultipliers are available. Spectrometers and radiometers are made for measurement of UV radiation. Silicon detectors are used across the spectrum.
Vacuum UV, or VUV, wavelengths (shorter than 200 nm) are strongly absorbed by molecular oxygen in the air, though the longer wavelengths around 150–200 nm can propagate through nitrogen. Scientific instruments can, therefore, use this spectral range by operating in an oxygen-free atmosphere (pure nitrogen, or argon for shorter wavelengths), without the need for costly vacuum chambers. Significant examples include 193-nm photolithography equipment (for semiconductor manufacturing) and circular dichroism spectrometers.
Technology for VUV instrumentation was largely driven by solar astronomy for many decades. While optics can be used to remove unwanted visible light that contaminates the VUV, in general, detectors can be limited by their response to non-VUV radiation, and the development of solar-blind devices has been an important area of research. Wide-gap solid-state devices or vacuum devices with high-cutoff photocathodes can be attractive compared to silicon diodes.
Extreme UV (EUV or sometimes XUV) is characterized by a transition in the physics of interaction with matter. Wavelengths longer than about 30 nm interact mainly with the outer valence electrons of atoms, while wavelengths shorter than that interact mainly with inner-shell electrons and nuclei. The long end of the EUV spectrum is set by a prominent He+ spectral line at 30.4 nm. EUV is strongly absorbed by most known materials, but synthesizing multilayer optics that reflect up to about 50% of EUV radiation at normal incidence is possible. This technology was pioneered by the NIXT and MSSTA sounding rockets in the 1990s, and it has been used to make telescopes for solar imaging. | Physical sciences | Electrodynamics | null |
32055 | https://en.wikipedia.org/wiki/Urology | Urology | Urology (from Greek οὖρον ouron "urine" and -logia "study of"), also known as genitourinary surgery, is the branch of medicine that focuses on surgical and medical diseases of the urinary system and the reproductive organs. Organs under the domain of urology include the kidneys, adrenal glands, ureters, urinary bladder, urethra, and the male reproductive organs (testes, epididymides, vasa deferentia, seminal vesicles, prostate, and penis).
The urinary and reproductive tracts are closely linked, and disorders of one often affect the other. Thus a major spectrum of the conditions managed in urology exists under the domain of genitourinary disorders. Urology combines the management of medical (i.e., non-surgical) conditions, such as urinary-tract infections and benign prostatic hyperplasia, with the management of surgical conditions such as bladder or prostate cancer, kidney stones, congenital abnormalities, traumatic injury, and stress incontinence.
Urological techniques include minimally invasive robotic and laparoscopic surgery, laser-assisted surgeries, and other scope-guided procedures. Urologists receive training in open and minimally invasive surgical techniques, employing real-time ultrasound guidance, fiber-optic endoscopic equipment, and various lasers in the treatment of multiple benign and malignant conditions. Urology is closely related to (and urologists often collaborate with the practitioners of) oncology, nephrology, gynaecology, andrology, pediatric surgery, colorectal surgery, gastroenterology, and endocrinology.
Urology is one of the most competitive and highly sought surgical specialties for physicians, with new urologists comprising less than 1.5% of United States medical-school graduates each year.
Urologists are physicians who have specialized in the field after completing their general degree in medicine. Upon successful completion of a residency program, many urologists choose to undergo further advanced training in a subspecialty area of expertise through a fellowship lasting an additional 12 to 36 months. Subspecialties may include: urologic surgery, urologic oncology and urologic oncological surgery, endourology and endourologic surgery, urogynecology and urogynecologic surgery, reconstructive urologic surgery (a form of reconstructive surgery), minimally-invasive urologic surgery, pediatric urology and pediatric urologic surgery (including adolescent urology, the treatment of premature or delayed puberty, and the treatment of congenital urological syndromes, malformations, and deformations), transplant urology (the field of transplant medicine and surgery concerned with transplantation of organs such as the kidneys, bladder tissue, ureters, and, recently, penises), voiding dysfunction, paruresis, neurourology, and androurology and sexual medicine. Additionally, some urologists supplement their fellowships with a master's degree (2–3 years) or with a Ph.D. (4–6 years) in related topics to prepare them for academic as well as focused clinical employment.
Training
United States
As of 2022, there were 146 residency programs offering 356 categorical positions. Urology is one of the early match programs, with results given to applicants by early February (six weeks before the NRMP match). Applications are accepted starting September 1, with some programs accepting applications until early January.
It is a relatively competitive specialty to match into, with only 65.6% of US seniors matching in the 2022 match cycle. The number of positions has grown from 278 in 2012 to 356 in 2022. Matching is significantly more difficult for international medical graduates (IMGs) and for students who take a year or more off before residency; in 2012, their match rates were 27% and 55%, respectively.
The medical school environment may also be a factor. A 2012 study analyzing match rates from 2005 to 2009 found that 20 schools sent more than 15 students into urology (one standard deviation above the median), with Northwestern University sending 44 students over those five years.
After urology residency, there are seven subspecialties recognized by the AUA (American Urological Association):
Oncology
Calculi
Female Urology
Infertility
Pediatrics
Transplant (renal)
Neurourology
Australia
Training is completed through the Royal Australasian College of Surgeons (RACS). The program requires six years of full-time training (for those who commenced prior to 2016), or five years for those who commenced after 2016. The program is accredited by the Australian Medical Council.
Nepal
In Nepal, the formal urologist degree awarded is the MCh (Magister Chirurgiae). This is a three-year post-master's course that includes a thesis and a mandatory publication. The degree is awarded after completing the MBBS (four and a half years plus a one-year rotatory internship) and an MS (Master of Surgery) in general surgery (a three-year course). To date, two universities, Tribhuvan University and Kathmandu University, as well as two autonomous institutes, the BP Koirala Institute of Health Sciences and the National Academy of Medical Sciences (Bir Hospital), run the MCh urology programme. The degree is considered equivalent to a clinical PhD, is called "Chikitsa Bidhyabaridhi" by Tribhuvan University (a government university), and is regarded as the highest degree among the surgical disciplines.
Ethiopia
In Ethiopia, there were only five qualified urologists in 2001, all trained abroad in countries such as India, Tanzania, and Hungary. Before then, all urology cases were managed by general surgeons. The only urological unit in the country was at Tikur Anbessa Tertiary Hospital, where the services provided included ESWL and endo-urology. A urology training program for general surgeons, with a three-year curriculum, was started in 2009; by 2019, six urologists had graduated from it. The first residency program began accepting general practitioners in 2010 for a five-year program: the first two years were training in general surgery, and the next three years were a dedicated urology training program following the same three-year curriculum used for general surgeons. It started with two residents, who graduated in 2015 with a certificate of specialty in urology, and by 2019, seventeen urologists had graduated from this five-year residency program. From the start of these programs in 2009 up to 2019, a total of 23 urologists were trained at Tikur Anbessa Tertiary Hospital. As of 2020, there were 26 trainees in the programme, and all urologists who had graduated from the hospital were working in different parts of the country.
Subdisciplines
As a medical discipline that involves the care of many organs and physiological systems, urology can be broken down into several subdisciplines. At many larger academic centers and university hospitals that excel in patient care and clinical research, urologists often specialize in a particular subdiscipline.
Endourology
Endourology is the branch of urology that deals with the closed manipulation of the urinary tract. It has lately grown to include all minimally invasive urologic surgical procedures. As opposed to open surgery, endourology is performed using small cameras and instruments inserted into the urinary tract. Transurethral surgery has been the cornerstone of endourology. Most of the urinary tract can be reached via the urethra, enabling prostate surgery, surgery of tumors of the urothelium, stone surgery, and simple urethral and ureteral procedures. Recently, the addition of laparoscopy and robotics has further subdivided this branch of urology.
Laparoscopy
Laparoscopy is a rapidly evolving branch of urology and has replaced some open surgical procedures. Robot-assisted surgery of the prostate, kidney, and ureter has been expanding this field. Today, many prostatectomies in the United States are carried out with so-called robotic assistance. This has created controversy, however, as robotics greatly increase the cost of surgery and the benefit for the patient may or may not be proportional to the extra cost. Moreover, the current (2011) market for robotic equipment is a de facto monopoly of one publicly held corporation, which further fuels the cost-effectiveness controversy.
Urologic oncology
Urologic oncology concerns the surgical treatment of malignant genitourinary diseases such as cancer of the prostate, adrenal glands, bladder, kidneys, ureters, testicles, and penis, as well as the skin and subcutaneous tissue and muscle and fascia of those areas (that particular subspecialty overlaps with dermatological oncology and related areas of oncology). The treatment of genitourinary cancer is managed by either a urologist or an oncologist, depending on the treatment type (surgical or medical). Most urologic oncologists in Western countries use minimally invasive techniques (laparoscopy or endourology, robotic-assisted surgery) to manage urologic cancers amenable to surgical management.
Neurourology
Neurourology concerns nervous system control of the genitourinary system, and of conditions causing abnormal urination. Neurological diseases and disorders such as a stroke, multiple sclerosis, Parkinson's disease, and spinal cord injury can disrupt the lower urinary tract and result in conditions such as urinary incontinence, detrusor overactivity, urinary retention, and detrusor sphincter dyssynergia. Urodynamic studies play an important diagnostic role in neurourology. Therapy for nervous system disorders includes clean intermittent self-catheterization of the bladder, anticholinergic drugs, injection of Botulinum toxin into the bladder wall and advanced and less commonly used therapies such as sacral neuromodulation.
Less marked neurological abnormalities can cause urological disorders as well—for example, abnormalities of the sensory nervous system are thought by many researchers to play a role in disorders of painful or frequent urination (e.g. painful bladder syndrome also known as interstitial cystitis).
Pediatric urology
Pediatric urology concerns urologic disorders in children. Such disorders include cryptorchidism (undescended testes), congenital abnormalities of the genitourinary tract, enuresis, underdeveloped genitalia (due to delayed growth or delayed puberty, often an endocrinological problem), and vesicoureteral reflux.
Andrology
Andrology is the medical specialty that deals with male health, particularly relating to the problems of the male reproductive system and urological problems that are unique to men such as prostate cancer, male fertility problems, and surgery of the male reproductive system. It is the counterpart to gynaecology, which deals with medical issues that are specific to female health, especially reproductive and urologic health.
Reconstructive urology
Reconstructive urology is a highly specialized field of male urology that restores both structure and function to the genitourinary tract. Prostate procedures, full or partial hysterectomies, trauma (auto accidents, gunshot wounds, industrial accidents, straddle injuries, etc.), disease, obstructions, blockages (e.g., urethral strictures), and occasionally, childbirth, can necessitate reconstructive surgery. The urinary bladder, ureters (the tubes that lead from the kidneys to the urinary bladder) and genitalia are other examples of reconstructive urology.
Female urology
Female urology is a branch of urology dealing with overactive bladder, pelvic organ prolapse, and urinary incontinence. Many of these physicians also practice neurourology and reconstructive urology as mentioned above. Female urologists (many of whom are men) complete a 1–3-year fellowship after completion of a 5–6-year urology residency. Thorough knowledge of the female pelvic floor together with intimate understanding of the physiology and pathology of voiding are necessary to diagnose and treat these disorders. Depending on the cause of the individual problem, a medical or surgical treatment can be the solution. Their field of practice heavily overlaps with that of urogynecologists, physicians in a sub-discipline of gynecology, who have done a three-year fellowship after a four-year OBGYN residency.
Journals and organizations
There are a number of peer-reviewed journals and publications about urology, including The Journal of Urology, European Urology, the African Journal of Urology, British Journal of Urology International, BMC Urology, Indian Journal of Urology, Nature Reviews Urology, and Urology.
There are national organizations such as the American Urological Association, the American Association of Clinical Urologists, European Association of Urology, the Large Urology Group Practice Association (LUGPA), and The Society for Basic Urologic Research. Urology is also included under the auspices of the International Continence Society.
Teaching organizations include the European Board of Urology, as well as the Vattikuti Urology Institute in Detroit, which also hosts an annual International Robotic Urology Symposium devoted to new technologies. The American non-profit IVUMed teaches urology in developing countries.
List of urological topics
Benign prostatic hyperplasia
Bladder cancer
Bladder stones
Cystitis
Development of the urinary and reproductive organs
Epididymitis
Erectile dysfunction
Hard flaccid syndrome
Interstitial cystitis
Kidney cancer
Kidney stone
Kidney transplant
Peyronie's disease
Postorgasmic illness syndrome
Prostate cancer
Prostatitis
Replantation
Retrograde pyelogram
Retrograde ureteral
Testicular cancer
Vasectomy
Vasectomy reversal
| Biology and health sciences | Fields of medicine | null |
32073 | https://en.wikipedia.org/wiki/USB | USB | Universal Serial Bus (USB) is an industry standard, developed by USB Implementers Forum (USB-IF), that allows data exchange and delivery of power between many types of electronics. It specifies its architecture, in particular its physical interface, and communication protocols for data transfer and power delivery to and from hosts, such as personal computers, to and from peripheral devices, e.g. displays, keyboards, and mass storage devices, and to and from intermediate hubs, which multiply the number of a host's ports.
Introduced in 1996, USB was originally designed to standardize the connection of peripherals to computers, replacing various interfaces such as serial ports, parallel ports, game ports, and ADB ports. Early versions of USB became commonplace on a wide range of devices, such as keyboards, mice, cameras, printers, scanners, flash drives, smartphones, game consoles, and power banks. USB has since evolved into a standard to replace virtually all common ports on computers, mobile devices, peripherals, power supplies, and manifold other small electronics.
In the current standard, the USB-C connector replaces the many various connectors for power (up to 240 W), displays (e.g. DisplayPort, HDMI), and many other uses, as well as all previous USB connectors.
USB consists of four generations of specifications: USB 1.x, USB 2.0, USB 3.x, and USB4. USB4 enhances the data transfer and power delivery functionality of its predecessors. In particular, USB4 supports tunneling of the Thunderbolt 3 protocols, namely PCI Express (PCIe, a load/store interface) and DisplayPort (a display interface), and also adds host-to-host interfaces.
Each specification sub-version supports different signaling rates from 1.5 and 12 Mbit/s half-duplex in USB 1.0/1.1 to 80 Gbit/s full-duplex in USB4 2.0. USB also provides power to peripheral devices; the latest versions of the standard extend the power delivery limits for battery charging and devices requiring up to 240 watts as defined in USB Power Delivery (USB-PD) Rev. V3.1. Over the years, USB(-PD) has been adopted as the standard power supply and charging format for many mobile devices, such as mobile phones, reducing the need for proprietary chargers.
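As a worked illustration of these limits, the 100 W ceiling of earlier USB-PD revisions corresponds to 20 V at 5 A, and the 240 W ceiling of Rev. 3.1 corresponds to its Extended Power Range maximum of 48 V at 5 A; a minimal Python check:

    # USB-PD power ceilings as voltage times current.
    print(20 * 5)  # 100 W, the pre-3.1 USB-PD maximum (20 V at 5 A)
    print(48 * 5)  # 240 W, the USB-PD Rev. 3.1 maximum (48 V at 5 A)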
Overview
USB was designed to standardize the connection of peripherals to personal computers, both to exchange data and to supply electric power. It has largely replaced interfaces such as serial ports and parallel ports and has become commonplace on various devices. Peripherals connected via USB include computer keyboards and mice, video cameras, printers, portable media players, mobile (portable) digital telephones, disk drives, and network adapters.
USB connectors have been increasingly replacing other types of charging cables for portable devices.
USB connector interfaces are classified into three types: the many various legacy Type-A (upstream) and Type-B (downstream) connectors found on hosts, hubs, and peripheral devices, and the modern Type-C (USB-C) connector, which replaces the many legacy connectors as the only applicable connector for USB4.
The Type-A and Type-B connectors came in Standard, Mini, and Micro sizes. The standard format was the largest and was mainly used for desktop and larger peripheral equipment. The Mini-USB connectors (Mini-A, Mini-B, Mini-AB) were introduced for mobile devices. Still, they were quickly replaced by the thinner Micro-USB connectors (Micro-A, Micro-B, Micro-AB). The Type-C connector, also known as USB-C, is not exclusive to USB, is the only current standard for USB, is required for USB4, and is required by other standards, including modern DisplayPort and Thunderbolt. It is reversible and can support various functionalities and protocols, including USB; some are mandatory, and many are optional, depending on the type of hardware: host, peripheral device, or hub.
USB specifications provide backward compatibility, usually resulting in decreased signaling rates, maximal power offered, and other capabilities. The USB 1.1 specification replaces USB 1.0. The USB 2.0 specification is backward-compatible with USB 1.0/1.1. The USB 3.2 specification replaces USB 3.1 (and USB 3.0) while including the USB 2.0 specification. USB4 "functionally replaces" USB 3.2 while retaining the USB 2.0 bus operating in parallel.
The USB 3.0 specification defined a new architecture and protocol named SuperSpeed (also SuperSpeed USB, marketed as SS). It added a new lane with a new signal coding scheme (8b/10b symbols, 5 Gbit/s; later also known as Gen 1), providing full-duplex data transfers and physically requiring five additional wires and pins. At the same time it preserved the USB 2.0 architecture and protocols, keeping the original four pins/wires for USB 2.0 backward compatibility, for a total of 9 wires (with 9 or 10 pins at connector interfaces; the ID pin is not wired).
The USB 3.1 specification introduced an Enhanced SuperSpeed System – while preserving the SuperSpeed architecture and protocol (SuperSpeed USB) – with an additional SuperSpeedPlus architecture and protocol (aka SuperSpeedPlus USB) adding a new coding schema (128b/132b symbols, 10 Gbit/s; also known as Gen 2); for some time marketed as SuperSpeed+ (SS+).
The USB 3.2 specification added a second lane to the Enhanced SuperSpeed System besides other enhancements so that the SuperSpeedPlus USB system part implements the Gen 1×2, Gen 2×1, and Gen 2×2 operation modes. However, the SuperSpeed USB part of the system still implements the one-lane Gen 1×1 operation mode. Therefore, two-lane operations, namely USB 3.2 Gen 1×2 (10 Gbit/s) and Gen 2×2 (20 Gbit/s), are only possible with Full-Featured USB-C. As of 2023, they are somewhat rarely implemented; Intel, however, started to include them in its 11th-generation SoC processor models, but Apple never provided them. On the other hand, USB 3.2 Gen 1(×1) (5 Gbit/s) and Gen 2(×1) (10 Gbit/s) have been quite common for some years.
Connector type quick reference
Each USB connection is made using two connectors: a receptacle and a plug.
Objectives
The Universal Serial Bus was developed to simplify and improve the interface between personal computers and peripheral devices, such as cell phones, computer accessories, and monitors, when compared with previously existing standard or ad hoc proprietary interfaces.
From the computer user's perspective, the USB interface improves ease of use in several ways:
The USB interface is self-configuring, eliminating the need for the user to adjust the device's settings for speed or data format, or configure interrupts, input/output addresses, or direct memory access channels.
USB connectors are standardized at the host, so any peripheral can use most available receptacles.
USB takes full advantage of the additional processing power that can be economically put into peripheral devices so that they can manage themselves. As such, USB devices often do not have user-adjustable interface settings.
The USB interface is hot-swappable (devices can be exchanged without shutting the host computer down).
Small devices can be powered directly from the USB interface, eliminating the need for additional power supply cables.
Because use of the USB logo is only permitted after compliance testing, the user can have confidence that a USB device will work as expected without extensive interaction with settings and configuration.
The USB interface defines protocols for recovery from common errors, improving reliability over previous interfaces.
Installing a device that relies on the USB standard requires minimal operator action. When a user plugs a device into a port on a running computer, it either entirely automatically configures using existing device drivers, or the system prompts the user to locate a driver, which it then installs and configures automatically.
The USB standard also provides multiple benefits for hardware manufacturers and software developers, specifically in the relative ease of implementation:
The USB standard eliminates the requirement to develop proprietary interfaces to new peripherals.
The wide range of transfer speeds available from a USB interface suits devices ranging from keyboards and mice up to streaming video interfaces.
A USB interface can be designed to provide the best available latency for time-critical functions or can be set up to do background transfers of bulk data with little impact on system resources.
The USB interface is generalized with no signal lines dedicated to only one function of one device.
Limitations
As with all standards, USB possesses multiple limitations to its design:
USB cables are limited in length, as the standard was intended for peripherals on the same tabletop, not between rooms or buildings. However, a USB port can be connected to a gateway that accesses distant devices.
USB data transfer rates are slower than those of other interconnects such as 100 Gigabit Ethernet.
USB has a strict tree network topology and master/slave protocol for addressing peripheral devices; slave devices cannot interact with one another except via the host, and two hosts cannot communicate over their USB ports directly. Some extension of this limitation is possible through USB On-The-Go, dual-role devices, and protocol bridges.
A host cannot broadcast signals to all peripherals at once; each must be addressed individually.
While converters exist between certain legacy interfaces and USB, they might not provide a full implementation of the legacy hardware. For example, a USB-to-parallel-port converter might work well with a printer, but not with a scanner that requires bidirectional use of the data pins.
For a product developer, using USB requires the implementation of a complex protocol and implies an "intelligent" controller in the peripheral device. Developers of USB devices intended for public sale generally must obtain a USB ID, which requires that they pay a fee to the USB Implementers Forum (USB-IF). Developers of products that use the USB specification must sign an agreement with the USB-IF. Use of the USB logos on the product requires annual fees and membership in the organization.
History
A group of seven companies began the development of USB in 1995: Compaq, DEC, IBM, Intel, Microsoft, NEC, and Nortel. The goal was to make it fundamentally easier to connect external devices to PCs by replacing the multitude of connectors at the back of PCs, addressing the usability issues of existing interfaces, and simplifying software configuration of all devices connected to USB, as well as permitting greater data transfer rates for external devices and plug and play features. Ajay Bhatt and his team worked on the standard at Intel; the first integrated circuits supporting USB were produced by Intel in 1995.
USB 1.x
Released in January 1996, USB 1.0 specified signaling rates of 1.5 Mbit/s (Low Bandwidth or Low Speed) and 12 Mbit/s (Full Speed). It did not allow for extension cables, due to timing and power limitations. Few USB devices made it to the market until USB 1.1 was released in August 1998. USB 1.1 was the earliest revision that was widely adopted and led to what Microsoft designated the "Legacy-free PC".
Neither USB 1.0 nor 1.1 specified a design for any connector smaller than the standard type A or type B. Though designs for a miniaturized type B connector appeared on many peripherals, conformity to the USB 1.x standard was hampered by treating peripherals that had miniature connectors as though they had a tethered connection (that is, no plug or receptacle at the peripheral end). There was no known miniature type A connector until USB 2.0 (revision 1.01) introduced one.
USB 2.0
USB 2.0 was released in April 2000, adding a higher maximum signaling rate of 480 Mbit/s (maximum theoretical data throughput 53 MByte/s) named High Speed or High Bandwidth, in addition to the USB 1.x Full Speed signaling rate of 12 Mbit/s (maximum theoretical data throughput 1.2 MByte/s).
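The 53 MByte/s figure can be reproduced by assuming high-speed bulk transfers, in which at most 13 data packets of 512 bytes fit into each 125 µs microframe; a short Python sketch of that derivation:

    packets_per_microframe = 13    # maximum bulk packets per 125 us microframe
    bytes_per_packet = 512
    microframes_per_second = 8000  # one second divided by 125 us
    print(packets_per_microframe * bytes_per_packet * microframes_per_second)
    # 53248000 bytes/s, i.e. about 53 MByte/s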
Modifications to the USB specification have been made via engineering change notices (ECNs). The most important of these ECNs are included into the USB 2.0 specification package available from USB.org:
Mini-A and Mini-B Connector
Micro-USB Cables and Connectors Specification 1.01
InterChip USB Supplement
On-The-Go Supplement 1.3: USB On-The-Go makes it possible for two USB devices to communicate with each other without requiring a separate USB host
Battery Charging Specification 1.1: added support for dedicated chargers and host chargers, and defined behavior for devices with dead batteries
Battery Charging Specification 1.2: with increased current of 1.5 A on charging ports for unconfigured devices, allowing high-speed communication while having a current up to 1.5 A
Link Power Management Addendum ECN, which adds a sleep power state
USB 3.x
The USB 3.0 specification was released on 12 November 2008, with its management transferring from USB 3.0 Promoter Group to the USB Implementers Forum (USB-IF) and announced on 17 November 2008 at the SuperSpeed USB Developers Conference.
USB 3.0 adds a new architecture and protocol named SuperSpeed, with associated backward-compatible plugs, receptacles, and cables. SuperSpeed plugs and receptacles are identified with a distinct logo and blue inserts in standard format receptacles.
The SuperSpeed architecture provides for an operation mode at a rate of 5.0 Gbit/s, in addition to the three existing operation modes. Its efficiency depends on a number of factors, including physical symbol encoding and link-level overhead. At a 5 Gbit/s signaling rate with 8b/10b encoding, each byte needs 10 bits to transmit, so the raw throughput is 500 MB/s. When flow control, packet framing, and protocol overhead are considered, about two thirds of the raw throughput, or roughly 330 MB/s, is realistically available to an application. SuperSpeed's architecture is full-duplex; all earlier implementations, USB 1.0–2.0, are half-duplex, arbitrated by the host.
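A short Python sketch of the arithmetic above:

    signaling_rate_bps = 5e9     # SuperSpeed line rate
    encoded_bits_per_byte = 10   # 8b/10b: each data byte occupies 10 line bits
    raw_bps = signaling_rate_bps / encoded_bits_per_byte
    print(raw_bps / 1e6)          # 500.0 MB/s raw throughput
    print(raw_bps * 2 / 3 / 1e6)  # ~333 MB/s realistically available to an application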
Low-power and high-power devices remain operational with this standard, but devices implementing SuperSpeed can draw an increased current of between 150 mA and 900 mA, in discrete steps of 150 mA.
USB 3.0 also introduced the USB Attached SCSI protocol (UASP), which provides generally faster transfer speeds than the BOT (Bulk-Only-Transfer) protocol.
USB 3.1, released in July 2013, has two variants. The first preserves USB 3.0's SuperSpeed architecture and protocol, and its operation mode is newly named USB 3.1 Gen 1. The second introduces a distinctly new SuperSpeedPlus architecture and protocol with a second operation mode named USB 3.1 Gen 2 (marketed as SuperSpeed+ USB). SuperSpeed+ doubles the maximum signaling rate to 10 Gbit/s (later marketed as SuperSpeed USB 10 Gbps by the USB 3.2 specification), while reducing line encoding overhead to just 3% by changing the encoding scheme to 128b/132b.
USB 3.2, released in September 2017, preserves existing USB 3.1 SuperSpeed and SuperSpeedPlus architectures and protocols and their respective operation modes, but introduces two additional SuperSpeedPlus operation modes (USB 3.2 Gen 1×2 and USB 3.2 Gen 2×2) with the new USB-C Fabric with signaling rates of 10 and 20 Gbit/s (raw data rates of 1212 and 2424 MB/s). The increase in bandwidth is a result of two-lane operation over existing wires that were originally intended for flip-flop capabilities of the USB-C connector.
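The quoted raw data rates follow from the 128b/132b encoding; a minimal Python check:

    def raw_mb_per_s(signal_gbps, payload_bits=128, total_bits=132):
        # line bits/s, scaled by encoding efficiency, divided by 8 bits per byte
        return signal_gbps * 1e9 * payload_bits / total_bits / 8 / 1e6

    print(round(raw_mb_per_s(10)))  # 1212 MB/s, e.g. Gen 2x1
    print(round(raw_mb_per_s(20)))  # 2424 MB/s, Gen 2x2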
Naming scheme
Starting with the USB 3.2 specification, USB-IF introduced a new naming scheme. To help companies with the branding of the different operation modes, USB-IF recommended branding the 5, 10, and 20 Gbit/s capabilities as SuperSpeed USB 5 Gbps, SuperSpeed USB 10 Gbps, and SuperSpeed USB 20 Gbps, respectively.
In 2023, these were replaced again, dropping "SuperSpeed" in favor of USB 5Gbps, USB 10Gbps, and USB 20Gbps, with new packaging and port logos.
USB4
The USB4 specification was released on 29 August 2019 by the USB Implementers Forum.
The USB4 2.0 specification was released on 1 September 2022 by the USB Implementers Forum.
USB4 is based on the Thunderbolt 3 protocol. It supports 40 Gbit/s throughput, is compatible with Thunderbolt 3, and is backward compatible with USB 3.2 and USB 2.0. The architecture defines a method for multiple end device types to dynamically share a single high-speed link in the way that best serves the transfer of data by type and application.
During CES 2020, USB-IF and Intel stated their intention to allow USB4 products that support all the optional functionality as Thunderbolt 4 products.
USB4 2.0 with 80 Gbit/s speeds was to be revealed in November 2022. Further technical details were to be released at two USB developer days scheduled for November 2022.
The USB4 specification enumerates the technologies that shall be supported by USB4, among them tunneling of the USB 3.2 and DisplayPort protocols and host-to-host communication.
September 2022 naming scheme
Because of the previous confusing naming schemes, USB-IF decided to change it once again. As of 2 September 2022, marketing names follow the syntax "USB xGbps", where x is the speed of transfer in Gbit/s. Overview of the updated names and logos can be seen in the adjacent table.
The operation modes USB 3.2 Gen 2×2 and USB4 Gen 2×2 (likewise USB 3.2 Gen 2×1 and USB4 Gen 2×1) are not interchangeable or compatible; all participating controllers must operate in the same mode.
Version history
Release versions
Power-related standards
System design
A USB system consists of a host with one or more downstream facing ports (DFP), and multiple peripherals, forming a tiered-star topology. Additional USB hubs may be included, allowing up to five tiers. A USB host may have multiple controllers, each with one or more ports. Up to 127 devices may be connected to a single host controller. USB devices are linked in series through hubs. The hub built into the host controller is called the root hub.
A USB device may consist of several logical sub-devices that are referred to as device functions. A composite device may provide several functions, for example, a webcam (video device function) with a built-in microphone (audio device function). An alternative to this is a compound device, in which the host assigns each logical device a distinct address and all logical devices connect to a built-in hub that connects to the physical USB cable.
USB device communication is based on pipes (logical channels). A pipe connects the host controller to a logical entity within a device, called an endpoint. Because pipes correspond to endpoints, the terms are sometimes used interchangeably. Each USB device can have up to 32 endpoints (16 in and 16 out), though it is rare to have so many. Endpoints are defined and numbered by the device during initialization (the period after physical connection called "enumeration") and so are relatively permanent, whereas pipes may be opened and closed.
There are two types of pipe: stream and message.
A message pipe is bi-directional and is used for control transfers. Message pipes are typically used for short, simple commands to the device, and for status responses from the device, used, for example, by the bus control pipe number 0.
A stream pipe is a uni-directional pipe connected to a uni-directional endpoint that transfers data using an isochronous, interrupt, or bulk transfer:
Isochronous transfers: at some guaranteed data rate (for fixed-bandwidth streaming data) but with possible data loss (e.g., realtime audio or video)
Interrupt transfers: for devices that need guaranteed quick responses (bounded latency), such as pointing devices, mice, and keyboards
Bulk transfers: large sporadic transfers using all remaining available bandwidth, but with no guarantees on bandwidth or latency (e.g., file transfers)
When a host starts a data transfer, it sends a TOKEN packet containing an endpoint specified with a tuple of (device_address, endpoint_number). If the transfer is from the host to the endpoint, the host sends an OUT packet (a specialization of a TOKEN packet) with the desired device address and endpoint number. If the data transfer is from the device to the host, the host sends an IN packet instead. If the destination endpoint is a uni-directional endpoint whose manufacturer's designated direction does not match the TOKEN packet (e.g. the manufacturer's designated direction is IN while the TOKEN packet is an OUT packet), the TOKEN packet is ignored. Otherwise, it is accepted and the data transaction can start. A bi-directional endpoint, on the other hand, accepts both IN and OUT packets.
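A toy model of this direction check (illustrative Python only; real host controllers implement the protocol in hardware):

    # A TOKEN packet targets the tuple (device_address, endpoint_number).
    def endpoint_accepts(token_direction, endpoint_direction):
        # endpoint_direction is "IN", "OUT", or "BIDIRECTIONAL" (e.g. control endpoints)
        return endpoint_direction == "BIDIRECTIONAL" or token_direction == endpoint_direction

    print(endpoint_accepts("OUT", "IN"))            # False: the TOKEN packet is ignored
    print(endpoint_accepts("IN", "BIDIRECTIONAL"))  # True: the data transaction can start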
Endpoints are grouped into interfaces and each interface is associated with a single device function. An exception to this is endpoint zero, which is used for device configuration and is not associated with any interface. A single device function composed of independently controlled interfaces is called a composite device. A composite device only has a single device address because the host only assigns a device address to a function.
When a USB device is first connected to a USB host, the USB device enumeration process is started. The enumeration starts by sending a reset signal to the USB device. The signaling rate of the USB device is determined during the reset signaling. After reset, the USB device's information is read by the host and the device is assigned a unique 7-bit address. If the device is supported by the host, the device drivers needed for communicating with the device are loaded and the device is set to a configured state. If the USB host is restarted, the enumeration process is repeated for all connected devices.
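A minimal sketch of the address-assignment step (hypothetical Python, not real driver code); since address 0 is reserved for unconfigured devices, the 7-bit address space leaves 127 usable addresses:

    class ToyHostController:
        def __init__(self):
            # 7-bit addresses: 0 is reserved, so 2**7 - 1 = 127 devices per controller
            self.free_addresses = list(range(1, 128))

        def enumerate_device(self):
            # after reset and descriptor readout, assign the lowest free address
            return self.free_addresses.pop(0)

    host = ToyHostController()
    print(host.enumerate_device())  # 1
    print(host.enumerate_device())  # 2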
The host controller directs traffic flow to devices, so no USB device can transfer any data on the bus without an explicit request from the host controller. In USB 2.0, the host controller polls the bus for traffic, usually in a round-robin fashion. The throughput of each USB port is determined by the slower speed of either the USB port or the USB device connected to the port.
High-speed USB 2.0 hubs contain devices called transaction translators that convert between high-speed USB 2.0 buses and full and low speed buses. There may be one translator per hub or per port.
Because there are two separate controllers in each USB 3.0 host, USB 3.0 devices transmit and receive at USB 3.0 signaling rates regardless of USB 2.0 or earlier devices connected to that host. Operating signaling rates for earlier devices are set in the legacy manner.
Device classes
The functionality of a USB device is defined by a class code sent to a USB host. This allows the host to load software modules for the device and to support new devices from different manufacturers.
Device classes include, among others, mass storage, audio, video, human interface devices (HID), and communications (CDC).
USB mass storage / USB drive
The USB mass storage device class (MSC or UMS) standardizes connections to storage devices. At first intended for magnetic and optical drives, it has been extended to support flash drives and SD card readers. The ability to boot a write-locked SD card with a USB adapter is particularly advantageous for maintaining the integrity and non-corruptible, pristine state of the booting medium.
Though most personal computers since early 2005 can boot from USB mass storage devices, USB is not intended as a primary bus for a computer's internal storage. However, USB has the advantage of allowing hot-swapping, making it useful for mobile peripherals, including drives of various kinds.
Several manufacturers offer external portable USB hard disk drives, or empty enclosures for disk drives. These offer performance comparable to internal drives, limited by the number and types of attached USB devices, and by the upper limit of the USB interface. Other competing standards for external drive connectivity include eSATA, ExpressCard, FireWire (IEEE 1394), and most recently Thunderbolt.
Another use for USB mass storage devices is the portable execution of software applications (such as web browsers and VoIP clients) with no need to install them on the host computer.
Media Transfer Protocol
Media Transfer Protocol (MTP) was designed by Microsoft to give higher-level access to a device's filesystem than USB mass storage, at the level of files rather than disk blocks. It also has optional DRM features. MTP was designed for use with portable media players, but it has since been adopted as the primary storage access protocol of the Android operating system from the version 4.1 Jelly Bean as well as Windows Phone 8 (Windows Phone 7 devices had used the Zune protocol—an evolution of MTP). The primary reason for this is that MTP does not require exclusive access to the storage device the way UMS does, alleviating potential problems should an Android program request the storage while it is attached to a computer. The main drawback is that MTP is not as well supported outside of Windows operating systems.
Human interface devices
A USB mouse or keyboard can usually be used with older computers that have PS/2 ports with the aid of a small USB-to-PS/2 adapter. For mice and keyboards with dual-protocol support, a passive adapter that contains no logic circuitry may be used: the USB hardware in the keyboard or mouse is designed to detect whether it is connected to a USB or PS/2 port, and communicate using the appropriate protocol. Active converters that connect USB keyboards and mice (usually one of each) to PS/2 ports also exist.
Device Firmware Upgrade mechanism
Device Firmware Upgrade (DFU) is a generic mechanism for upgrading the firmware of USB devices with improved versions provided by their manufacturers, offering (for example) a way to deploy firmware bug fixes. During the firmware upgrade operation, USB devices change their operating mode, effectively becoming a PROM programmer. Any class of USB device can implement this capability by following the official DFU specifications. Doing so allows use of DFU-compatible host tools to update the device.
DFU is sometimes used as a flash memory programming protocol in microcontrollers with built-in USB bootloader functionality.
Audio streaming
The USB Device Working Group has laid out specifications for audio streaming, and specific standards have been developed and implemented for audio class uses, such as microphones, speakers, headsets, telephones, musical instruments, etc. The working group has published three versions of audio device specifications: USB Audio 1.0, 2.0, and 3.0, referred to as "UAC" or "ADC".
UAC 3.0 primarily introduces improvements for portable devices, such as reduced power usage by bursting the data and staying in low power mode more often, and power domains for different components of the device, allowing them to be shut down when not in use.
UAC 2.0 introduced support for High Speed USB (in addition to Full Speed), allowing greater bandwidth for multi-channel interfaces, higher sample rates, lower inherent latency, and 8× improvement in timing resolution in synchronous and adaptive modes. UAC2 also introduced the concept of clock domains, which provides information to the host about which input and output terminals derive their clocks from the same source, as well as improved support for audio encodings like DSD, audio effects, channel clustering, user controls, and device descriptions.
UAC 1.0 devices are still common, however, due to their cross-platform driverless compatibility, and also partly due to Microsoft's failure to implement UAC 2.0 for over a decade after its publication, having finally added support to Windows 10 through the Creators Update on 20 March 2017. UAC 2.0 is also supported by macOS, iOS, and Linux; Android, however, implements only a subset of the UAC 1.0 specification.
USB provides three isochronous (fixed-bandwidth) synchronization types, all of which are used by audio devices:
Asynchronous — The ADC or DAC are not synced to the host computer's clock at all, operating off a free-running clock local to the device.
Synchronous — The device's clock is synced to the USB start-of-frame (SOF) or Bus Interval signals. For instance, this can require syncing an 11.2896 MHz clock to a 1 kHz SOF signal, a large frequency multiplication (a worked ratio is sketched after this list).
Adaptive — The device's clock is synced to the amount of data sent per frame by the host
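For the synchronous example above, assuming the common 256 × fs audio master-clock convention (an illustrative assumption, not mandated by the specification), the required multiplication factor works out as follows:

    sof_hz = 1000                    # USB full-speed start-of-frame rate
    master_clock_hz = 256 * 44100    # 11.2896 MHz audio master clock
    print(master_clock_hz / sof_hz)  # 11289.6, the PLL multiplication factor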
While the USB spec originally described asynchronous mode being used in "low cost speakers" and adaptive mode in "high-end digital speakers", the opposite perception exists in the hi-fi world, where asynchronous mode is advertised as a feature, and adaptive/synchronous modes have a bad reputation. In reality, all types can be high-quality or low-quality, depending on the quality of their engineering and the application. Asynchronous has the benefit of being untied from the computer's clock, but the disadvantage of requiring sample rate conversion when combining multiple sources.
Connectors
The connectors the USB committee specifies support a number of USB's underlying goals, and reflect lessons learned from the many connectors the computer industry has used. The female connector mounted on the host or device is called the receptacle, and the male connector attached to the cable is called the plug. The official USB specification documents also periodically define the term male to represent the plug, and female to represent the receptacle.
The design is intended to make it difficult to insert a USB plug into its receptacle incorrectly. The USB specification requires that the cable plug and receptacle be marked so the user can recognize the proper orientation. The USB-C plug, however, is reversible. USB cables and small USB devices are held in place by the gripping force from the receptacle, with no screws, clips, or thumb-turns as some other connectors use.
The different A and B plugs prevent accidentally connecting two power sources. However, some of this directed topology is lost with the advent of multi-purpose USB connections (such as USB On-The-Go in smartphones, and USB-powered Wi-Fi routers), which require A-to-A, B-to-B, and sometimes Y/splitter cables.
USB connector types multiplied as the specification progressed. The original USB specification detailed standard-A and standard-B plugs and receptacles. The connectors were different so that users could not connect one computer receptacle to another. The data pins in the standard plugs are recessed compared to the power pins, so that the device can power up before establishing a data connection. Some devices operate in different modes depending on whether the data connection is made. Charging docks supply power, and do not include a host device or data pins, allowing any capable USB device to charge or operate from a standard USB cable. Charging cables provide power connections but not data. In a charge-only cable, the data wires are shorted at the device end; otherwise, the device may reject the charger as unsuitable.
Cabling
The USB 1.1 standard specifies that a standard cable can have a maximum length of 5 m with devices operating at full speed (12 Mbit/s), and a maximum length of 3 m with devices operating at low speed (1.5 Mbit/s).
USB 2.0 provides for a maximum cable length of 5 m for devices running at high speed (480 Mbit/s).
The USB 3.0 standard does not directly specify a maximum cable length, requiring only that all cables meet an electrical specification: for copper cabling with AWG 26 wires the maximum practical length is about 3 m.
USB bridge cables
USB bridge cables, or data transfer cables, can be found on the market, offering direct PC-to-PC connections. A bridge cable is a special cable with a chip and active electronics in the middle of the cable. The chip in the middle acts as a peripheral to both computers and allows for peer-to-peer communication between them. USB bridge cables are used to transfer files between two computers via their USB ports.
Popularized by Microsoft as Windows Easy Transfer, the Microsoft utility used a special USB bridge cable to transfer personal files and settings from a computer running an earlier version of Windows to a computer running a newer version. In the context of the use of Windows Easy Transfer software, the bridge cable can sometimes be referenced as Easy Transfer cable.
Many USB bridge / data transfer cables are still USB 2.0, but there are also a number of USB 3.0 transfer cables. Despite USB 3.0 being 10 times faster than USB 2.0, USB 3.0 transfer cables are only 2 to 3 times faster given their design.
The USB 3.0 specification introduced an A-to-A cross-over cable without power for connecting two PCs. These are not meant for data transfer but are aimed at diagnostic uses.
Dual-role USB connections
USB bridge cables have become less important with the USB dual-role-device capabilities introduced with the USB 3.1 specification. Under the most recent specifications, USB supports most scenarios connecting systems directly with a Type-C cable. For the capability to work, however, connected systems must support role-switching. Dual-role capability requires that there be two controllers within the system, as well as a role controller. While this can be expected in a mobile platform such as a tablet or a phone, desktop PCs and laptops often do not support dual roles.
Power
Upstream USB connectors supply power at a nominal 5 V DC via the V_BUS pin to downstream USB devices.
Low-power and high-power devices
This section describes the power distribution model of USB that existed before Power-Delivery (USB-PD). On devices that do not use PD, USB provides up to 4.5 W through Type-A and Type-B connectors, and up to 15 W through USB-C. All pre-PD USB power is provided at 5 V.
For a host providing power to devices, USB has the concept of the unit load. Any device may draw one unit load of power, and devices may request more power in discrete increments of one unit load. The host is not required to provide the requested power, and a device may not draw more power than it has negotiated.
Devices that draw no more than one unit load are said to be low-power devices. All devices must act as low-power devices when starting out as unconfigured. For USB devices up to USB 2.0, a unit load is 100 mA (500 mW), while USB 3.0 defines a unit load as 150 mA (750 mW). Full-featured USB-C can support low-power devices with a unit load of 250 mA (1250 mW).
Devices that draw more than one unit load are high-power devices (such as typical 2.5-inch hard disk drives). USB up to 2.0 allows a host or hub to provide up to 2.5 W to each device, in five discrete steps of 100 mA, while SuperSpeed (USB 3.x) allows a host or a hub to provide up to 4.5 W in six steps of 150 mA.
USB-C allows for dual-lane operation of USB 3.x with larger unit load (250 mA; up to 7.5 W). USB-C also allows for Type-C Current as a replacement for USB BC, signaling power availability in a simple way, without needing any data connection.
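These pre-PD budgets reduce to simple arithmetic at the nominal 5 V bus voltage; the unit counts in the sketch below are inferred from the wattages quoted in this section:

    BUS_VOLTAGE = 5.0  # volts, nominal pre-PD USB supply
    budgets = {
        "USB 2.0":             (0.100, 5),  # 100 mA unit load, up to 5 units
        "USB 3.x":             (0.150, 6),  # 150 mA unit load, up to 6 units
        "full-featured USB-C": (0.250, 6),  # 250 mA unit load, up to 6 units
    }
    for name, (unit_amps, units) in budgets.items():
        print(name, BUS_VOLTAGE * unit_amps * units, "W")  # 2.5 W, 4.5 W, 7.5 W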
To signal Battery Charging mode, a dedicated charging port (DCP) places a resistance not exceeding 200 Ω across the D+ and D− terminals; shorted or near-shorted data lines with less than 200 Ω of resistance across them signify a dedicated charging port with indefinite charging rates.
In addition to standard USB, there is a proprietary high-powered system known as PoweredUSB, developed in the 1990s, and mainly used in point-of-sale terminals such as cash registers.
Signaling
USB signals are transmitted using differential signaling on twisted-pair data wires with characteristic impedance. USB 2.0 and earlier specifications define a single pair in half-duplex (HDx). USB 3.0 and later specifications define one dedicated pair for USB 2.0 compatibility and two or four pairs for data transfer: two data wire pairs realising full-duplex (FDx) for single lane (×1) variants require at least SuperSpeed (SS) connectors; four pairs realising full-duplex for two lane (×2) variants require USB-C connectors.
USB4 Gen 4 requires the use of all four pairs but allows for an asymmetric pair configuration: one data wire pair is used for upstream data and the other three for downstream data, or vice versa. USB4 Gen 4 uses pulse-amplitude modulation with three levels (PAM-3), providing one trit of information per baud; the transmission frequency of 12.8 GHz translates to a transmission rate of 25.6 GBd, and the 11-bit-to-7-trit translation provides a theoretical maximum transmission speed of just over 40.2 Gbit/s.
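The Gen 4 arithmetic in the preceding paragraph can be checked directly:

    baud_per_direction = 25.6e9  # PAM-3 symbols per second
    bits_per_trit = 11 / 7       # 11-bit-to-7-trit translation
    print(baud_per_direction * bits_per_trit / 1e9)  # ~40.23 Gbit/s theoretical maximum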
Low-speed (LS) and Full-speed (FS) modes use a single data wire pair, labeled D+ and D−, in half-duplex. Transmitted signal levels are 0.0–0.3 V for logical low and 2.8–3.6 V for logical high level. The signal lines are not terminated.
High-speed (HS) uses the same wire pair, but with different electrical conventions: lower signal voltages of −10 to 10 mV for logical low and 360 to 440 mV for logical high level, and termination of 45 Ω to ground or 90 Ω differential to match the data cable impedance.
SuperSpeed (SS) adds two additional pairs of shielded twisted data wires (and new, mostly compatible expanded connectors) besides another grounding wire. These are dedicated to full-duplex SuperSpeed operation. The SuperSpeed link operates independently from the USB 2.0 channel and takes precedence on connection. Link configuration is performed using LFPS (Low Frequency Periodic Signaling, approximately at 20 MHz frequency), and electrical features include voltage de-emphasis at the transmitter side, and adaptive linear equalization on the receiver side to combat electrical losses in transmission lines, and thus the link introduces the concept of link training.
SuperSpeed+ (SS+) uses a new coding scheme with an increased signaling rate (Gen 2×1 mode) and/or the additional lane of USB-C (Gen 1×2 and Gen 2×2 modes).
A USB connection is always between an A end, either a host or a downstream port of a hub, and a B end, either a peripheral device or the upstream port of a hub. Historically this was made clear by the fact that hosts had only Type-A and peripheral devices had only Type-B ports, and every compatible cable had one Type-A plug and one Type-B plug. USB-C (Type-C) is a single connector that replaces all legacy Type-A and Type-B connectors, so when both sides are equipped with USB Type-C ports, they negotiate which is the host and which is the device.
Protocol layer
During USB communication, data is transmitted as packets. Initially, all packets are sent from the host via the root hub, and possibly more hubs, to devices. Some of those packets direct a device to send some packets in reply.
Transactions
The basic transactions of USB are:
OUT transaction
IN transaction
SETUP transaction
Control transfer exchange
Related standards
Media Agnostic USB
The USB Implementers Forum introduced the Media Agnostic USB (MA-USB) v.1.0 wireless communication standard based on the USB protocol on 29 July 2015. Wireless USB is a cable-replacement technology, and uses ultra-wideband wireless technology for data rates of up to 480 Mbit/s.
The USB-IF used the WiGig Serial Extension v1.2 specification as the initial foundation for the MA-USB specification, which is compliant with SuperSpeed USB (3.0 and 3.1) and Hi-Speed USB (USB 2.0). Devices that use MA-USB will be branded as "Powered by MA-USB", provided the product qualifies under its certification program.
InterChip USB
InterChip USB is a chip-to-chip variant that eliminates the conventional transceivers found in normal USB. Its HSIC (High-Speed Inter-Chip) physical layer uses about 50% less power and 75% less board area compared to USB 2.0. It is an alternative standard to SPI and I2C.
USB-C
USB-C (officially USB Type-C) is a standard that defines a new connector, and several new connection features. Among them it supports Alternate Mode, which allows transporting other protocols via the USB-C connector and cable. This is commonly used to support the DisplayPort or HDMI protocols, which allows connecting a display, such as a computer monitor or television set, via USB-C.
Connectors other than USB-C are not capable of two-lane operation (Gen 1×2 and Gen 2×2) in USB 3.2, but can be used for one-lane operation (Gen 1×1 and Gen 2×1).
DisplayLink
DisplayLink is a technology which allows multiple displays to be connected to a computer via USB. It was introduced around 2006, and before the advent of Alternate Mode over USB-C it was the only way to connect displays via USB. It is a proprietary technology, not standardized by the USB Implementers Forum, and it typically requires a separate device driver on the computer.
Comparisons with other connection methods
FireWire (IEEE 1394)
At first, USB was considered a complement to FireWire (IEEE 1394) technology, which was designed as a high-bandwidth serial bus that efficiently interconnects peripherals such as disk drives, audio interfaces, and video equipment. In the initial design, USB operated at a far lower data rate and used less sophisticated hardware. It was suitable for small peripherals such as keyboards and pointing devices.
The most significant technical differences between FireWire and USB include:
USB networks use a tiered-star topology, while IEEE 1394 networks use a tree topology.
USB 1.0, 1.1, and 2.0 use a "speak-when-spoken-to" protocol, meaning that each peripheral communicates with the host when the host specifically requests communication. USB 3.0 allows for device-initiated communications towards the host. A FireWire device can communicate with any other node at any time, subject to network conditions.
A USB network relies on a single host at the top of the tree to control the network. All communications are between the host and one peripheral. In a FireWire network, any capable node can control the network.
USB runs with a 5 V power line, while FireWire supplies 12 V and theoretically can supply up to 30 V.
Standard USB hub ports can provide the typical 500 mA/2.5 W of current, and non-hub ports only 100 mA. USB 3.0 and USB On-The-Go supply 1.8 A/9.0 W for dedicated battery charging, 1.5 A/7.5 W at full bandwidth, or 900 mA/4.5 W at high bandwidth, while FireWire can in theory supply up to 60 watts of power, although 10 to 20 watts is more typical; the USB figures are worked out below.
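These wattages are simply \(P = VI\) evaluated at USB's 5 V rail:

\[
5\ \text{V} \times 0.1\ \text{A} = 0.5\ \text{W}, \qquad
5\ \text{V} \times 0.5\ \text{A} = 2.5\ \text{W}, \qquad
5\ \text{V} \times 0.9\ \text{A} = 4.5\ \text{W}, \qquad
5\ \text{V} \times 1.8\ \text{A} = 9.0\ \text{W}.
\]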
These and other differences reflect the differing design goals of the two buses: USB was designed for simplicity and low cost, while FireWire was designed for high performance, particularly in time-sensitive applications such as audio and video. Although similar in theoretical maximum signaling rate, FireWire 400 is faster than USB 2.0 high-bandwidth in real use, especially in high-bandwidth use such as external hard drives. The newer FireWire 800 standard is twice as fast as FireWire 400 and faster than USB 2.0 high-bandwidth both theoretically and practically. However, FireWire's speed advantages rely on low-level techniques such as direct memory access (DMA), which in turn have created opportunities for security exploits such as the DMA attack.
The chipset and drivers used to implement USB and FireWire have a crucial impact on how much of the bandwidth prescribed by the specification is achieved in the real world, along with compatibility with peripherals.
Ethernet
The IEEE 802.3af, 802.3at, and 802.3bt Power over Ethernet (PoE) standards specify more elaborate power negotiation schemes than powered USB. They operate at 48 V DC and can supply more power (up to 12.95 W for 802.3af, 25.5 W for 802.3at, a.k.a. PoE+, 71 W for 802.3bt, a.k.a. 4PPoE) over a cable up to 100 meters compared to USB 2.0, which provides 2.5 W with a maximum cable length of 5 meters. This has made PoE popular for Voice over IP telephones, security cameras, wireless access points, and other networked devices within buildings. However, USB is cheaper than PoE provided that the distance is short and power demand is low.
Ethernet standards require electrical isolation between the networked device (computer, phone, etc.) and the network cable up to 1500 V AC or 2250 V DC for 60 seconds. USB has no such requirement as it was designed for peripherals closely associated with a host computer, and in fact it connects the peripheral and host grounds. This gives Ethernet a significant safety advantage over USB with peripherals such as cable and DSL modems connected to external wiring that can assume hazardous voltages under certain fault conditions.
MIDI
The USB Device Class Definition for MIDI Devices transmits Music Instrument Digital Interface (MIDI) music data over USB. The MIDI capability is extended to allow up to sixteen simultaneous virtual MIDI cables, each of which can carry the usual MIDI sixteen channels and clocks.
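In other words, a single USB-MIDI interface can address

\[
16\ \text{virtual cables} \times 16\ \text{channels per cable} = 256\ \text{MIDI channels}.
\]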
USB is competitive for low-cost and physically adjacent devices. However, Power over Ethernet and the MIDI plug standard have an advantage in high-end devices that may have long cables. USB can cause ground loop problems between equipment, because it connects the ground references of both transceivers. By contrast, the MIDI plug standard and Ethernet have built-in galvanic isolation.
eSATA/eSATAp
The eSATA connector is a more robust SATA connector, intended for connection to external hard drives and SSDs. eSATA's transfer rate (up to 6 Gbit/s) is similar to that of USB 3.0 (up to 5 Gbit/s) and USB 3.1 (up to 10 Gbit/s). A device connected by eSATA appears as an ordinary SATA device, giving both full performance and full compatibility associated with internal drives.
eSATA does not supply power to external devices, an increasing disadvantage compared to USB. Even though USB 3.0's 4.5 W is sometimes insufficient to power external hard drives, technology is advancing and external drives gradually need less power, making bus-powered USB viable in more cases. eSATAp (power over eSATA, a.k.a. eSATA/USB) is a connector introduced in 2009 that supplies power to attached devices using a new, backward-compatible connector. On a notebook, eSATAp usually supplies only 5 V, to power a 2.5-inch HDD/SSD; on a desktop workstation it can additionally supply 12 V to power larger devices, including 3.5-inch HDDs/SSDs and 5.25-inch optical drives.
eSATAp support can be added to a desktop machine in the form of a bracket connecting the motherboard SATA, power, and USB resources.
eSATA, like USB, supports hot plugging, although this might be limited by OS drivers and device firmware.
Thunderbolt
Thunderbolt combines PCI Express and DisplayPort into a new serial data interface. Original Thunderbolt implementations have two channels, each with a transfer speed of 10 Gbit/s, resulting in an aggregate unidirectional bandwidth of 20 Gbit/s.
Thunderbolt 2 uses link aggregation to combine the two 10 Gbit/s channels into one bidirectional 20 Gbit/s channel.
Thunderbolt 3 and Thunderbolt 4 use USB-C. Thunderbolt 3 has two physical 20 Gbit/s bi-directional channels, aggregated to appear as a single logical 40 Gbit/s bi-directional channel. Thunderbolt 3 controllers can incorporate a USB 3.1 Gen 2 controller to provide compatibility with USB devices. They are also capable of providing DisplayPort Alternate Mode as well as DisplayPort over USB4 Fabric, making the function of a Thunderbolt 3 port a superset of that of a USB 3.1 Gen 2 port.
DisplayPort Alternate Mode 2.0: USB4 (requiring USB-C) requires that hubs support DisplayPort 2.0 over a USB-C Alternate Mode. DisplayPort 2.0 can support 8K resolution at 60 Hz with HDR10 color. DisplayPort 2.0 can use up to 80 Gbit/s, which is double the amount available to USB data, because it sends all the data in one direction (to the monitor) and can thus use all eight data wires at once.
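Assuming the eight data wires form four differential pairs carrying 20 Gbit/s each (an assumption consistent with the 40 and 80 Gbit/s figures quoted here, though not stated explicitly in this section), the doubling works out as:

\[
4 \times 20\ \text{Gbit/s} = 80\ \text{Gbit/s (all pairs toward the monitor)}
\quad\text{vs.}\quad
2 \times 20\ \text{Gbit/s} = 40\ \text{Gbit/s per direction for USB data}.
\]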
After the specification was made royalty-free and custodianship of the Thunderbolt protocol was transferred from Intel to the USB Implementers Forum, Thunderbolt 3 has been effectively implemented in the USB4 specification – with compatibility with Thunderbolt 3 optional but encouraged for USB4 products.
Interoperability
Various protocol converters are available that convert USB data signals to and from other communications standards.
Security threats
Due to the prevalence of the USB standard, many exploits target it. One of the most prominent today is the USB killer, a device that damages USB hardware by sending high-voltage pulses across the data lines.
In versions of Microsoft Windows before Windows XP, Windows would automatically run a script (if present) on certain devices via AutoRun, including USB mass storage devices, which may contain malicious software.
| Technology | User interface | null |
32101 | https://en.wikipedia.org/wiki/Umbriel | Umbriel | Umbriel () is the third-largest moon of Uranus. It was discovered on October 24, 1851, by William Lassell at the same time as neighboring moon Ariel. It was named after a character in Alexander Pope's 1712 poem The Rape of the Lock. Umbriel consists mainly of ice with a substantial fraction of rock, and may be differentiated into a rocky core and an icy mantle. The surface is the darkest among Uranian moons, and appears to have been shaped primarily by impacts, but the presence of canyons suggests early internal processes, and the moon may have undergone an early endogenically driven resurfacing event that obliterated its older surface.
Covered by numerous impact craters reaching 210 km in diameter, Umbriel is the second-most heavily cratered satellite of Uranus after Oberon. The most prominent surface feature is a ring of bright material on the floor of Wunda crater. This moon, like all regular moons of Uranus, probably formed from an accretion disk that surrounded the planet just after its formation. Umbriel has been studied up close only once, by the spacecraft Voyager 2 in January 1986. It took several images of Umbriel, which allowed mapping of about 40% of the moon's surface.
Discovery and name
Umbriel, along with another Uranian satellite, Ariel, was discovered by William Lassell on October 24, 1851. Although William Herschel, the discoverer of Titania and Oberon, claimed at the end of the 18th century that he had observed four additional moons of Uranus, his observations were not confirmed and those four objects are now thought to be spurious.
All of Uranus's moons are named after characters created by William Shakespeare or Alexander Pope. The names of all four satellites of Uranus then known were suggested by John Herschel (son of William) in 1852 at the request of Lassell, though it is uncertain if Herschel devised the names, or if Lassell did so and then sought Herschel's permission. Umbriel is the "dusky melancholy sprite" in Alexander Pope's The Rape of the Lock, and the name suggests the Latin umbra, meaning "shadow". The moon is also designated Uranus II.
Orbit
Umbriel orbits Uranus at a distance of about 266,000 km, the third farthest from the planet among its five major moons. Umbriel's orbit has a small eccentricity and is inclined very little relative to the equator of Uranus. Its orbital period is around 4.1 Earth days, coincident with its rotational period, making it a synchronous or tidally locked satellite, with one face always pointing toward its parent planet. Umbriel's orbit lies completely inside the Uranian magnetosphere. This is important, because the trailing hemispheres of airless satellites orbiting inside a magnetosphere (like Umbriel) are struck by magnetospheric plasma, which co-rotates with the planet. This bombardment may lead to the darkening of the trailing hemispheres, which is observed for all Uranian moons except Oberon (see below). Umbriel also serves as a sink for magnetospheric charged particles, which creates a pronounced dip in the energetic-particle count near the moon's orbit, as observed by Voyager 2 in 1986.
Because Uranus orbits the Sun almost on its side, and its moons orbit in the planet's equatorial plane, Umbriel and the other moons are subject to an extreme seasonal cycle. Both northern and southern poles spend 42 years in complete darkness, and another 42 years in continuous sunlight, with the Sun rising close to the zenith over one of the poles at each solstice. The Voyager 2 flyby coincided with the southern hemisphere's 1986 summer solstice, when nearly the entire northern hemisphere was unilluminated. Once every 42 years, when Uranus has an equinox and its equatorial plane intersects the Earth, mutual occultations of Uranus's moons become possible. In 2007–2008, several such events were observed including two occultations of Titania by Umbriel on August 15 and December 8, 2007, as well as of Ariel by Umbriel on August 19, 2007.
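The 42-year figure is half of Uranus's roughly 84-year orbital period:

\[
\frac{84\ \text{yr}}{2} = 42\ \text{yr}
\]

of continuous sunlight, then continuous darkness, at each pole.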
Currently, Umbriel is not involved in any orbital resonance with other Uranian satellites. Early in its history however, it may have been in a 1:3 resonance with Miranda. This would have increased Miranda's orbital eccentricity, contributing to the internal heating and geological activity of that moon, while Umbriel's orbit would have been less affected. Due to Uranus's lower oblateness and smaller size relative to its satellites, its moons can escape more easily from a mean motion resonance than those of Jupiter or Saturn. After Miranda escaped from this resonance (through a mechanism that probably resulted in its anomalously high orbital inclination), its eccentricity would have been damped, turning off the heat source.
Composition and internal structure
Umbriel is the third-largest and third-most massive of the Uranian moons, and the 13th-largest and 13th-most massive moon in the Solar System. The moon's density is 1.54 g/cm3, which indicates that it mainly consists of water ice, with a dense non-ice component constituting around 40% of its mass. The latter could be made of rock and carbonaceous material including heavy organic compounds known as tholins. The presence of water ice is supported by infrared spectroscopic observations, which have revealed crystalline water ice on the surface of the moon. Water ice absorption bands are stronger on Umbriel's leading hemisphere than on the trailing hemisphere. The cause of this asymmetry is not known, but it may be related to the bombardment by charged particles from the magnetosphere of Uranus, which is stronger on the trailing hemisphere (due to the plasma's co-rotation). The energetic particles tend to sputter water ice, decompose methane trapped in ice as clathrate hydrate and darken other organics, leaving a dark, carbon-rich residue behind.
Except for water, the only other compound identified on the surface of Umbriel by the infrared spectroscopy is carbon dioxide, which is concentrated mainly on the trailing hemisphere. The origin of the carbon dioxide is not completely clear. It might be produced locally from carbonates or organic materials under the influence of the energetic charged particles coming from the magnetosphere of Uranus or the solar ultraviolet radiation. This hypothesis would explain the asymmetry in its distribution, as the trailing hemisphere is subject to a more intense magnetospheric influence than the leading hemisphere. Another possible source is the outgassing of the primordial CO2 trapped by water ice in Umbriel's interior. The escape of CO2 from the interior may be a result of past geological activity on this moon.
Umbriel may be differentiated into a rocky core surrounded by an icy mantle. If this is the case, the radius of the core (317 km) is about 54% of the radius of the moon, and its mass is around 40% of the moon's mass—the parameters are dictated by the moon's composition. The pressure in the center of Umbriel is about 0.24 GPa (2.4 kbar). The current state of the icy mantle is unclear, although the existence of a subsurface ocean is considered unlikely.
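As a rough consistency check, taking Umbriel's mean radius to be about 585 km (a figure not quoted in this section):

\[
\frac{317\ \text{km}}{585\ \text{km}} \approx 0.54,
\]

in agreement with the quoted 54%.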
Surface features
Umbriel's surface is the darkest of the Uranian moons, and reflects less than half as much light as Ariel, a sister satellite of similar size. Umbriel has a very low Bond albedo of only about 10%, as compared to 23% for Ariel. The reflectivity of the moon's surface decreases from 26% at a phase angle of 0° (geometric albedo) to 19% at an angle of about 1°. This phenomenon is called opposition surge. The surface of Umbriel is slightly blue in color, while fresh bright impact deposits (in Wunda crater, for instance) are even bluer. There may be an asymmetry between the leading and trailing hemispheres; the former appears to be redder than the latter. The reddening of the surfaces probably results from space weathering from bombardment by charged particles and micrometeorites over the age of the Solar System. However, the color asymmetry of Umbriel is likely caused by accretion of a reddish material coming from outer parts of the Uranian system, possibly from irregular satellites, which would occur predominantly on the leading hemisphere. The surface of Umbriel is relatively homogeneous—it does not demonstrate strong variation in either albedo or color.
Scientists have so far recognized only one class of geological feature on Umbriel—craters. The surface of Umbriel has far more and larger craters than do Ariel and Titania. It shows the least geological activity. In fact, among the Uranian moons only Oberon has more impact craters than Umbriel. The observed crater diameters range from a few kilometers at the low end to 210 kilometers for the largest known crater, Wokolo. All recognized craters on Umbriel have central peaks, but no crater has rays.
Near Umbriel's equator lies the most prominent surface feature: Wunda crater, which has a diameter of about 131 km. Wunda has a large ring of bright material on its floor, which may be an impact deposit or a deposit of pure carbon dioxide ice, formed when radiolytically produced carbon dioxide migrated from all over the surface of Umbriel and became trapped in relatively cold Wunda. Nearby, seen along the terminator, are the craters Vuver and Skynd, which lack bright rims but possess bright central peaks. Study of Umbriel's limb profiles revealed a possible very large impact feature, with a diameter of about 400 km and a depth of approximately 5 km.
Much like the surfaces of other Uranian moons, Umbriel's surface is cut by a system of canyons trending northeast–southwest. They are not officially recognized, due to the poor imaging resolution and the generally bland appearance of this moon, which hinders geological mapping.
Umbriel's heavily cratered surface has probably been stable since the Late Heavy Bombardment. The only signs of ancient internal activity are canyons and dark polygons—dark patches with complex shapes measuring from tens to hundreds of kilometers across. The polygons were identified from precise photometry of Voyager 2's images and are distributed more or less uniformly on the surface of Umbriel, trending northeast–southwest. Some polygons correspond to depressions a few kilometers deep and may have been created during an early episode of tectonic activity. Currently there is no explanation for why Umbriel is so dark and uniform in appearance. Its surface may be covered by a relatively thin layer of dark material (so-called umbral material) excavated by an impact or expelled in an explosive volcanic eruption. Alternatively, Umbriel's crust may be entirely composed of the dark material, which would have prevented the formation of bright features like crater rays. However, the presence of the bright feature within Wunda seems to contradict this hypothesis.
Origin and evolution
Umbriel is thought to have formed from an accretion disc or subnebula; a disc of gas and dust that either existed around Uranus for some time after its formation or was created by the giant impact that most likely gave Uranus its large obliquity. The precise composition of the subnebula is not known, but the higher density of Uranian moons compared to the moons of Saturn indicates that it may have been relatively water-poor. Significant amounts of nitrogen and carbon may have been present in the form of carbon monoxide (CO) and molecular nitrogen (N2) instead of ammonia and methane. The moons that formed in such a subnebula would contain less water ice (with CO and N2 trapped as clathrate) and more rock, explaining the higher density.
Umbriel's accretion probably lasted for several thousand years. The impacts that accompanied accretion caused heating of the moon's outer layer. The maximum temperature of around 180 K was reached at the depth of about 3 km. After the end of formation, the subsurface layer cooled, while the interior of Umbriel heated due to decay of radioactive elements in its rocks. The cooling near-surface layer contracted, while the interior expanded. This caused strong extensional stresses in the moon's crust, which may have led to cracking. This process probably lasted for about 200 million years, implying that any endogenous activity ceased billions of years ago.
The initial accretional heating together with continued decay of radioactive elements may have led to melting of the ice if an antifreeze like ammonia (in the form of ammonia hydrate) or some salt was present. The melting may have led to the separation of ice from rocks and formation of a rocky core surrounded by an icy mantle. A layer of liquid water (ocean) rich in dissolved ammonia may have formed at the core–mantle boundary. The eutectic temperature of this mixture is 176 K. The ocean is likely to have frozen long ago. Among Uranian moons Umbriel was least subjected to endogenic resurfacing processes, although it may, like other Uranian moons, have experienced a very early resurfacing event.
Exploration
The only close-up images of Umbriel are from the Voyager 2 probe, which photographed the moon during its flyby of Uranus in January 1986. Since Voyager 2 did not pass close to Umbriel, the best images of this moon have a spatial resolution of about 5.2 km. The images cover about 40% of the surface, but only 20% was photographed with the quality required for geological mapping. At the time of the flyby the southern hemisphere of Umbriel (like those of the other moons) was pointed towards the Sun, so the northern (dark) hemisphere could not be studied.
| Physical sciences | Solar System | Astronomy |
32133 | https://en.wikipedia.org/wiki/Urethra | Urethra | The urethra (pl.: urethras or urethrae) is the tube that connects the urinary bladder to the urinary meatus, through which placental mammals urinate and ejaculate. In non-mammalian vertebrates, the urethra also transports semen but is separate from the urinary tract.
The external urethral sphincter is a striated muscle that allows voluntary control over urination. The internal sphincter, formed by the involuntary smooth muscles lining the bladder neck and urethra, receives its nerve supply by the sympathetic division of the autonomic nervous system. The internal sphincter is present both in males and females.
Structure
The urethra is a fibrous and muscular tube which connects the urinary bladder to the external urethral meatus. Its length differs between the sexes, because it passes through the penis in males.
Male
In the human male, the urethra passes through the penis and opens at the external urethral meatus at its tip.
The urethra is divided into four parts in men, named after their location: the pre-prostatic (intramural), prostatic, membranous, and spongy (penile) urethra.
There is inadequate data for the typical length of the male urethra; however, a study of 109 men showed an average length of 22.3 cm (SD = 2.4 cm), ranging from 15 cm to 29 cm.
The urethra in male placental mammals is typically longer than in females.
Female
In the human female, the urethra is about 4 cm long, having 6 mm diameter, and exits the body between the clitoris and the vaginal opening, extending from the internal to the external urethral orifice. The meatus is located below the clitoris. It is placed behind the symphysis pubis, embedded in the anterior wall of the vagina, and its direction is obliquely downward and forward; it is slightly curved with the concavity directed forward. The proximal two-thirds of the urethra is lined by transitional epithelial cells, while the distal third is lined by stratified squamous epithelial cells.
Between the superior and inferior fascia of the urogenital diaphragm, the female urethra is surrounded by the urethral sphincter.
The urethra in female placental mammals is typically shorter than in the male.
Microanatomy
The cells lining the urethra (the epithelium) start off as transitional cells where it exits the bladder; these are variable layers of flat to cuboidal cells that change shape depending on whether they are compressed by the contents of the urethra. Further along the urethra there are pseudostratified columnar and stratified columnar epithelia. The lining becomes multiple layers of flat cells near the end of the urethra, the same as the external skin around it.
There are small mucus-secreting urethral glands, as well as the bulbourethral glands of Cowper, which secrete mucus that acts to lubricate the urethra.
The urethra consists of three coats: muscular, erectile, and mucous, the muscular layer being a continuation of that of the bladder.
Blood and nerve supply and lymphatics
Somatic (conscious) innervation of the external urethral sphincter is supplied by the pudendal nerve.
Development
In the developing embryo, at the hind end lies a cloaca. This, over the fourth to the seventh week, divides into a urogenital sinus and the beginnings of the anal canal, with a wall forming between these two inpouchings called the urorectal septum. The urogenital sinus divides into three parts, with the middle part forming the urethra; the upper part is largest and becomes the urinary bladder, and the lower part then changes depending on the biological sex of the embryo. The cells lining the urethra (the epithelium) come from endoderm, whereas the connective tissue and smooth muscle parts are derived from mesoderm.
After the third month, the urethra also contributes to the development of associated structures, depending on the biological sex of the embryo. In the male, the epithelium multiplies to form the prostate. In the female, the upper part of the urethra forms the urethra and the paraurethral glands.
Function
Urination
The urethra is the vessel through which urine passes after leaving the bladder. During urination, the smooth muscle lining the urethra relaxes in concert with bladder contraction(s) to forcefully expel the urine in a pressurized stream. Following this, the urethra re-establishes muscle tone by contracting the smooth muscle layer, and the bladder returns to a relaxed, quiescent state. Urethral smooth muscle cells are mechanically coupled to each other to coordinate mechanical force and electrical signaling in an organized, unitary fashion.
Ejaculation
The male urethra is the conduit for semen during orgasm. Urine is removed before ejaculation by pre-ejaculate fluid – called Cowper's fluid – from the bulbourethral gland.
Clinical significance
Infection of the urethra is urethritis, which often causes purulent urethral discharge. It is most often due to a sexually transmitted infection such as gonorrhoea or chlamydia, and less commonly due to other bacteria such as ureaplasma or mycoplasma; trichomonas vaginalis; or the viruses herpes simplex virus and adenovirus. Investigations such as a Gram stain of the discharge might reveal the cause; nucleic acid testing based on the first urine sample passed in a day, or a swab of the urethra sent for bacterial culture and sensitivity, may also be used. Treatment usually involves antibiotics that treat both gonorrhoea and chlamydia, as these often occur together. A person being treated for urethritis should not have sex until the infection is treated, so that they do not spread the infection to others. Because of this spread, which may occur during an incubation period before a person gets symptoms, there is often contact tracing so that sexual partners of an affected person can be found and treatment offered.
Cancer can also develop in the lining of the urethra. When cancer is present, the most common symptom in an affected person is blood in the urine; a physical medical examination may be otherwise normal, except in late disease. Cancer of the urethra is most often due to cancer of the cells lining the urethra, called transitional cell carcinoma, although it can more rarely occur as a squamous cell carcinoma if the type of cells lining the urethra have changed, such as due to a chronic schistosomiasis infection. Investigations performed usually include collecting a sample of urine for an inspection for malignant cells under a microscope, called cytology, as well as examination with a flexible camera through the urethra, called urethroscopy. If a malignancy is found, a biopsy will be taken, and a CT scan will be performed of other body parts (a CT scan of the chest, abdomen and pelvis) to look for additional lesions. After the cancer is staged, treatment may involve chemotherapy.
Injury
Passage of kidney stones through the urethra can be painful. Damage to the urethra, such as by kidney stones, chronic infection, cancer, or from catheterisation, can lead to narrowing, called a urethral stricture. The location and structure of the narrowing can be investigated with a medical imaging scan in which dye is injected through the urinary meatus into the urethra, called a retrograde urethrogram. Additional forms of imaging, such as ultrasound, computed tomography and magnetic resonance imaging may also be used to provide further details.
Injuries to the urethra can occur, for example, from a pelvic fracture.
Foreign bodies in the urethra are uncommon, but there have been medical case reports of self-inflicted injuries, a result of insertion of foreign bodies into the urethra such as an electrical wire.
Other
Hypospadias and epispadias are forms of abnormal development of the urethra in the male, where the meatus is not located at the distal end of the penis (it occurs lower than normal with hypospadias, and higher with epispadias). In a severe chordee, the urethra can develop between the penis and the scrotum.
Catheterisation
A tube called a catheter can be inserted through the urethra to drain urine from the bladder, called an indwelling urinary catheter; or, to bypass the urethra, a catheter may be directly inserted through the abdominal wall into the bladder, called a suprapubic catheter. This may be to relieve or bypass an obstruction, to monitor how much urine someone produces, or because a person has difficulty urinating, for example due to a neurological cause such as multiple sclerosis. Complications that are associated with catheter insertion can include catheter-associated infections, injury to the urethra or nearby structures, or pain.
Other animals
In all mammals, with the exception of monotremes, and in both sexes, the urethra serves primarily to drain and excrete urine, which in mammals, collects in the urinary bladder and is released from there into the urethra. In addition, the closing mechanisms of the urethra, together with immunoglobulins, largely prevent germs from penetrating the inside of the body. In marsupials, the female's urethra empties into the urogenital sinus.
History
The word "urethra" comes from the Ancient Greek οὐρήθρα – ourḗthrā. The stem "uro" relating to urination, with the structure described as early as the time of Hippocrates. Confusingly however, at the time it was called "ureter". Thereafter, terms "ureter" and "urethra" were variably used to refer to each other thereafter for more than a millennia. It was only in the 1550s that anatomists such as Bartolomeo Eustacchio and Jacques Dubois began to use the terms to specifically and consistently refer to what is in modern English called the ureter and the urethra. Following this, in the 19th and 20th centuries, multiple terms relating to the structures such as urethritis and urethrography, were coined.
Kidney stones have been identified and recorded for about as long as written historical records exist. The urinary tract, as well as its function of draining urine from the kidneys, was described by Galen in the second century AD. Surgery on the urethra to remove kidney stones has been described since at least the first century AD, by Aulus Cornelius Celsus.
Additional images
| Biology and health sciences | Urinary system | Biology |
32161 | https://en.wikipedia.org/wiki/Urinary%20tract%20infection | Urinary tract infection | A urinary tract infection (UTI) is an infection that affects a part of the urinary tract. Lower urinary tract infections may involve the bladder (cystitis) or urethra (urethritis) while upper urinary tract infections affect the kidney (pyelonephritis). Symptoms from a lower urinary tract infection include suprapubic pain, painful urination (dysuria), frequency and urgency of urination despite having an empty bladder. Symptoms of a kidney infection, on the other hand, are more systemic and include fever or flank pain usually in addition to the symptoms of a lower UTI. Rarely, the urine may appear bloody. Symptoms may be vague or non-specific at the extremes of age (i.e. in patients who are very young or old).
The most common cause of infection is Escherichia coli, though other bacteria or fungi may sometimes be the cause. Risk factors include female anatomy, sexual intercourse, diabetes, obesity, catheterisation, and family history. Although sexual intercourse is a risk factor, UTIs are not classified as sexually transmitted infections (STIs). Pyelonephritis usually occurs due to an ascending bladder infection but may also result from a blood-borne bacterial infection. Diagnosis in young healthy women can be based on symptoms alone. In those with vague symptoms, diagnosis can be difficult because bacteria may be present without there being an infection. In complicated cases or if treatment fails, a urine culture may be useful.
In uncomplicated cases, UTIs are treated with a short course of antibiotics such as nitrofurantoin or trimethoprim/sulfamethoxazole. Resistance to many of the antibiotics used to treat this condition is increasing. In complicated cases, a longer course or intravenous antibiotics may be needed. If symptoms do not improve in two or three days, further diagnostic testing may be needed. Phenazopyridine may help with symptoms. In those who have bacteria or white blood cells in their urine but have no symptoms, antibiotics are generally not needed, unless they are pregnant. In those with frequent infections, a short course of antibiotics may be taken as soon as symptoms begin or long-term antibiotics may be used as a preventive measure.
About 150 million people develop a urinary tract infection in a given year. They are more common in women than men, but similar between anatomies while carrying indwelling catheters. In women, they are the most common form of bacterial infection. Up to 10% of women have a urinary tract infection in a given year, and half of women have at least one infection at some point in their lifetime. They occur most frequently between the ages of 16 and 35 years. Recurrences are common. Urinary tract infections have been described since ancient times, with the first documented description in the Ebers Papyrus, dated to c. 1550 BC.
Signs and symptoms
Lower urinary tract infection is also referred to as a bladder infection. The most common symptoms are burning with urination and having to urinate frequently (or an urge to urinate) in the absence of vaginal discharge and significant pain. These symptoms may vary from mild to severe, and in healthy women last an average of six days. Some pain above the pubic bone or in the lower back may be present. People experiencing an upper urinary tract infection, or pyelonephritis, may experience flank pain, fever, or nausea and vomiting in addition to the classic symptoms of a lower urinary tract infection. Rarely, the urine may appear bloody or contain visible pus.
UTIs have been associated with onset or worsening of delirium, dementia, and neuropsychiatric disorders such as depression and psychosis. However, there is insufficient evidence to determine whether UTI causes confusion. The reasons for this are unknown, but may involve a UTI-mediated systemic inflammatory response which affects the brain. Cytokines such as interleukin-6 produced as part of the inflammatory response may produce neuroinflammation, in turn affecting dopaminergic and/or glutamatergic neurotransmission as well as brain glucose metabolism.
Children
In young children, the only symptom of a urinary tract infection (UTI) may be a fever. Because of the lack of more obvious symptoms, when females under the age of two or uncircumcised males less than a year exhibit a fever, a culture of the urine is recommended by many medical associations. Infants may feed poorly, vomit, sleep more, or show signs of jaundice. In older children, new onset urinary incontinence (loss of bladder control) may occur. About 1 in 400 infants of one to three months of age with a UTI also have bacterial meningitis.
Elderly
Urinary tract symptoms are frequently lacking in the elderly. The presentations may be vague and include incontinence, a change in mental status, or fatigue as the only symptoms, while some present to a health care provider with sepsis, an infection of the blood, as the first symptoms. Diagnosis can be complicated by the fact that many elderly people have preexisting incontinence or dementia.
It is reasonable to obtain a urine culture in those with signs of systemic infection who may be unable to report urinary symptoms, such as when advanced dementia is present. Systemic signs of infection include a fever or an increase in temperature above what is usual for the person, chills, and an increased white blood cell count.
Cause
Uropathogenic E. coli from the gut is the cause of 80–85% of community-acquired urinary tract infections, with Staphylococcus saprophyticus being the cause in 5–10%. Rarely they may be due to viral or fungal infections. Healthcare-associated urinary tract infections (mostly related to urinary catheterization) involve a much broader range of pathogens, including E. coli (27%), Klebsiella (11%), Pseudomonas (11%), the fungal pathogen Candida albicans (9%), and Enterococcus (7%), among others. In recent years in intensive care, Enterococcus spp. have several times been found to be the primary cause of urinary tract infection, suggested to be related to broad treatment with cephalosporin antibiotics, to which they are tolerant. Urinary tract infections due to Staphylococcus aureus typically occur secondary to blood-borne infections. Chlamydia trachomatis and Mycoplasma genitalium can infect the urethra but not the bladder. These infections are usually classified as urethritis rather than urinary tract infection.
Intercourse
In young sexually active women, sexual activity is the cause of 75–90% of bladder infections, with the risk of infection related to the frequency of sex. The term "honeymoon cystitis" has been applied to this phenomenon of frequent UTIs during early marriage. In post-menopausal women, sexual activity does not affect the risk of developing a UTI. Spermicide use, independent of sexual frequency, increases the risk of UTIs. Diaphragm use is also associated. Condom use without spermicide or use of birth control pills does not increase the risk of uncomplicated urinary tract infection.
Sex
Women are more prone to UTIs than men because, in females, the urethra is much shorter and closer to the anus. As a woman's estrogen levels decrease with menopause, her risk of urinary tract infections increases due to the loss of protective vaginal flora. Additionally, vaginal atrophy that can sometimes occur after menopause is associated with recurrent urinary tract infections.
Chronic prostatitis in the forms of chronic prostatitis/chronic pelvic pain syndrome and chronic bacterial prostatitis (not acute bacterial prostatitis or asymptomatic inflammatory prostatitis) may cause recurrent urinary tract infections in males. The risk of infection increases as males age. While bacteria are commonly present in the urine of older males, this does not appear to affect the risk of urinary tract infections.
Urinary catheters
Urinary catheterization increases the risk for urinary tract infections. The risk of bacteriuria (bacteria in the urine) is between three and six percent per day, and prophylactic antibiotics are not effective in decreasing symptomatic infections. The risk of an associated infection accrues roughly linearly over time for enteric bacteria, and can be decreased by catheterizing only when necessary, using aseptic technique for insertion, and maintaining unobstructed closed drainage of the catheter.
Male scuba divers using condom catheters and female divers using external catching devices for their dry suits are also susceptible to urinary tract infections.
Others
A predisposition for bladder infections may run in families. This is believed to be related to genetics. Other risk factors include diabetes, being uncircumcised, and having a large prostate. In children UTIs are associated with vesicoureteral reflux (an abnormal movement of urine from the bladder into ureters or kidneys) and constipation.
Persons with spinal cord injury are at increased risk for urinary tract infection, in part because of chronic catheter use and in part because of voiding dysfunction. It is the most common cause of infection in this population, as well as the most common cause of hospitalization.
Pathogenesis
The bacteria that cause urinary tract infections typically enter the bladder via the urethra. However, infection may also occur via the blood or lymph. It is believed that the bacteria are usually transmitted to the urethra from the bowel, with females at greater risk due to their anatomy. After gaining entry to the bladder, E. coli are able to attach to the bladder wall and form a biofilm that resists the body's immune response.
Escherichia coli is the single most common microorganism causing urinary tract infection, followed by Klebsiella and Proteus spp. Klebsiella and Proteus spp. are frequently associated with stone disease. The presence of Gram-positive bacteria such as Enterococcus and Staphylococcus is increased.
The increased resistance of urinary pathogens to quinolone antibiotics has been reported worldwide and might be the consequence of overuse and misuse of quinolones.
Diagnosis
In straightforward cases, a diagnosis may be made and treatment given based on symptoms alone without further laboratory confirmation. In complicated or questionable cases, it may be useful to confirm the diagnosis via urinalysis, looking for the presence of urinary nitrites, white blood cells (leukocytes), or leukocyte esterase. Another test, urine microscopy, looks for the presence of red blood cells, white blood cells, or bacteria. Urine culture is deemed positive if it shows a bacterial colony count of greater than or equal to 10³ colony-forming units per mL of a typical urinary tract organism. Antibiotic sensitivity can also be tested with these cultures, making them useful in the selection of antibiotic treatment. However, women with negative cultures may still improve with antibiotic treatment. As symptoms can be vague and without reliable tests for urinary tract infections, diagnosis can be difficult in the elderly.
Based on pH
Normal urine pH is slightly acidic, with usual values of 6.0 to 7.5, but the normal range is 4.5 to 8.0. A urine pH of 8.5 or 9.0 is indicative of a urea-splitting organism, such as Proteus, Klebsiella, or Ureaplasma urealyticum; therefore, such a high pH suggests a UTI even in an otherwise asymptomatic patient, regardless of the other urine test results. Alkaline pH can also signify struvite kidney stones, also known as "infection stones".
Classification
A urinary tract infection may involve only the lower urinary tract, in which case it is known as a bladder infection. Alternatively, it may involve the upper urinary tract, in which case it is known as pyelonephritis. If the urine contains significant bacteria but there are no symptoms, the condition is known as asymptomatic bacteriuria. If a urinary tract infection involves the upper tract, and the person has diabetes mellitus, is pregnant, is male, or is immunocompromised, it is considered complicated. Otherwise, if a woman is healthy and premenopausal, it is considered uncomplicated. In children, when a urinary tract infection is associated with a fever, it is deemed to be an upper urinary tract infection.
Children
To make the diagnosis of a urinary tract infection in children, a positive urine culture is required. Contamination poses a frequent challenge depending on the collection method used; thus a cutoff of 10⁵ CFU/mL is used for a "clean-catch" midstream sample, 10⁴ CFU/mL for catheter-obtained specimens, and 10² CFU/mL for suprapubic aspirations (a sample drawn directly from the bladder with a needle). The use of "urine bags" to collect samples is discouraged by the World Health Organization due to the high rate of contamination when cultured, and catheterization is preferred in those not toilet trained. Some, such as the American Academy of Pediatrics, recommend renal ultrasound and voiding cystourethrogram (watching a person's urethra and urinary bladder with real-time x-rays while they urinate) in all children less than two years old who have had a urinary tract infection. However, because there is a lack of effective treatment if problems are found, others, such as the National Institute for Health and Care Excellence, recommend routine imaging only in those less than six months old or who have unusual findings.
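The collection-method-dependent cutoffs above amount to a simple decision rule. The following minimal sketch encodes only the thresholds stated in this section; the names are illustrative and it is in no way a clinical tool:

```python
# Colony-count cutoffs (CFU/mL) for a positive pediatric urine culture,
# by collection method, as described above.
CULTURE_CUTOFFS_CFU_PER_ML = {
    "clean_catch": 10**5,   # mid-stream "clean catch" sample
    "catheter":    10**4,   # catheter-obtained specimen
    "suprapubic":  10**2,   # suprapubic aspiration (needle draw from bladder)
}

def culture_positive(method: str, cfu_per_ml: float) -> bool:
    """Return True if the colony count meets the cutoff for this method."""
    return cfu_per_ml >= CULTURE_CUTOFFS_CFU_PER_ML[method]

print(culture_positive("catheter", 5e4))     # True: 50,000 >= 10,000
print(culture_positive("clean_catch", 5e4))  # False: 50,000 < 100,000
```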
Differential diagnosis
In women with cervicitis (inflammation of the cervix) or vaginitis (inflammation of the vagina), and in young men with UTI symptoms, a Chlamydia trachomatis or Neisseria gonorrhoeae infection may be the cause. These infections are typically classified as urethritis rather than urinary tract infection. Vaginitis may also be due to a yeast infection. Interstitial cystitis (chronic pain in the bladder) may be considered for people who experience multiple episodes of UTI symptoms but whose urine cultures remain negative and do not improve with antibiotics. Prostatitis (inflammation of the prostate) may also be considered in the differential diagnosis.
Hemorrhagic cystitis, characterized by blood in the urine, can occur secondary to a number of causes, including infections, radiation therapy, underlying cancer, medications, and toxins. Medications that commonly cause this problem include the chemotherapeutic agent cyclophosphamide, with rates of 2–40%. Eosinophilic cystitis is a rare condition in which eosinophils are present in the bladder wall. Its signs and symptoms are similar to those of a bladder infection. Its cause is not entirely clear; however, it may be linked to food allergies, infections, and medications, among others.
Prevention
A number of measures have not been confirmed to affect UTI frequency including: urinating immediately after intercourse, the type of underwear used, personal hygiene methods used after urinating or defecating, or whether a person typically bathes or showers. There is similarly a lack of evidence surrounding the effect of holding one's urine, tampon use, and douching. In those with frequent urinary tract infections who use spermicide or a diaphragm as a method of contraception, they are advised to use alternative methods. In those with benign prostatic hyperplasia urinating in a sitting position appears to improve bladder emptying which might decrease urinary tract infections in this group.
Using urinary catheters as little and for as short a time as possible, and appropriate care of the catheter when used, prevents catheter-associated urinary tract infections. Catheters should be inserted using sterile technique in hospital; however, non-sterile technique may be appropriate in those who self-catheterize. The urinary catheter setup should also be kept sealed. Evidence does not support a significant decrease in risk when silver-alloy catheters are used.
Medications
For those with recurrent infections, taking a short course of antibiotics when each infection occurs is associated with the lowest antibiotic use. A prolonged course of daily antibiotics is also effective. Medications frequently used include nitrofurantoin and trimethoprim/sulfamethoxazole. Some recommend against prolonged use due to concerns of antibiotic resistance. Methenamine is another agent used for this purpose: in the acidic environment of the bladder it produces formaldehyde, to which resistance does not develop. A UK study showed that methenamine is as effective as daily low-dose antibiotics at preventing UTIs among women who experience recurrent UTIs. As methenamine is an antiseptic, it may avoid the issue of antibiotic resistance.
In cases where infections are related to intercourse, taking antibiotics afterwards may be useful. In post-menopausal women, topical vaginal estrogen has been found to reduce recurrence. As opposed to topical creams, the use of vaginal estrogen from pessaries has not been as useful as low-dose antibiotics. Antibiotics following short-term urinary catheterization decrease the subsequent risk of a bladder infection. A number of UTI vaccines were in development as of 2018.
Children
The evidence that preventive antibiotics decrease urinary tract infections in children is poor. However, recurrent UTIs are a rare cause of further kidney problems if there are no underlying abnormalities of the kidneys, accounting for less than a third of a percent (0.33%) of chronic kidney disease in adults.
Male circumcision
Circumcision of boys has been observed to exhibit a strong protective effect against UTIs, with some research suggesting as much as a 90% reduction in symptomatic UTI incidence among circumcised male infants. The protective effect is even stronger in boys born with urogenital abnormalities.
Dietary supplements
When used as an adjuvant to antibiotics and other standard treatments, cranberry supplements decrease the number of UTIs in people who get them frequently. A 2023 review concluded that cranberry products can reduce the risk of UTIs in certain groups (women with recurrent UTIs, children, and people who have had clinical interventions), but not in pregnant women, the elderly, or people with urination disorders. Some evidence suggests that cranberry juice is more effective at UTI control than dehydrated tablets or capsules. Cranberry has not been effective in attempts to replace antibiotics for the treatment of active infections. Cranberry supplements are also high in sugar content, which may worsen the risks associated with UTIs in patients with diabetes mellitus.
As of 2015, probiotics require further study to determine if they are beneficial for UTI.
Treatment
The mainstay of treatment is antibiotics. Phenazopyridine is occasionally prescribed during the first few days in addition to antibiotics to help with the burning and urgency sometimes felt during a bladder infection. However, it is not routinely recommended due to safety concerns with its use, specifically an elevated risk of methemoglobinemia (higher than normal level of methemoglobin in the blood). Paracetamol may be used for fevers. There is no good evidence for the use of cranberry products for treating current infections.
Fosfomycin can be used as an effective treatment for both UTIs and complicated UTIs, including acute pyelonephritis. The standard regimen for complicated UTIs is an oral 3 g dose administered once every 48 or 72 hours for a total of 3 doses, or 6 g every 8 hours for 7 to 14 days when fosfomycin is given in IV form.
Uncomplicated
Uncomplicated infections can be diagnosed and treated based on symptoms alone. Antibiotics taken by mouth such as trimethoprim/sulfamethoxazole, nitrofurantoin, or fosfomycin are typically first line. Cephalosporins, amoxicillin/clavulanic acid, or a fluoroquinolone may also be used. However, antibiotic resistance to fluoroquinolones among the bacteria that cause urinary infections has been increasing. The Food and Drug Administration (FDA) recommends against the use of fluoroquinolones, including a boxed warning, when other options are available, due to higher risks of serious side effects such as tendinitis, tendon rupture, and worsening of myasthenia gravis. These medications substantially shorten the time to recovery, with all being equally effective. A three-day treatment with trimethoprim/sulfamethoxazole or a fluoroquinolone is usually sufficient, whereas nitrofurantoin requires 5–7 days. Fosfomycin may be used as a single dose but is not as effective.
Fluoroquinolones are not recommended as a first treatment. The Infectious Diseases Society of America states this due to the concern of generating resistance to this class of medication. Amoxicillin-clavulanate appears less effective than other options. Despite this precaution, some resistance has developed to all of these medications related to their widespread use. Trimethoprim alone is deemed to be equivalent to trimethoprim/sulfamethoxazole in some countries. For simple UTIs, children often respond to a three-day course of antibiotics. Women with recurrent simple UTIs are over 90% accurate in identifying new infections. They may benefit from self-treatment upon occurrence of symptoms with medical follow-up only if the initial treatment fails.
The combination sulopenem etzadroxil/probenecid (Orlynvah) was approved for medical use in the United States in October 2024.
Complicated
Complicated UTIs are more difficult to treat and usually require more aggressive evaluation, treatment, and follow-up. They may require identifying and addressing the underlying complication. Increasing antibiotic resistance is causing concern about the future of treating those with complicated and recurrent UTIs.
Asymptomatic bacteriuria
Those who have bacteria in the urine but no symptoms should not generally be treated with antibiotics. This includes those who are old, those with spinal cord injuries, and those who have urinary catheters. Pregnancy is an exception, and it is recommended that women take seven days of antibiotics. If not treated, it causes up to 30% of mothers to develop pyelonephritis and increases the risk of low birth weight and preterm birth. Some also support treatment of those with diabetes mellitus and treatment before urinary tract procedures which will likely cause bleeding.
Pregnant women
Urinary tract infections, even asymptomatic presence of bacteria in the urine, are more concerning in pregnancy due to the increased risk of kidney infections. During pregnancy, high progesterone levels elevate the risk of decreased muscle tone of the ureters and bladder, which leads to a greater likelihood of reflux, where urine flows back up the ureters and towards the kidneys. While pregnant women do not have an increased risk of asymptomatic bacteriuria, if bacteriuria is present they do have a 25–40% risk of a kidney infection. Thus if urine testing shows signs of an infection—even in the absence of symptoms—treatment is recommended. Cephalexin or nitrofurantoin are typically used because they are generally considered safe in pregnancy. A kidney infection during pregnancy may result in preterm birth or pre-eclampsia (a state of high blood pressure and kidney dysfunction during pregnancy that can lead to seizures). Some women have UTIs that keep coming back in pregnancy. There is insufficient research on how to best treat these recurrent infections.
Pyelonephritis
Pyelonephritis is treated more aggressively than a simple bladder infection using either a longer course of oral antibiotics or intravenous antibiotics. Seven days of the oral fluoroquinolone ciprofloxacin is typically used in areas where the resistance rate is less than 10%. If the local antibiotic resistance rates are greater than 10%, a dose of intravenous ceftriaxone is often prescribed. Trimethoprim/sulfamethoxazole or amoxicillin/clavulanate orally for 14 days is another reasonable option. In those who exhibit more severe symptoms, admission to a hospital for ongoing antibiotics may be needed. Complications such as ureteral obstruction from a kidney stone may be considered if symptoms do not improve following two or three days of treatment.
Prognosis
With treatment, symptoms generally improve within 36 hours. Up to 42% of uncomplicated infections may resolve on their own within a few days or weeks.
15–25% of adults and children have chronic symptomatic UTIs, including recurrent infections, persistent infections (infection with the same pathogen), re-infections (a new pathogen), or relapsed infections (the same pathogen causes a new infection after it was completely cleared). Recurrent urinary tract infections, defined as at least two infections (episodes) in a six-month period or three infections in twelve months, can occur in adults and in children.
Cystitis refers to a urinary tract infection that involves the lower urinary tract (bladder). An upper urinary tract infection, which involves the kidney, is called pyelonephritis. About 10–20% of pyelonephritis cases result in scarring of the affected kidney, and 10–20% of those who develop scarring have an increased risk of hypertension in later life.
Epidemiology
Urinary tract infections are the most frequent bacterial infection in women. They occur most frequently between the ages of 16 and 35 years, with 10% of women getting an infection yearly and 40–60% having an infection at some point in their lives. Recurrences are common, with nearly half of people getting a second infection within a year. Urinary tract infections occur four times more frequently in females than males. Pyelonephritis occurs between 20 and 30 times less frequently. They are the most common cause of hospital-acquired infections, accounting for approximately 40%. Rates of asymptomatic bacteria in the urine increase with age, from two to seven percent in women of child-bearing age to as high as 50% in elderly women in care homes. Rates of asymptomatic bacteria in the urine among men over 75 are 7–10%. 2–10% of pregnant women have asymptomatic bacteria in the urine, with higher rates reported in women living in some underdeveloped countries.
Urinary tract infections may affect 10% of people during childhood. Among children, urinary tract infections are most common in uncircumcised males less than three months of age, followed by females less than one year. Estimates of frequency among children, however, vary widely. In a group of children with a fever, ranging in age between birth and two years, 2–20% were diagnosed with a UTI.
History
Urinary tract infections have been described since ancient times with the first documented description in the Ebers Papyrus dated to c. 1550 BC. It was described by the Egyptians as "sending forth heat from the bladder". Effective treatment did not occur until the development and availability of antibiotics in the 1930s, before which time herbs, bloodletting and rest were recommended.
Ubiquitin
Ubiquitin is a small (8.6 kDa) regulatory protein found in most tissues of eukaryotic organisms, i.e., it is found ubiquitously. It was discovered in 1975 by Gideon Goldstein and further characterized throughout the late 1970s and 1980s. Four genes in the human genome code for ubiquitin: UBB, UBC, UBA52 and RPS27A.
The addition of ubiquitin to a substrate protein is called ubiquitylation (or ubiquitination or ubiquitinylation). Ubiquitylation affects proteins in many ways: it can mark them for degradation via the proteasome, alter their cellular location, affect their activity, and promote or prevent protein interactions. Ubiquitylation involves three main steps: activation, conjugation, and ligation, performed by ubiquitin-activating enzymes (E1s), ubiquitin-conjugating enzymes (E2s), and ubiquitin ligases (E3s), respectively. The result of this sequential cascade is to bind ubiquitin to lysine residues on the protein substrate via an isopeptide bond, cysteine residues through a thioester bond; serine, threonine, and tyrosine residues through an ester bond; or the amino group of the protein's N-terminus via a peptide bond.
The protein modifications can be either a single ubiquitin protein (monoubiquitylation) or a chain of ubiquitin (polyubiquitylation). Secondary ubiquitin molecules are always linked to one of the seven lysine residues or the N-terminal methionine of the previous ubiquitin molecule. These 'linking' residues are represented by a "K" or "M" (the one-letter amino acid notation of lysine and methionine, respectively) and a number, referring to its position in the ubiquitin molecule as in K48, K29 or M1. The first ubiquitin molecule is covalently bound through its C-terminal carboxylate group to a particular lysine, cysteine, serine, threonine or N-terminus of the target protein. Polyubiquitylation occurs when the C-terminus of another ubiquitin is linked to one of the seven lysine residues or the first methionine on the previously added ubiquitin molecule, creating a chain. This process repeats several times, leading to the addition of several ubiquitins. Only polyubiquitylation on defined lysines, mostly on K48 and K29, is related to degradation by the proteasome (referred to as the "molecular kiss of death"), while other polyubiquitylations (e.g. on K63, K11, K6 and M1) and monoubiquitylations may regulate processes such as endocytic trafficking, inflammation, translation and DNA repair.
The discovery that ubiquitin chains target proteins to the proteasome, which degrades and recycles proteins, was honored with the Nobel Prize in Chemistry in 2004.
Identification
Ubiquitin (originally, ubiquitous immunopoietic polypeptide) was first identified in 1975 as an 8.6 kDa protein expressed in all eukaryotic cells. The basic functions of ubiquitin and the components of the ubiquitylation pathway were elucidated in the early 1980s at the Technion by Aaron Ciechanover, Avram Hershko, and Irwin Rose for which the Nobel Prize in Chemistry was awarded in 2004.
The ubiquitylation system was initially characterised as an ATP-dependent proteolytic system present in cellular extracts. A heat-stable polypeptide present in these extracts, ATP-dependent proteolysis factor 1 (APF-1), was found to become covalently attached to the model protein substrate lysozyme in an ATP- and Mg2+-dependent process. Multiple APF-1 molecules were linked to a single substrate molecule by an isopeptide linkage, and conjugates were found to be rapidly degraded with the release of free APF-1. Soon after APF-1-protein conjugation was characterised, APF-1 was identified as ubiquitin. The carboxyl group of the C-terminal glycine residue of ubiquitin (Gly76) was identified as the moiety conjugated to substrate lysine residues.
The protein
Ubiquitin is a small protein that exists in all eukaryotic cells. It performs its myriad functions through conjugation to a large range of target proteins. A variety of different modifications can occur. The ubiquitin protein itself consists of 76 amino acids and has a molecular mass of about 8.6 kDa. Key features include its C-terminal tail and the 7 lysine residues. It is highly conserved throughout eukaryote evolution; human and yeast ubiquitin share 96% sequence identity.
Genes
Ubiquitin is encoded in mammals by four different genes. UBA52 and RPS27A genes code for a single copy of ubiquitin fused to the ribosomal proteins L40 and S27a, respectively. The UBB and UBC genes code for polyubiquitin precursor proteins.
Ubiquitylation
Ubiquitylation (also known as ubiquitination or ubiquitinylation) is an enzymatic post-translational modification in which a ubiquitin protein is attached to a substrate protein. This process most commonly binds the last amino acid of ubiquitin (glycine 76) to a lysine residue on the substrate. An isopeptide bond is formed between the carboxyl group (COO−) of the ubiquitin's glycine and the epsilon-amino group (ε-NH2) of the substrate's lysine. Trypsin cleavage of a ubiquitin-conjugated substrate leaves a di-glycine "remnant" that is used to identify the site of ubiquitylation. Ubiquitin can also be bound to other sites in a protein which are electron-rich nucleophiles, termed "non-canonical ubiquitylation". This was first observed with the amine group of a protein's N-terminus being used for ubiquitylation, rather than a lysine residue, in the protein MyoD, and has been observed since in 22 other proteins in multiple species, including ubiquitin itself. There is also increasing evidence for nonlysine residues as ubiquitylation targets using non-amine groups, such as the sulfhydryl group on cysteine, and the hydroxyl group on threonine and serine. The end result of this process is the addition of one ubiquitin molecule (monoubiquitylation) or a chain of ubiquitin molecules (polyubiquitylation) to the substrate protein.
Ubiquitination requires three types of enzyme: ubiquitin-activating enzymes, ubiquitin-conjugating enzymes, and ubiquitin ligases, known as E1s, E2s, and E3s, respectively. The process consists of three main steps:
Activation: Ubiquitin is activated in a two-step reaction by an E1 ubiquitin-activating enzyme, which is dependent on ATP. The initial step involves production of a ubiquitin-adenylate intermediate. The E1 binds both ATP and ubiquitin and catalyses the acyl-adenylation of the C-terminus of the ubiquitin molecule. The second step transfers ubiquitin to an active site cysteine residue, with release of AMP. This step results in a thioester linkage between the C-terminal carboxyl group of ubiquitin and the E1 cysteine sulfhydryl group. The human genome contains two genes that produce enzymes capable of activating ubiquitin: UBA1 and UBA6.
Conjugation: E2 ubiquitin-conjugating enzymes catalyse the transfer of ubiquitin from E1 to the active site cysteine of the E2 via a trans(thio)esterification reaction. In order to perform this reaction, the E2 binds to both activated ubiquitin and the E1 enzyme. Humans possess 35 different E2 enzymes, whereas other eukaryotic organisms have between 16 and 35. They are characterised by their highly conserved structure, known as the ubiquitin-conjugating catalytic (UBC) fold.
Ligation: E3 ubiquitin ligases catalyse the final step of the ubiquitylation cascade. Most commonly, they create an isopeptide bond between a lysine of the target protein and the C-terminal glycine of ubiquitin. In general, this step requires the activity of one of the hundreds of E3s. E3 enzymes function as the substrate recognition modules of the system and are capable of interaction with both E2 and substrate. Some E3 enzymes also activate the E2 enzymes. E3 enzymes possess one of two domains: the homologous to the E6-AP carboxyl terminus (HECT) domain and the really interesting new gene (RING) domain (or the closely related U-box domain). HECT domain E3s transiently bind ubiquitin in this process (an obligate thioester intermediate is formed with the active-site cysteine of the E3), whereas RING domain E3s catalyse the direct transfer from the E2 enzyme to the substrate. The anaphase-promoting complex (APC) and the SCF complex (for Skp1-Cullin-F-box protein complex) are two examples of multi-subunit E3s involved in recognition and ubiquitylation of specific target proteins for degradation by the proteasome.
In the ubiquitylation cascade, E1 can bind with many E2s, which can bind with hundreds of E3s in a hierarchical way. Having levels within the cascade allows tight regulation of the ubiquitylation machinery. Other ubiquitin-like proteins (UBLs) are also modified via the E1–E2–E3 cascade, although variations in these systems do exist.
E4 enzymes, or ubiquitin-chain elongation factors, are capable of adding pre-formed polyubiquitin chains to substrate proteins. For example, multiple monoubiquitylation of the tumor suppressor p53 by Mdm2 can be followed by addition of a polyubiquitin chain using p300 and CBP.
Types
Ubiquitylation affects cellular processes by regulating the degradation of proteins (via the proteasome and lysosome), coordinating the cellular localization of proteins, activating and inactivating proteins, and modulating protein–protein interactions. These effects are mediated by different types of substrate ubiquitylation, for example the addition of a single ubiquitin molecule (monoubiquitylation) or different types of ubiquitin chains (polyubiquitylation).
Monoubiquitylation
Monoubiquitylation is the addition of one ubiquitin molecule to one substrate protein residue. Multi-monoubiquitylation is the addition of one ubiquitin molecule to multiple substrate residues. The monoubiquitylation of a protein can have different effects to the polyubiquitylation of the same protein. The addition of a single ubiquitin molecule is thought to be required prior to the formation of polyubiquitin chains. Monoubiquitylation affects cellular processes such as membrane trafficking, endocytosis and viral budding.
Polyubiquitin chains
Polyubiquitylation is the formation of a ubiquitin chain on a single lysine residue on the substrate protein. Following addition of a single ubiquitin moiety to a protein substrate, further ubiquitin molecules can be added to the first, yielding a polyubiquitin chain. These chains are made by linking the glycine residue of a ubiquitin molecule to a lysine of ubiquitin bound to a substrate. Ubiquitin has seven lysine residues and an N-terminus that serve as points of ubiquitylation: K6, K11, K27, K29, K33, K48, K63 and M1. Lysine 48-linked chains were the first identified and are the best-characterised type of ubiquitin chain. K63 chains have also been well-characterised, whereas the function of other lysine chains, mixed chains, branched chains, M1-linked linear chains, and heterologous chains (mixtures of ubiquitin and other ubiquitin-like proteins) remains more unclear.
Lysine 48-linked polyubiquitin chains target proteins for destruction by a process known as proteolysis. Multi-ubiquitin chains at least four ubiquitin molecules long must be attached to a lysine residue on the condemned protein in order for it to be recognised by the 26S proteasome. This is a barrel-shaped structure comprising a central proteolytic core made of four ring structures, flanked by two cylinders that selectively allow entry of ubiquitylated proteins. Once inside, the proteins are rapidly degraded into small peptides (usually 3–25 amino acid residues in length). Ubiquitin molecules are cleaved off the protein immediately prior to destruction and are recycled for further use. Although the majority of protein substrates are ubiquitylated, there are examples of non-ubiquitylated proteins targeted to the proteasome. The polyubiquitin chains are recognised by a subunit of the proteasome: S5a/Rpn10. This is achieved by a ubiquitin-interacting motif (UIM) found in a hydrophobic patch in the C-terminal region of the S5a/Rpn10 unit.
Lysine 63-linked chains are not associated with proteasomal degradation of the substrate protein. Instead, they allow the coordination of other processes such as endocytic trafficking, inflammation, translation, and DNA repair. In cells, lysine 63-linked chains are bound by the ESCRT-0 complex, which prevents their binding to the proteasome. This complex contains two proteins, Hrs and STAM1, that contain a UIM, which allows it to bind to lysine 63-linked chains.
Methionine 1-linked (or linear) polyubiquitin chains are another type of non-degradative ubiquitin chains. In this case, ubiquitin is linked in a head-to-tail manner, meaning that the C-terminus of the last ubiquitin molecule binds directly to the N-terminus of the next one. Although initially believed to target proteins for proteasomal degradation, linear ubiquitin later proved to be indispensable for NF-kB signaling. Currently, there is only one known E3 ubiquitin ligase generating M1-linked polyubiquitin chains - linear ubiquitin chain assembly complex (LUBAC).
Less is understood about atypical (non-lysine 48-linked) ubiquitin chains but research is starting to suggest roles for these chains. There is evidence that atypical chains linked by lysine 6, 11, 27, 29 and methionine 1 can induce proteasomal degradation.
Branched ubiquitin chains containing multiple linkage types can be formed. The function of these chains is unknown.
Structure
Differently linked chains have specific effects on the protein to which they are attached, caused by differences in the conformations of the protein chains. K29-, K33-, K63- and M1-linked chains have a fairly linear conformation; they are known as open-conformation chains. K6-, K11-, and K48-linked chains form closed conformations. The ubiquitin molecules in open-conformation chains do not interact with each other, except for the covalent isopeptide bonds linking them together. In contrast, the closed conformation chains have interfaces with interacting residues. Altering the chain conformations exposes and conceals different parts of the ubiquitin protein, and the different linkages are recognized by proteins that are specific for the unique topologies that are intrinsic to the linkage. Proteins can specifically bind to ubiquitin via ubiquitin-binding domains (UBDs). The distances between individual ubiquitin units in chains differ between lysine 63- and 48-linked chains. The UBDs exploit this by having small spacers between ubiquitin-interacting motifs that bind lysine 48-linked chains (compact ubiquitin chains) and larger spacers for lysine 63-linked chains. The machinery involved in recognising polyubiquitin chains can also differentiate between K63-linked chains and M1-linked chains, demonstrated by the fact that the latter can induce proteasomal degradation of the substrate.
Function
The ubiquitylation system functions in a wide variety of cellular processes, including:
Antigen processing
Apoptosis
Biogenesis of organelles
Cell cycle and division
DNA transcription and repair
Differentiation and development
Immune response and inflammation
Neural and muscular degeneration
Maintenance of pluripotency
Morphogenesis of neural networks
Modulation of cell surface receptors, ion channels and the secretory pathway
Response to stress and extracellular modulators
Ribosome biogenesis
Viral infection
Membrane proteins
Multi-monoubiquitylation can mark transmembrane proteins (for example, receptors) for removal from membranes (internalisation) and fulfil several signalling roles within the cell. When cell-surface transmembrane molecules are tagged with ubiquitin, the subcellular localization of the protein is altered, often targeting the protein for destruction in lysosomes. This serves as a negative feedback mechanism, because the stimulation of receptors by ligands often increases their rate of ubiquitylation and internalisation. Like monoubiquitylation, lysine 63-linked polyubiquitin chains also have a role in the trafficking of some membrane proteins.
Genomic maintenance
Proliferating cell nuclear antigen (PCNA) is a protein involved in DNA synthesis. Under normal physiological conditions PCNA is sumoylated (a similar post-translational modification to ubiquitylation). When DNA is damaged by ultraviolet radiation or chemicals, the SUMO molecule attached to a lysine residue is replaced by ubiquitin. Monoubiquitylated PCNA recruits polymerases that can carry out DNA synthesis with damaged DNA; this is very error-prone, possibly resulting in the synthesis of mutated DNA. Lysine 63-linked polyubiquitylation of PCNA allows it to perform a less error-prone damage bypass via the template switching pathway.
Ubiquitylation of histone H2AX is involved in DNA damage recognition of DNA double-strand breaks. Lysine 63-linked polyubiquitin chains are formed on H2AX histone by the E2/E3 ligase pair, Ubc13-Mms2/RNF168. This K63 chain appears to recruit RAP80, which contains a UIM, and RAP80 then helps localize BRCA1. This pathway will eventually recruit the necessary proteins for homologous recombination repair.
Transcriptional regulation
Histones can be ubiquitinated, usually in the form of monoubiquitylation, although polyubiquitylated forms do occur. Histone ubiquitylation alters chromatin structure and allows the access of enzymes involved in transcription. Ubiquitin on histones also acts as a binding site for proteins that either activate or inhibit transcription and also can induce further post-translational modifications of the protein. These effects can all modulate the transcription of genes.
Deubiquitination
Deubiquitinating enzymes (deubiquitinases; DUBs) oppose the role of ubiquitylation by removing ubiquitin from substrate proteins. They are cysteine proteases that cleave the amide bond between the two proteins. They are highly specific, as are the E3 ligases that attach the ubiquitin, with only a few substrates per enzyme. They can cleave both isopeptide (between ubiquitin and lysine) and peptide bonds (between ubiquitin and the N-terminus).
In addition to removing ubiquitin from substrate proteins, DUBs have many other roles within the cell. Ubiquitin is either expressed as multiple copies joined in a chain (polyubiquitin) or attached to ribosomal subunits. DUBs cleave these proteins to produce active ubiquitin. They also recycle ubiquitin that has been bound to small nucleophilic molecules during the ubiquitylation process. Monoubiquitin is formed by DUBs that cleave ubiquitin from free polyubiquitin chains that have been previously removed from proteins.
Ubiquitin-binding domains
Ubiquitin-binding domains (UBDs) are modular protein domains that bind non-covalently to ubiquitin; these motifs control various cellular events. Detailed molecular structures are known for a number of UBDs; their binding specificity determines their mechanism of action and regulation, and how they regulate cellular proteins and processes.
Disease associations
Pathogenesis
The ubiquitin pathway has been implicated in the pathogenesis of a wide range of diseases and disorders, including:
Neurodegeneration
Infection and immunity
Genetic disorders
Cancer
Neurodegeneration
Ubiquitin is implicated in neurodegenerative diseases associated with proteostasis dysfunction, including Alzheimer's disease, motor neuron disease, Huntington's disease and Parkinson's disease. Transcript variants encoding different isoforms of ubiquilin-1 are found in lesions associated with Alzheimer's and Parkinson's disease. Higher levels of ubiquilin in the brain have been shown to decrease malformation of amyloid precursor protein (APP), which plays a key role in triggering Alzheimer's disease. Conversely, lower levels of ubiquilin-1 in the brain have been associated with increased malformation of APP. A frameshift mutation in ubiquitin B can result in a truncated peptide missing the C-terminal glycine. This abnormal peptide, known as UBB+1, has been shown to accumulate selectively in Alzheimer's disease and other tauopathies.
Infection and immunity
Ubiquitin and ubiquitin-like molecules extensively regulate immune signal transduction pathways at virtually all stages, including steady-state repression, activation during infection, and attenuation upon clearance. Without this regulation, immune activation against pathogens may be defective, resulting in chronic disease or death. Alternatively, the immune system may become hyperactivated and organs and tissues may be subjected to autoimmune damage.
On the other hand, viruses must block or redirect host cell processes including immunity to effectively replicate, yet many viruses relevant to disease have informationally limited genomes. Because of its very large number of roles in the cell, manipulating the ubiquitin system represents an efficient way for such viruses to block, subvert or redirect critical host cell processes to support their own replication.
The retinoic acid-inducible gene I (RIG-I) protein is a primary immune system sensor for viral and other invasive RNA in human cells. The RIG-I-like receptor (RLR) immune signaling pathway is one of the most extensively studied in terms of the role of ubiquitin in immune regulation.
Genetic disorders
Angelman syndrome is caused by a disruption of UBE3A, which encodes a ubiquitin ligase (E3) enzyme termed E6-AP.
Von Hippel–Lindau syndrome involves disruption of a ubiquitin E3 ligase termed the VHL tumor suppressor, or VHL gene.
Fanconi anemia: Eight of the thirteen identified genes whose disruption can cause this disease encode proteins that form a large ubiquitin ligase (E3) complex.
3-M syndrome is an autosomal-recessive growth retardation disorder associated with mutations of the Cullin7 E3 ubiquitin ligase.
Diagnostic use
Immunohistochemistry using antibodies to ubiquitin can identify abnormal accumulations of this protein inside cells, indicating a disease process. These protein accumulations are referred to as inclusion bodies (which is a general term for any microscopically visible collection of abnormal material in a cell). Examples include:
Neurofibrillary tangles in Alzheimer's disease
Lewy body in Parkinson's disease
Pick bodies in Pick's disease
Inclusions in motor neuron disease and Huntington's disease
Mallory bodies in alcoholic liver disease
Rosenthal fibers in astrocytes
Link to cancer
Post-translational modification of proteins is a generally used mechanism in eukaryotic cell signaling. Ubiquitylation, the conjugation of ubiquitin to proteins, is a crucial process for cell cycle progression and for cell proliferation and development. Although ubiquitylation usually serves as a signal for protein degradation through the 26S proteasome, it also serves other fundamental cellular processes, such as endocytosis, enzymatic activation and DNA repair. Moreover, since ubiquitylation tightly regulates the cellular level of cyclins, its misregulation is expected to have severe impacts. The first evidence of the importance of the ubiquitin/proteasome pathway in oncogenic processes was the high antitumor activity of proteasome inhibitors. Various studies have shown that defects or alterations in ubiquitylation processes are commonly associated with or present in human carcinomas. Malignancies can develop through loss-of-function mutations directly in a tumor suppressor gene, increased activity of ubiquitylation, and/or indirect attenuation of ubiquitylation due to mutations in related proteins.
Direct loss of function mutation of E3 ubiquitin ligase
Renal cell carcinoma
The VHL (Von Hippel–Lindau) gene encodes a component of an E3 ubiquitin ligase. VHL complex targets a member of the hypoxia-inducible transcription factor family (HIF) for degradation by interacting with the oxygen-dependent destruction domain under normoxic conditions. HIF activates downstream targets such as the vascular endothelial growth factor (VEGF), promoting angiogenesis. Mutations in VHL prevent degradation of HIF and thus lead to the formation of hypervascular lesions and renal tumors.
Breast cancer
The BRCA1 gene is another tumor suppressor gene in humans, encoding the BRCA1 protein, which is involved in the response to DNA damage. The protein contains a RING motif with E3 ubiquitin ligase activity. BRCA1 can form a dimer with other molecules, such as BARD1 and BAP1, for its ubiquitylation activity. Mutations that affect the ligase function are often found and are associated with various cancers.
Cyclin E
As processes in cell cycle progression are the most fundamental processes for cellular growth and differentiation, and are the most commonly altered in human carcinomas, cell cycle-regulatory proteins are expected to be under tight regulation. The level of cyclins, as the name suggests, is high only at certain time points during the cell cycle. This is achieved by continuous control of cyclin or CDK levels through ubiquitylation and degradation. When cyclin E partners with CDK2 and is phosphorylated, the SCF-associated F-box protein Fbw7 recognizes the complex and targets it for degradation. Mutations in Fbw7 have been found in more than 30% of human tumors, characterizing it as a tumor suppressor protein.
Increased ubiquitination activity
Cervical cancer
Oncogenic types of the human papillomavirus (HPV) are known to hijack the cellular ubiquitin-proteasome pathway for viral infection and replication. The E6 proteins of HPV bind to the N-terminus of the cellular E6-AP E3 ubiquitin ligase, redirecting the complex to bind p53, a well-known tumor suppressor whose inactivation is found in many types of cancer. Thus, p53 undergoes ubiquitylation and proteasome-mediated degradation. Meanwhile, E7, another early-expressed HPV gene, binds Rb, also a tumor suppressor, mediating its degradation. The loss of p53 and Rb allows limitless cell proliferation.
p53 regulation
Gene amplification often occurs in tumors, including amplification of MDM2, a gene encoding a RING E3 ubiquitin ligase responsible for downregulating p53 activity. MDM2 targets p53 for ubiquitylation and proteasomal degradation, keeping its level appropriate for normal cell conditions. Overexpression of MDM2 causes loss of p53 activity, allowing cells to gain limitless replicative potential.
p27
Another gene that is a target of gene amplification is SKP2. SKP2 is an F-box protein with a role in substrate recognition for ubiquitylation and degradation. SKP2 targets p27Kip-1, an inhibitor of cyclin-dependent kinases (CDKs). CDK2 and CDK4 partner with cyclins E and D, respectively, forming a family of cell cycle regulators which control cell cycle progression through the G1 phase. Low levels of p27Kip-1 protein are often found in various cancers, due to overactivation of ubiquitin-mediated proteolysis through overexpression of SKP2.
Efp
Efp, or estrogen-inducible RING-finger protein, is an E3 ubiquitin ligase whose overexpression has been shown to be a major cause of estrogen-independent breast cancer. Efp's substrate is the 14-3-3 protein, which negatively regulates the cell cycle.
Evasion of ubiquitination
Colorectal cancer
The gene associated with colorectal cancer is adenomatous polyposis coli (APC), a classic tumor suppressor gene. The APC gene product targets beta-catenin for degradation via ubiquitylation at its N-terminus, thus regulating its cellular level. Most colorectal cancer cases carry mutations in the APC gene. However, in cases where the APC gene is not mutated, mutations are instead found in the N-terminus of beta-catenin, which render it resistant to ubiquitylation and thus increase its activity.
Glioblastoma
Glioblastoma is the most aggressive cancer originating in the brain. Mutations found in glioblastoma patients involve deletion of part of the extracellular domain of the epidermal growth factor receptor (EGFR). This deletion prevents the CBL E3 ligase from binding to the receptor for its recycling and degradation via the ubiquitin-lysosomal pathway. Thus, EGFR is constitutively active in the cell membrane and activates downstream effectors involved in cell proliferation and migration.
Phosphorylation-dependent ubiquitylation
The interplay between ubiquitylation and phosphorylation has been an ongoing research interest since phosphorylation often serves as a marker where ubiquitylation leads to degradation. Moreover, ubiquitylation can also act to turn on/off the kinase activity of a protein. The critical role of phosphorylation is largely underscored in the activation and removal of autoinhibition in the Cbl protein. Cbl is an E3 ubiquitin ligase with a RING finger domain that interacts with its tyrosine kinase binding (TKB) domain, preventing interaction of the RING domain with an E2 ubiquitin-conjugating enzyme. This intramolecular interaction is an autoinhibition regulation that prevents its role as a negative regulator of various growth factors and tyrosine kinase signaling and T-cell activation. Phosphorylation of Y363 relieves the autoinhibition and enhances binding to E2. Mutations that render the Cbl protein dysfunctional due to the loss of its ligase/tumor suppressor function and maintenance of its positive signaling/oncogenic function have been shown to cause the development of cancer.
As a drug target
Screening for ubiquitin ligase substrates
Deregulation of E3-substrate interactions is a key cause of many human disorders; therefore, identifying E3 ligase substrates is crucial. In 2008, 'Global Protein Stability (GPS) Profiling' was developed to discover E3 ubiquitin ligase substrates. This high-throughput system used reporter proteins fused to thousands of potential substrates independently. By inhibiting ligase activity (through a dominant-negative Cul1, which prevents ubiquitylation), substrates of the ligase accumulate, which is detected as increased reporter activity. This approach added a large number of new substrates to the list of E3 ligase substrates.
Possible therapeutic applications
Blocking specific substrate recognition by the E3 ligases, or blocking degradation further downstream at the proteasome, as with bortezomib.
Challenge
Finding a specific molecule that selectively inhibits the activity of a given E3 ligase, and/or the disease-relevant protein–protein interactions, remains an important and expanding research area. Moreover, since ubiquitylation is a multi-step process with various players and intermediate forms, the complex interactions between components must be taken into account when designing small-molecule inhibitors.
Similar proteins
Ubiquitin is the best-understood post-translational modifier; however, several families of ubiquitin-like proteins (UBLs) can modify cellular targets in a parallel but distinct route. Known UBLs include: small ubiquitin-like modifier (SUMO), ubiquitin cross-reactive protein (UCRP, also known as interferon-stimulated gene-15, ISG15), ubiquitin-related modifier-1 (URM1), neuronal-precursor-cell-expressed developmentally downregulated protein-8 (NEDD8, also called Rub1 in S. cerevisiae), human leukocyte antigen F-associated (FAT10), autophagy-8 (ATG8) and -12 (ATG12), Few ubiquitin-like protein (FUB1), MUB (membrane-anchored UBL), ubiquitin fold-modifier-1 (UFM1) and ubiquitin-like protein-5 (UBL5, known as homologous to ubiquitin-1 [Hub1] in S. pombe). Although these proteins share only modest primary sequence identity with ubiquitin, they are closely related three-dimensionally. For example, SUMO shares only 18% sequence identity with ubiquitin, but contains the same structural fold, called the "ubiquitin fold" (FAT10 and UCRP contain two). This compact globular beta-grasp fold is found in ubiquitin, UBLs, and proteins that comprise a ubiquitin-like domain, e.g. the S. cerevisiae spindle pole body duplication protein Dsk2 and the NER protein Rad23, which both contain N-terminal ubiquitin domains.
These related molecules have novel functions and influence diverse biological processes. There is also cross-regulation between the various conjugation pathways, since some proteins can become modified by more than one UBL, and sometimes even at the same lysine residue. For instance, SUMO modification often acts antagonistically to ubiquitylation and serves to stabilize protein substrates. Proteins conjugated to UBLs are typically not targeted for degradation by the proteasome but rather function in diverse regulatory activities. Attachment of UBLs might alter substrate conformation, affect the affinity for ligands or other interacting molecules, alter substrate localization, and influence protein stability.
UBLs are structurally similar to ubiquitin and are processed, activated, conjugated, and released from conjugates by enzymatic steps that are similar to the corresponding mechanisms for ubiquitin. UBLs are also translated with C-terminal extensions that are processed to expose the invariant C-terminal LRGG. These modifiers have their own specific E1 (activating), E2 (conjugating) and E3 (ligating) enzymes that conjugate the UBLs to intracellular targets. These conjugates can be reversed by UBL-specific isopeptidases that have similar mechanisms to that of the deubiquitinating enzymes.
In some species, a mechanism involving ubiquitin is responsible for the recognition and destruction of sperm mitochondria after fertilization occurs.
Prokaryotic origins
Ubiquitin is believed to have descended from bacterial proteins similar to ThiS or MoaD. These prokaryotic proteins, despite having little sequence identity with ubiquitin (ThiS has 14% identity), share the same protein fold. These proteins also share sulfur chemistry with ubiquitin. MoaD, which is involved in molybdopterin biosynthesis, interacts with MoeB, which acts like an E1 ubiquitin-activating enzyme for MoaD, strengthening the link between these prokaryotic proteins and the ubiquitin system. A similar system exists for ThiS, with its E1-like enzyme ThiF. It is also believed that the Saccharomyces cerevisiae protein Urm1, a ubiquitin-related modifier, is a "molecular fossil" that connects the evolutionary relation with the prokaryotic ubiquitin-like molecules and ubiquitin.
Archaea have a functionally closer homolog of the ubiquitin modification system, in which "sampylation" with SAMPs (small archaeal modifier proteins) is performed. The sampylation system only uses E1 to guide proteins to the proteasome. Proteoarchaeota, which are related to the ancestor of eukaryotes, possess all of the E1, E2, and E3 enzymes plus a regulated Rpn11 system. Unlike SAMPs, which are more similar to ThiS or MoaD, Proteoarchaeota ubiquitins are most similar to their eukaryotic homologs.
Prokaryotic ubiquitin-like protein (Pup) and ubiquitin bacterial (UBact)
Prokaryotic ubiquitin-like protein (Pup) is a functional analog of ubiquitin that has been found in the gram-positive bacterial phylum Actinomycetota. It serves the same function (targeting proteins for degradation), although the enzymology of ubiquitylation and pupylation is different and the two families share no homology. In contrast to the three-step reaction of ubiquitylation, pupylation requires two steps, so only two enzymes are involved in pupylation.
In 2017, homologs of Pup were reported in five phyla of gram-negative bacteria, in seven candidate bacterial phyla, and in one archaeon. The sequences of these homologs are very different from the sequences of Pup in gram-positive bacteria and were termed ubiquitin bacterial (UBact), although the distinction has not yet been shown to be phylogenetically supported by a separate evolutionary origin, and it lacks experimental evidence.
The finding of the Pup/UBact-proteasome system in both gram-positive and gram-negative bacteria suggests either that the Pup/UBact-proteasome system evolved in bacteria prior to the split into gram-positive and gram-negative clades over 3,000 million years ago, or that these systems were acquired by different bacterial lineages through horizontal gene transfer(s) from a third, as yet unknown, organism. In support of the second possibility, two UBact loci were found in the genome of an uncultured anaerobic methanotrophic archaeon (ANME-1; locus CBH38808.1 and locus CBH39258.1).
Human proteins containing ubiquitin domain
These include ubiquitin-like proteins.
ANUBL1; BAG1; BAT3/BAG6; C1orf131; DDI1; DDI2; FAU; HERPUD1; HERPUD2; HOPS; IKBKB; ISG15; LOC391257; MIDN; NEDD8; OASL; PARK2; RAD23A; RAD23B; RPS27A; SACS; SF3A1; SUMO1; SUMO2; SUMO3; SUMO4; TMUB1; TMUB2; UBA52; UBB; UBC; UBD; UBFD1; UBL4A; UBL4B; UBL7; UBLCP1; UBQLN1; UBQLN2; UBQLN3; UBQLN4; UBQLNL; UBTD1; UBTD2; UHRF1; UHRF2;
Related proteins
Ubiquitin-associated protein domain
Prediction of ubiquitination
Currently available prediction programs are:
UbiPred is a SVM-based prediction server using 31 physicochemical properties for predicting ubiquitylation sites.
UbPred is a random forest-based predictor of potential ubiquitination sites in proteins. It was trained on a combined set of 266 non-redundant, experimentally verified ubiquitination sites obtained from experiments and from two large-scale proteomics studies.
CKSAAP_UbSite is an SVM-based predictor that employs the composition of k-spaced amino acid pairs surrounding a query site (i.e. any lysine in a query sequence) as input; it uses the same dataset as UbPred. A sketch of this pair encoding follows the list.
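To make the k-spaced amino acid pair (CKSAAP) idea concrete, here is a minimal Python sketch of the feature encoding. The window size, the value of k, and the example sequence are illustrative assumptions, not the published tool's actual parameters.

    from itertools import product

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def cksaap_features(window: str, k: int = 2) -> dict:
        # Count amino-acid pairs separated by exactly k residues in a
        # sequence window centred on a candidate lysine. Real predictors
        # concatenate counts for several k values and feed them to an SVM.
        counts = {a + b: 0 for a, b in product(AMINO_ACIDS, repeat=2)}
        for i in range(len(window) - k - 1):
            pair = window[i] + window[i + k + 1]
            if pair in counts:
                counts[pair] += 1
        return counts

    # Hypothetical 9-residue window around a candidate lysine:
    feats = cksaap_features("MQIFVKTLT", k=2)
    print(feats["IK"])  # 1: an I and a K separated by two residues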
UTF-8
UTF-8 is a character encoding standard used for electronic communication. Defined by the Unicode Standard, the name is derived from Unicode Transformation Format 8-bit. Almost every webpage is stored in UTF-8.
UTF-8 is capable of encoding all 1,112,064 valid Unicode scalar values using a variable-width encoding of one to four one-byte (8-bit) code units.
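The figure of 1,112,064 follows from simple arithmetic: all code points from U+0000 through U+10FFFF, minus the 2,048 surrogate code points reserved for UTF-16. A quick check (Python is used for illustration here and in the sketches below):

    total_code_points = 0x110000          # U+0000 through U+10FFFF = 1,114,112
    surrogates = 0xE000 - 0xD800          # 2,048 code points U+D800..U+DFFF
    print(total_code_points - surrogates) # 1112064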
Code points with lower numerical values, which tend to occur more frequently, are encoded using fewer bytes. It was designed for backward compatibility with ASCII: the first 128 characters of Unicode, which correspond one-to-one with ASCII, are encoded using a single byte with the same binary value as ASCII, so that a UTF-8-encoded file using only those characters is identical to an ASCII file. Most software designed for any extended ASCII can read and write UTF-8 (including on Microsoft Windows) and this results in fewer internationalization issues than any alternative text encoding.
UTF-8 is dominant for all countries and languages on the internet, with 99% global average use. It is used in most standards, often as the only allowed encoding, and is supported by all modern operating systems and programming languages.
History
The International Organization for Standardization (ISO) set out to compose a universal multi-byte character set in 1989. The draft ISO 10646 standard contained a non-required annex called UTF-1 that provided a byte stream encoding of its 32-bit code points. This encoding was not satisfactory on performance grounds, among other problems, and the biggest problem was probably that it did not have a clear separation between ASCII and non-ASCII: new UTF-1 tools would be backward compatible with ASCII-encoded text, but UTF-1-encoded text could confuse existing code expecting ASCII (or extended ASCII), because it could contain continuation bytes in the range 0x21–0x7E that meant something else in ASCII, e.g., 0x2F for /, the Unix path directory separator.
In July 1992, the X/Open committee XoJIG was looking for a better encoding. Dave Prosser of Unix System Laboratories submitted a proposal for one that had faster implementation characteristics and introduced the improvement that 7-bit ASCII characters would only represent themselves; multi-byte sequences would only include bytes with the high bit set. The name File System Safe UCS Transformation Format (FSS-UTF) and most of the text of this proposal were later preserved in the final specification. In August 1992, this proposal was circulated by an IBM X/Open representative to interested parties. A modification by Ken Thompson of the Plan 9 operating system group at Bell Labs made it self-synchronizing, letting a reader start anywhere and immediately detect character boundaries, at the cost of being somewhat less bit-efficient than the previous proposal. It also abandoned the use of biases that prevented overlong encodings. Thompson's design was outlined on September 2, 1992, on a placemat in a New Jersey diner with Rob Pike. In the following days, Pike and Thompson implemented it and updated Plan 9 to use it throughout, and then communicated their success back to X/Open, which accepted it as the specification for FSS-UTF.
UTF-8 was first officially presented at the USENIX conference in San Diego, from January 25 to 29, 1993. The Internet Engineering Task Force adopted UTF-8 in its Policy on Character Sets and Languages in RFC 2277 (BCP 18) for future internet standards work in January 1998, replacing Single Byte Character Sets such as Latin-1 in older RFCs.
In November 2003, UTF-8 was restricted by RFC 3629 to match the constraints of the UTF-16 character encoding: explicitly prohibiting code points corresponding to the high and low surrogate characters removed more than 3% of the three-byte sequences, and ending at U+10FFFF removed more than 48% of the four-byte sequences and all five- and six-byte sequences.
Description
UTF-8 encodes code points in one to four bytes, depending on the value of the code point. In the following table, the x characters are replaced by the bits of the code point:

First code point  Last code point  Byte 1    Byte 2    Byte 3    Byte 4
U+0000            U+007F           0xxxxxxx
U+0080            U+07FF           110xxxxx  10xxxxxx
U+0800            U+FFFF           1110xxxx  10xxxxxx  10xxxxxx
U+10000           U+10FFFF         11110xxx  10xxxxxx  10xxxxxx  10xxxxxx
The first 128 code points (ASCII) need 1 byte. The next 1,920 code points need two bytes to encode, which covers the remainder of almost all Latin-script alphabets, and also IPA extensions, Greek, Cyrillic, Coptic, Armenian, Hebrew, Arabic, Syriac, Thaana and N'Ko alphabets, as well as Combining Diacritical Marks. Three bytes are needed for the remaining 61,440 codepoints of the Basic Multilingual Plane (BMP), including most Chinese, Japanese and Korean characters. Four bytes are needed for the 1,048,576 non-BMP code points, which include emoji, less common CJK characters, and other useful characters.
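A minimal encoder following this bit layout might look like the sketch below; it assumes a valid scalar value and omits surrogate and range checks for brevity.

    def utf8_encode(cp: int) -> bytes:
        # Pack the code point's bits into the lead/continuation byte
        # patterns from the table above.
        if cp < 0x80:                     # 1 byte:  0xxxxxxx
            return bytes([cp])
        if cp < 0x800:                    # 2 bytes: 110xxxxx 10xxxxxx
            return bytes([0xC0 | cp >> 6, 0x80 | cp & 0x3F])
        if cp < 0x10000:                  # 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
            return bytes([0xE0 | cp >> 12, 0x80 | cp >> 6 & 0x3F, 0x80 | cp & 0x3F])
        # 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
        return bytes([0xF0 | cp >> 18, 0x80 | cp >> 12 & 0x3F,
                      0x80 | cp >> 6 & 0x3F, 0x80 | cp & 0x3F])

    assert utf8_encode(0x20AC) == '€'.encode('utf-8')   # U+20AC -> E2 82 AC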
This is a prefix code, so it is unnecessary to read past the last byte of a code point to decode it. Unlike many earlier multi-byte text encodings such as Shift-JIS, it is self-synchronizing: searches for short strings or characters are possible, and the start of a code point can be found from a random position by backing up at most 3 bytes. The values chosen for the lead bytes mean that sorting a list of UTF-8 strings puts them in the same order as sorting UTF-32 strings.
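Self-synchronization can be demonstrated directly: continuation bytes always carry the bit pattern 10xxxxxx, so a scanner at an arbitrary offset can back up to the lead byte. A sketch, assuming well-formed input:

    def codepoint_start(data: bytes, i: int) -> int:
        # Back up (at most 3 steps) while the current byte is a
        # continuation byte (10xxxxxx), landing on the lead byte.
        while i > 0 and (data[i] & 0xC0) == 0x80:
            i -= 1
        return i

    s = 'héllo'.encode('utf-8')      # b'h\xc3\xa9llo'
    print(codepoint_start(s, 2))     # 1: offset 2 is inside the 2-byte 'é'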
Overlong encodings
Using a row in the above table to encode a code point less than "First code point" (thus using more bytes than necessary) is termed an overlong encoding. These are a security problem because they allow the same code point to be encoded in multiple ways. Overlong encodings (of the "/" path separator, for example) have been used to bypass security validations in high-profile products including Microsoft's IIS web server and Apache's Tomcat servlet container. Overlong encodings should therefore be considered an error and never decoded. Modified UTF-8 allows an overlong encoding of U+0000.
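For example, the two-byte sequence 0xC0 0xAF is an overlong encoding of the slash U+002F; a conforming strict decoder rejects it, as this snippet illustrates:

    overlong_slash = b'\xC0\xAF'     # overlong encoding of '/' (U+002F)
    try:
        overlong_slash.decode('utf-8')
    except UnicodeDecodeError as err:
        print(err)                   # rejected: 0xC0 never appears in valid UTF-8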
Byte map
The chart below gives the detailed meaning of each byte in a stream encoded in UTF-8.
Error handling
Not all sequences of bytes are valid UTF-8. A UTF-8 decoder should be prepared for:
Bytes that never appear in UTF-8: 0xC0, 0xC1, 0xF5–0xFF
A "continuation byte" () at the start of a character
A non-continuation byte (or the string ending) before the end of a character
An overlong encoding (0xE0 followed by less than 0xA0, or 0xF0 followed by less than 0x90)
A 4-byte sequence that decodes to a value greater than U+10FFFF (0xF4 followed by 0x90 or greater)
Many of the first UTF-8 decoders would decode these, ignoring incorrect bits. Carefully crafted invalid UTF-8 could make them either skip or create ASCII characters such as NUL, slash, or quotes, leading to security vulnerabilities. It is also common to throw an exception or truncate the string at an error, but this turns what would otherwise be harmless errors (i.e. "file not found") into a denial of service; for instance, early versions of Python 3.0 would exit immediately if the command line or environment variables contained invalid UTF-8.
states "Implementations of the decoding algorithm MUST protect against decoding invalid sequences." The Unicode Standard requires decoders to: "... treat any ill-formed code unit sequence as an error condition. This guarantees that it will neither interpret nor emit an ill-formed code unit sequence." The standard now recommends replacing each error with the replacement character "�" (U+FFFD) and continue decoding.
Some decoders consider a truncated 3-byte code followed by a space as a single error. This is not a good idea, as a search for a space character would find the one hidden in the error. Since Unicode 6 (October 2010) the standard (chapter 3) has recommended a "best practice" where the error is either one continuation byte, or ends at the first byte that is disallowed, so that a truncated two-byte prefix followed by a space is a two-byte error followed by a space. This means an error is no more than three bytes long and never contains the start of a valid character, and the set of possible errors is finite and enumerable. Technically this makes UTF-8 no longer a prefix code (the decoder has to read one byte past some errors to determine that they are errors), but searching still works if the searched-for string does not contain any errors.
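Python's built-in decoder follows this best practice: a truncated multi-byte prefix is consumed as a single error and replaced with one U+FFFD, so the byte after it is decoded normally.

    bad = b'ab\xE2\x82 cd'           # truncated 3-byte sequence, then a space
    print(bad.decode('utf-8', errors='replace'))
    # 'ab\ufffd cd' -- E2 82 is one error; the following space survives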
Making each byte be an error, in which case the truncated sequence above becomes two errors followed by a space, also still allows searching for a valid string. This means there are only 128 different errors, which makes it practical to store the errors in the output string, or replace them with characters from a legacy encoding.
Only a small subset of possible byte strings are error-free UTF-8: several bytes cannot appear at all; a byte with the high bit set cannot be alone; and in a truly random string a byte with the high bit set has only a small chance of starting a valid UTF-8 character. This has the (possibly unintended) consequence of making it easy to detect whether a legacy text encoding has accidentally been used instead of UTF-8, making conversion of a system to UTF-8 easier and avoiding the need for a byte-order mark or any other metadata.
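This property makes a simple strict-decode test an effective heuristic for telling UTF-8 apart from legacy encodings; a minimal sketch:

    def looks_like_utf8(data: bytes) -> bool:
        # Non-ASCII text that decodes strictly as UTF-8 is very unlikely
        # to be a legacy encoding, since random high-bit bytes rarely
        # form valid UTF-8 sequences.
        try:
            data.decode('utf-8')
            return True
        except UnicodeDecodeError:
            return False

    print(looks_like_utf8('café'.encode('utf-8')))     # True
    print(looks_like_utf8('café'.encode('latin-1')))   # False: lone 0xE9 byte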
Surrogates
Since RFC 3629 (November 2003), the high and low surrogates used by UTF-16 (U+D800 through U+DFFF) are not legal Unicode values, and their UTF-8 encodings must be treated as an invalid byte sequence. These encodings all start with 0xED followed by 0xA0 or higher. This rule is often ignored, as surrogates are allowed in Windows filenames and this means there must be a way to store them in a string. UTF-8 that allows these surrogate halves has been (informally) called WTF-8, while another variation that also encodes all non-BMP characters as two surrogates (6 bytes instead of 4) is called CESU-8.
Byte-order mark
If the Unicode byte-order mark (BOM) is at the start of a UTF-8 file, the first three bytes will be 0xEF, 0xBB, 0xBF.
The Unicode Standard neither requires nor recommends the use of the BOM for UTF-8, but warns that it may be encountered at the start of a file trans-coded from another encoding. While ASCII text encoded using UTF-8 is backward compatible with ASCII, this is not true when Unicode Standard recommendations are ignored and a BOM is added. A BOM can confuse software that isn't prepared for it but can otherwise accept UTF-8, e.g. programming languages that permit non-ASCII bytes in string literals but not at the start of the file. Nevertheless, there was and still is software that always inserts a BOM when writing UTF-8, and refuses to correctly interpret UTF-8 unless the first character is a BOM (or the file only contains ASCII).
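Accepting input with or without the optional BOM is a one-line check; a minimal sketch:

    BOM = b'\xEF\xBB\xBF'            # the UTF-8 encoding of U+FEFF

    def strip_bom(data: bytes) -> bytes:
        # Tolerate a leading byte-order mark without requiring one.
        return data[len(BOM):] if data.startswith(BOM) else data

    print(strip_bom(b'\xEF\xBB\xBFhello'))   # b'hello'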
Comparison to UTF-16
For a long time there was considerable argument as to whether it was better to process text in UTF-16 or in UTF-8.
The primary advantage of UTF-16 is that the Windows API required it to be used to get access to all Unicode characters (only recently has this been fixed). This caused several libraries such as Qt to also use UTF-16 strings which propagates this requirement to non-Windows platforms.
In the early days of Unicode there were no characters greater than U+FFFF and combining characters were rarely used, so the 16-bit encoding was fixed-size. This made processing of text more efficient, though the gains are nowhere near as great as novice programmers may imagine. All such advantages were lost as soon as UTF-16 became variable-width as well.
The code points U+0800–U+FFFF take 3 bytes in UTF-8 but only 2 in UTF-16. This led to the idea that text in Chinese and other languages would take more space in UTF-8. However, text is only larger if there are more of these code points than 1-byte ASCII code points, and this rarely happens in real-world documents, due to spaces, newlines, digits, punctuation, English words, and (depending on document format) markup.
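The trade-off is easy to measure. In the sketch below, 'utf-16-le' is used so no BOM is counted; the pure-CJK string is smaller in UTF-16, but adding even a little ASCII markup tips the balance back to UTF-8.

    cjk = '統一碼'                     # three BMP code points
    mixed = 'encoding: 統一碼\n'       # the same text with ASCII around it
    print(len(cjk.encode('utf-8')), len(cjk.encode('utf-16-le')))       # 9 6
    print(len(mixed.encode('utf-8')), len(mixed.encode('utf-16-le')))   # 20 28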
UTF-8 has the advantages of being trivial to retrofit to any system that could handle an extended ASCII, not having byte-order problems, and taking about 1/2 the space for any language using mostly Latin letters.
Implementations and adoption
UTF-8 has been the most common encoding for the World Wide Web since 2008, and is used by 98.5% of surveyed web sites. Although many pages only use ASCII characters to display content, very few websites now declare their encoding to be only ASCII instead of UTF-8. Virtually all countries and languages have 95% or more use of UTF-8 encodings on the web.
Many standards only support UTF-8; e.g., JSON exchange requires it (without a byte-order mark (BOM)). UTF-8 is also the recommendation from the WHATWG for HTML and DOM specifications, which state that "UTF-8 encoding is the most appropriate encoding for interchange of Unicode", and the Internet Mail Consortium recommends that all e-mail programs be able to display and create mail using UTF-8. The World Wide Web Consortium recommends UTF-8 as the default encoding in XML and HTML (and not just using UTF-8, but also declaring it in metadata), "even when all characters are in the ASCII range ... Using non-UTF-8 encodings can have unexpected results".
Much software has the ability to read and write UTF-8. It may, though, require the user to change options from the normal settings, or may require a BOM (byte-order mark) as the first character to read the file. Examples of software supporting UTF-8 include Microsoft Word, Microsoft Excel (2016 and later), Google Drive, LibreOffice and most databases.
Software that "defaults" to UTF-8 (meaning it writes it without the user changing settings, and it reads it without a BOM) has become more common since 2010. Windows Notepad, in all currently supported versions of Windows, defaults to writing UTF-8 without a BOM (a change from Notepad), bringing it into line with most other text editors. Some system files on Windows 11 require UTF-8 with no requirement for a BOM, and almost all files on macOS and Linux are required to be UTF-8 without a BOM. Programming languages that default to UTF-8 for I/O include Ruby 3.0, R 4.2.2, Raku and Java 18. Although the current version of Python requires an option to open() to read/write UTF-8, plans exist to make UTF-8 I/O the default in Python 3.15. C++23 adopts UTF-8 as the only portable source code file format.
Backward compatibility is a serious impediment to changing code and APIs using UTF-16 to use UTF-8, but this is happening. In 2019, Microsoft added the capability for an application to set UTF-8 as the "code page" for the Windows API, removing the need to use UTF-16; and more recently has recommended that programmers use UTF-8, even stating "UTF-16 [...] is a unique burden that Windows places on code that targets multiple platforms".
The default string primitive in Go, Julia, Rust, Swift (since version 5), and PyPy uses UTF-8 internally in all cases. Python (since version 3.3) uses UTF-8 internally for Python C API extensions and sometimes for strings, and a future version of Python is planned to store strings as UTF-8 by default. Modern versions of Microsoft Visual Studio use UTF-8 internally. Microsoft's SQL Server 2019 added support for UTF-8, and using it results in a 35% speed increase and "nearly 50% reduction in storage requirements."
Java internally uses Modified UTF-8 (MUTF-8), in which the null character U+0000 uses the two-byte overlong encoding 0xC0, 0x80, instead of just 0x00. Modified UTF-8 strings never contain any actual null bytes but can contain all Unicode code points including U+0000, which allows such strings (with a null byte appended) to be processed by traditional null-terminated string functions. Java reads and writes normal UTF-8 to files and streams, but it uses Modified UTF-8 for object serialization, for the Java Native Interface, and for embedding constant strings in class files. The dex format defined by Dalvik also uses the same modified UTF-8 to represent string values. Tcl also uses the same modified UTF-8 as Java for internal representation of Unicode data, but uses strict CESU-8 for external data. All known Modified UTF-8 implementations also treat the surrogate pairs as in CESU-8.
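The MUTF-8 treatment of the null character can be sketched in a few lines; this illustration handles only U+0000 specially and omits the CESU-8-style surrogate-pair encoding of supplementary characters that full Modified UTF-8 also uses.

    def mutf8_encode(s: str) -> bytes:
        # Encode U+0000 as the overlong pair C0 80 so the output
        # contains no actual zero byte (sketch; not a full MUTF-8 codec).
        out = bytearray()
        for ch in s:
            out += b'\xC0\x80' if ch == '\x00' else ch.encode('utf-8')
        return bytes(out)

    data = mutf8_encode('a\x00b')
    print(data)        # b'a\xc0\x80b'
    print(0 in data)   # False: safe for null-terminated string functions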
The Raku programming language (formerly Perl 6) uses utf-8 encoding by default for I/O (Perl 5 also supports it), though that choice in Raku also implies "normalization into Unicode NFC (normalization form canonical). In some cases you may want to ensure no normalization is done; for this you can use utf8-c8". That UTF-8 Clean-8 variant, implemented by Raku, is an encoder/decoder that preserves bytes as-is (even illegal UTF-8 sequences) and allows for Normal Form Grapheme synthetics.
Version 3 of the Python programming language treats each byte of an invalid UTF-8 bytestream as an error (see also changes with new UTF-8 mode in Python 3.7); this gives 128 different possible errors. Extensions have been created to allow any byte sequence that is assumed to be UTF-8 to be losslessly transformed to UTF-16 or UTF-32, by translating the 128 possible error bytes to reserved code points, and transforming those code points back to error bytes to output UTF-8. The most common approach is to translate the codes to U+DC80...U+DCFF which are low (trailing) surrogate values and thus "invalid" UTF-16, as used by Python's PEP 383 (or "surrogateescape") approach. Another encoding called MirBSD OPTU-8/16 converts them to U+EF80...U+EFFF in a Private Use Area. In either approach, the byte value is encoded in the low eight bits of the output code point. These encodings are needed if invalid UTF-8 is to survive translation to and then back from the UTF-16 used internally by Python, and as Unix filenames can contain invalid UTF-8 it is necessary for this to work.
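Python exposes the PEP 383 approach as the 'surrogateescape' error handler, which smuggles each invalid byte into a low surrogate in U+DC80..U+DCFF and restores it on encoding; a short demonstration:

    raw = b'caf\xe9'   # invalid UTF-8: a lone Latin-1 0xE9 byte
    text = raw.decode('utf-8', errors='surrogateescape')
    print(ascii(text))   # 'caf\udce9' -- the bad byte is held in a surrogate
    print(text.encode('utf-8', errors='surrogateescape') == raw)   # True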
Standards
The official name for the encoding is "UTF-8", the spelling used in all Unicode Consortium documents. The hyphen-minus is required and no spaces are allowed. Some other names used are:
Most standards are also case-insensitive and utf-8 is often used (illustrated in the example following this list).
Web standards (which include CSS, HTML, XML, and HTTP headers) also allow utf8 and many other aliases.
The official Internet Assigned Numbers Authority registry lists csUTF8 as the only alias, which is rarely used.
In some locales "UTF-8" means UTF-8 without a byte-order mark (BOM), and in this case "UTF-8-BOM" may imply there is a BOM.
In Windows, UTF-8 is codepage 65001 with the symbolic name CP_UTF8 in source code.
In MySQL, UTF-8 is called utf8mb4, while utf8 and utf8mb3 refer to the obsolete CESU-8 variant.
In Oracle Database (since version 9.0), AL32UTF8 means UTF-8, while UTF8 means CESU-8.
In HP PCL, the Symbol-ID for UTF-8 is 18N.
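As one concrete illustration of this aliasing, Python's codec registry normalizes case and several alternate spellings to the same canonical name:

    import codecs

    # Case and punctuation variants, plus the short aliases "utf8" and
    # "u8", all resolve to the canonical codec name "utf-8".
    for name in ("UTF-8", "utf-8", "utf8", "UTF_8", "u8"):
        assert codecs.lookup(name).name == "utf-8"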
There are several current definitions of UTF-8 in various standards documents:
RFC 3629 / STD 63 (2003), which establishes UTF-8 as a standard internet protocol element
RFC 5198 defines UTF-8 NFC for Network Interchange (2008)
ISO/IEC 10646:2020 §9.1 (2020)
The Unicode Standard, Version 16.0.0 (2024)
They supersede the definitions given in the following obsolete works:
The Unicode Standard, Version 2.0, Appendix A (1996)
ISO/IEC 10646-1:1993 Amendment 2 / Annex R (1996)
RFC 2044 (1996)
RFC 2279 (1998)
The Unicode Standard, Version 3.0, §2.3 (2000) plus Corrigendum #1: UTF-8 Shortest Form (2000)
Unicode Standard Annex #27: Unicode 3.1 (2001)
The Unicode Standard, Version 5.0 (2006)
The Unicode Standard, Version 6.0 (2010)
They are all the same in their general mechanics, with the main differences being on issues such as allowed range of code point values and safe handling of invalid input.
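Those differences are visible in any strict implementation; for example, Python's codec, as one conforming implementation, rejects both surrogate code points and lead bytes that would encode values beyond U+10FFFF:

    # A strict encoder refuses lone surrogates (U+D800..U+DFFF) ...
    try:
        "\ud800".encode("utf-8")
    except UnicodeEncodeError:
        print("lone surrogate rejected")
    # ... and a strict decoder refuses sequences above U+10FFFF,
    # e.g. anything starting with the lead byte 0xF5.
    try:
        b"\xf5\x80\x80\x80".decode("utf-8")
    except UnicodeDecodeError:
        print("out-of-range sequence rejected")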
| Technology | Software development: General | null |
32197 | https://en.wikipedia.org/wiki/USS%20Monitor | USS Monitor | USS Monitor was an ironclad warship built for the United States Navy during the American Civil War and completed in early 1862, the first such ship commissioned by the Navy. Monitor played a central role in the Battle of Hampton Roads on 9 March under the command of Lieutenant John L. Worden, where she fought the casemate ironclad CSS Virginia (built on the hull of the scuttled steam frigate USS Merrimack) to a stalemate. The design of the ship was distinguished by its revolving turret, which was designed by American inventor Theodore Timby; it was quickly duplicated and established the monitor class and type of armored warship built for the American Navy over the next several decades.
The remainder of the ship was designed by Swedish-born engineer and inventor John Ericsson, and built in only 101 days in Brooklyn, New York, on the East River beginning in late 1861. Monitor presented a new concept in ship design and employed a variety of new inventions and innovations in shipbuilding that caught the attention of the world. The impetus to build Monitor was prompted by the news that the Confederates had raised the scuttled Merrimack and were building an iron-plated armored vessel named the Virginia on her hull in the old Federal naval shipyard at Gosport, near Norfolk, that could effectively engage the Union ships blockading Hampton Roads harbor and the James River leading northwest to Richmond (capital of the Confederacy), and could then advance unchallenged on Washington, D.C., up the Potomac River, and on other seacoast cities. Before Monitor could reach Hampton Roads, the Confederate ironclad had already destroyed the sail frigates Cumberland and Congress and had run the steam frigate Minnesota aground. That night, Monitor arrived and, just as Virginia set to finish off Minnesota and St. Lawrence on the second day, the new Union ironclad confronted the Confederate ship, preventing her from wreaking further destruction on the wooden Union ships. A four-hour battle ensued, each ship pounding the other with close-range cannon fire, although neither ship could destroy or seriously damage the other. This was the first battle fought between armored warships and marked a turning point in naval warfare.
The Confederates were forced to scuttle and destroy Virginia as they withdrew in early May 1862 from Norfolk and its naval shipyard, while Monitor sailed up the James River to support the Union Army during the Peninsula Campaign under General-in-Chief George B. McClellan. The ship participated in the Battle of Drewry's Bluff later that month, and remained in the area giving support to General McClellan's forces on land until she was ordered to join the Union Navy blockaders off North Carolina in December. On her way there, she foundered while under tow during a storm off Cape Hatteras on the last day of the year. Monitor's wreck was discovered in 1973 and has been partially salvaged. Her guns, gun turret, engine, and other relics are on display at the Mariners' Museum in Newport News, Virginia, a few miles from the site of her most important military action.
Conception
While the concept of ships protected by armor existed before the advent of the ironclad Monitor, the need for iron plating on ships arose only after the explosive shell-firing Paixhans gun was introduced to naval warfare in the 1820s. The use of heavy iron plating on the sides of warships was not practical until steam propulsion matured enough to carry its great weight. Developments in gun technology had progressed by the 1840s so that no practical thickness of wood could withstand the power of a shell. In response, the United States began construction in 1854 of a steam-powered ironclad warship, Stevens Battery, but work was delayed and the designer, Robert Stevens, died in 1856, stalling further work. Since there was no pressing need for such a ship at the time, there was little demand to continue work on the unfinished vessel. It was France that introduced the first operational armored ships as well as the first shell guns and rifled cannons. Experience during the Crimean War of 1854–1855 showed that armored ships could withstand repeated hits without significant damage when French ironclad floating batteries defeated Russian coastal fortifications during the Battle of Kinburn. Ericsson claimed to have sent the French Emperor Napoléon III a proposal for a monitor-type design, with a gun turret, in September 1854, but no record of any such submission could be found in the archives of the French Ministry of the Navy (Ministre de la Marine) when they were searched by naval historian James Phinney Baxter III. The French followed those ships with the first ocean-going ironclad, the armored frigate Gloire, in 1859, and the British responded with HMS Warrior.
The Union Navy's attitude towards ironclads changed quickly when it was learned that the Confederates were converting the captured Merrimack to an ironclad at the naval shipyard in Norfolk, Virginia. Subsequently, the urgency of Monitor's completion and deployment to Hampton Roads was driven by fears of what the Confederate ironclad, now renamed Virginia, would be capable of doing, not only to Union ships but to cities along the coast and riverfronts. Northern newspapers published daily accounts of the Confederates' progress in converting the Merrimack to an ironclad; this prompted the Union Navy to complete and deploy Monitor as soon as possible.
Word of Merrimack's reconstruction and conversion was confirmed in the North in late February 1862 when Mary Louvestre of Norfolk, a freed slave who worked as a housekeeper for one of the Confederate engineers working on Merrimack, made her way through Confederate lines with news that the Confederates were building an ironclad warship. Concealed in her dress was a message from a Union sympathizer who worked in the Navy Yard warning that the former Merrimack, renamed Virginia by the Confederates, was nearing completion. Upon her arrival in Washington, Louvestre managed to meet with Secretary of the Navy Gideon Welles and informed him that the Confederates were nearing the completion of their ironclad, which surprised Welles. Convinced by the papers Louvestre was carrying, he had production of Monitor sped up. Welles later recorded in his memoirs that "Mrs. Louvestre encountered no small risk in bringing this information ...".
Approval
After the United States received word of the construction of Virginia, Congress appropriated $1.5 million on 3 August 1861 to build one or more armored steamships. It also ordered the creation of a board to inquire into the various designs proposed for armored ships. The Union Navy advertised for proposals for "iron-clad steam vessels of war" on 7 August and Welles appointed three senior officers as the Ironclad Board the following day. Their task was to "examine plans for the completion of iron-clad vessels" and consider their costs.
Ericsson originally made no submission to the board, but became involved when Cornelius Bushnell, the sponsor of the proposal that became the armored sloop Galena, needed to have his design reviewed by a naval constructor. The board required a guarantee from Bushnell that his ship would float despite the weight of its armor, and Cornelius H. DeLamater of New York City recommended that Bushnell consult with his friend Ericsson. The two first met on 9 September and again on the following day, after Ericsson had time to evaluate Galena's design. During this second meeting, Ericsson showed Bushnell a model of his own design, the future Monitor, derived from his 1854 design. Bushnell got Ericsson's permission to show the model to Welles, who told Bushnell to show it to the board. Upon review of Ericsson's unusual design, the board was skeptical, concerned that such a vessel would not float, especially in rough seas, and rejected the proposal of a completely iron-laden ship. President Lincoln, who had also examined the design, overruled them. Ericsson assured the board his ship would float, exclaiming, "The sea shall ride over her and she shall live in it like a duck". On 15 September, after further deliberations, the board accepted Ericsson's proposal. The Ironclad Board evaluated 17 different designs, but recommended only three for procurement on 16 September, including Ericsson's Monitor design.
The three ironclad ships selected differed substantially in design and degree of risk. Monitor was the most innovative design by virtue of its low freeboard, shallow-draft iron hull, and total dependence on steam power. The riskiest element of its design was its rotating gun turret, something that had not previously been tested by any navy. Ericsson's guarantee of delivery in 100 days proved to be decisive in choosing his design despite the risk involved.
Design and description
Monitor was an unusual vessel in almost every respect and was sometimes sarcastically described by the press and other critics as "Ericsson's folly", "cheesebox on a raft" and the "Yankee cheesebox". The most prominent feature on the vessel was a large cylindrical gun turret mounted amidships above the low-freeboard upper hull, also called the "raft". This extended well past the sides of the lower, more traditionally shaped hull. A small armored pilot house was fitted on the upper deck towards the bow; however, its position prevented Monitor from firing her guns straight forward. One of Ericsson's prime goals in designing the ship was to present the smallest possible target to enemy gunfire. The ship was 179 feet (54.6 m) long overall, had a beam of 41 feet 6 inches (12.6 m) and had a maximum draft of 10 feet 6 inches (3.2 m). Monitor had a tonnage of 776 tons burthen and displaced 987 long tons (1,003 t). Her crew consisted of 49 officers and enlisted men.
The ship was powered by a single-cylinder horizontal vibrating-lever steam engine, also designed by Ericsson, which drove a propeller whose shaft was nine inches in diameter. The engine used steam generated by two horizontal fire-tube boilers. The engine was designed to give the ship a top speed of 8 knots (15 km/h), but Monitor proved slower in service. Ventilation for the vessel was supplied by two centrifugal blowers near the stern, each of which was powered by a steam engine. One fan circulated air throughout the ship, while the other forced air through the boilers, which depended on this forced draught. Leather belts connected the blowers to their engines and would stretch when wet, often disabling the fans and boilers. The ship's pumps were steam operated, and water would accumulate in the ship if the pumps could not get enough steam to work.
Monitor's turret measured 20 feet (6.1 m) in diameter and 9 feet (2.7 m) high, constructed with 8 inches (200 mm) of armor (11 inches in front at the gun ports), rendering the overall vessel somewhat top heavy. Its rounded shape helped to deflect cannon shot. A pair of steam-powered donkey engines rotated the turret through a set of gears; a full rotation was made in 22.5 seconds during testing on 9 February 1862. Fine control of the turret proved to be difficult; as there was no brake, the steam engines would have to be placed in reverse if the turret overshot its mark, or another full rotation would have to be made. The only way to see out of the turret was through the gun ports; when the guns were not in use, or withdrawn for reloading during battle, heavy iron port stoppers would swing down into place to close the gunports. Including the guns, the turret weighed approximately ; the entire weight rested on an iron spindle that had to be jacked up using a wedge before the turret could rotate. The spindle was in diameter, which gave it ten times the strength needed to prevent the turret from sliding sideways. When not in use, the turret rested on a brass ring on the deck that was intended to form a watertight seal. In service, however, this proved to leak heavily, despite caulking by the crew. The gap between the turret and the deck proved to be a problem as debris and shell fragments entered the gap and jammed the turrets of several Passaic-class monitors, which used the same turret design, during the First Battle of Charleston Harbor in April 1863. Direct hits on the turret with heavy shot could bend the spindle, which could also jam the turret. To gain access to the turret from below, or to hoist up powder and shot during battle, the turret had to rotate to face starboard, which would line up the entry hatch in the floor of the turret with an opening in the deck below. The roof of the turret was lightly built to facilitate any needed exchange of the ship's guns and to improve ventilation, with only gravity holding the roof plates in place.
The turret was intended to mount a pair of 15-inch smoothbore Dahlgren guns, but they were not ready in time and 11-inch (280 mm) Dahlgrens were substituted, weighing approximately 16,000 pounds (7,300 kg) each. Monitor's guns used the standard propellant charge of 15 pounds (6.8 kg) specified by the 1860 ordnance instructions for targets "distant", "near", and "ordinary", established by the gun's designer, Dahlgren himself. They could fire a round shot or shell up to a range of at an elevation of +15°.
The top of the armored deck was only about above the waterline. It was protected by two layers of wrought iron armor. The sides of the "raft" consisted of three to five layers of iron plates, backed by about of pine and oak. Three of the plates extended the full height of the side, but the two innermost plates did not extend all the way down. Ericsson originally intended to use either six 1-inch plates or a single outer plate backed by three plates, but the thicker plate required too much time to roll. The two innermost plates were riveted together while the outer plates were bolted to the inner ones. A ninth plate, only thick and wide, was bolted over the butt joints of the innermost layer of armor. Glass portholes in the deck provided natural light for the interior of the ship; in action these were covered by iron plates.
After the duel between the two ironclads at Hampton Roads, there was concern among some Navy officials who witnessed the battle that Monitor's design might allow for easy boarding by the Confederates. In a letter dated 27 April 1862, Lieutenant Commander O.C. Badger wrote to Lieutenant H. A. Wise, Assistant Inspector of Ordnance, advising the use of "liquid fire", scalding water from the boiler delivered through hoses and pipes and sprayed out via the vents and pilothouse window, to repel enemy boarders. Wise, who boarded and inspected Monitor after the battle, responded in a letter of 30 April 1862: "With reference to the Monitor, the moment I jumped on board of her after the fight I saw that a steam tug with twenty men could have taken the upper part of her in as many seconds ... I hear that hot water pipes are arranged so as to scald the assailants when they may dare to set foot on her." The chance to employ such a tactic never arose, and there are conflicting accounts as to whether such an anti-personnel provision was ever installed.
Construction
Commodore Joseph Smith, Chief of the Bureau of Yards and Docks, sent Ericsson formal notice of the acceptance of his proposal on 21 September 1861. Six days later, Ericsson signed a contract with Bushnell, John F. Winslow and John A. Griswold which stated that the four partners would equally share in the profits or the losses incurred by the construction of the ironclad. There was one major delay, however, over the signing of the actual contract with the government. Welles insisted that if Monitor did not prove to be a "complete success", the builders would have to refund every cent to the government. Winslow balked at this draconian provision and had to be persuaded by his partners to sign after the Navy rejected his attempt to amend the contract. The contract was finally signed on 4 October for a price of $275,000 to be paid in installments as work progressed.
Preliminary work had begun well before that date, however, and Ericsson's consortium contracted with Thomas F. Rowland of the Continental Iron Works at Bushwick Inlet (in modern-day Greenpoint, Brooklyn) on 25 October for construction of Monitor's hull. Her keel was laid the same day. The turret was built and assembled at the Novelty Iron Works in Manhattan, then disassembled and shipped to Bushwick Inlet, where it was reassembled. The ship's steam engines and machinery were constructed at the DeLamater Iron Works, also in Manhattan. Chief Engineer Alban C. Stimers, who had once served aboard Merrimack, was appointed Superintendent of the ship while she was undergoing construction. Although never formally assigned to the crew, he remained aboard her as an inspector during her maiden voyage and battle.
Construction progressed in fits and starts, plagued by a number of short delays in the delivery of iron and occasional shortages of cash, but they did not delay the ship's progress by more than a few weeks. The hundred days allotted for her construction passed on 12 January, but the Navy chose not to penalize the consortium. The name "Monitor", meaning "one who admonishes and corrects wrongdoers", was proposed by Ericsson on 20 January 1862 and approved by Assistant Secretary of the Navy Gustavus Fox. While Ericsson stood on its deck in defiance of all his critics who thought she would never float, Monitor was launched on 30 January 1862 to the cheers of the watching crowd, even those who had bet that the ship would sink straight to the bottom, and commissioned on 25 February.
Even before Monitor was commissioned, she ran an unsuccessful set of sea trials on 19 February. Valve problems with the main engine and one of the fan engines prevented her from reaching the Brooklyn Navy Yard from Bushwick Inlet and she had to be towed there the next day. These issues were easily fixed and Monitor was ordered to sail for Hampton Roads on 26 February, but her departure had to be delayed one day to load ammunition. On the morning of 27 February the ship entered the East River preparatory to leaving New York, but proved to be all but unsteerable and had to be towed back to the navy yard. Upon examination, the steering gear controlling the rudder had been improperly installed and Rowland offered to realign the rudder, which he estimated to take only a day. Ericsson, however, preferred to revise the steering gear by adding an extra set of pulleys as he believed it would take less time. His modification proved to be successful during trials on 4 March. Gunnery trials were successfully performed the previous day, although Stimers twice nearly caused disasters as he did not understand how the recoil mechanism worked on Ericsson's carriage for the 11-inch guns. Instead of tightening them to reduce the recoil upon firing, he loosened them so that both guns struck the back of the turret, fortunately without hurting anybody or damaging the guns.
Monitor employed over forty patented inventions and marked a significant departure from the dominant naval vessels of the time. Ericsson's innovative turret design, although not without flaws, facilitated the widespread adoption of rotating guns on warships in navies worldwide. Because Monitor was an experimental craft, urgently needed, hurriedly constructed, and almost immediately put to sea, a number of problems were discovered during her maiden voyage to Hampton Roads and during the battle there. Yet she was still able to challenge Virginia and prevent her from further destroying the remaining ships in the Union flotilla blockading Hampton Roads.
During the "boom time" of the Civil War, Ericsson could have made a fortune with his inventions used in Monitor, but instead gave the U.S. government all his Monitor patent rights saying it was his "contribution to the glorious Union cause".
Crew
Monitor's crew were all volunteers and totaled 49 officers and enlisted men. The ship required ten officers: a commander, an executive officer, four engineers, one medical officer, two masters and a paymaster. Before Worden was allowed to select, assemble, and commit a crew to Monitor, the vessel had to be completed.
Four of the officers were line officers responsible for the handling of the vessel and the operation of her guns during battle, while the engineering officers were considered a class unto themselves. In Monitor's turret, Greene and Stodder supervised the loading and firing of the two Dahlgrens. Each gun was crewed by eight men. In Worden's report of 27 January 1862 to Welles, he stated he believed 17 men and 2 officers would be the maximum number in the turret that allowed the crew to work without getting in each other's way.
Monitor also required petty officers: among them was Daniel Toffey, Worden's nephew. Worden had selected Toffey to serve as his captain's clerk. Two black Americans were also among the enlisted men in the crew.
Living quarters for the senior officers consisted of eight separate well-furnished cabins, each provided with a small oak table and chair, an oil lamp, shelves and drawers, and a canvas floor covering topped with a rug. The entire crew were given goat-skin mats to sleep on. Lighting for each living area was provided by small skylights in the deck above, which were covered by an iron hatch during battle. The officers' wardroom was located forward of the berth deck, where officers would eat their meals, hold meetings or socialize during what little spare time they had. It was well furnished with an oriental rug, a large oak table and other such items. Ericsson had personally paid for the costs of all the officers' furnishings.
Many details of Monitor's history and insights into everyday crew life have been discovered from correspondence sent by various crew members to family and friends while serving aboard the ironclad. In particular, the correspondence of George S. Geer, who sent more than 80 letters, often referred to as The Monitor Chronicles, to his wife Martha during the entire time of Monitor's service, provides many details and insights into every chapter of the ironclad's short-lived history, offering a rare perspective of a sailor's experience on the naval front during the Civil War. The letters of Acting Paymaster William F. Keeler to his wife Anna also corroborate many of the accounts of affairs that took place aboard Monitor. The letters of Geer and Keeler are available for viewing and are housed at the Mariners' Museum in Virginia. Other crew members were interviewed later in life, like Louis N. Stodder, one of the last crew members to abandon Monitor minutes before she sank in a storm at sea, who was the last surviving crew member of Monitor and lived well into the 20th century.
Service
On 6 March 1862, the ship departed New York bound for Fort Monroe, Virginia, towed by the ocean-going tug Seth Low and accompanied by the gunboats and . Worden, not trusting the seal between the turret and the hull, and ignoring Ericsson's advice, wedged the former in the up position and stuffed oakum and sail cloth in the gap. Rising seas that night washed the oakum away and water poured underneath the turret, as well as through the hawsepipe, various hatches, ventilation pipes, and the two funnels, so that the belts for the ventilation and boiler fans loosened and fell off and the fires in the boilers were nearly extinguished over the course of the next day; this created a toxic atmosphere in the engine room that knocked out most of the engine-room crew. First Assistant Engineer Isaac Newton ordered the engine room abandoned and had the able-bodied crew drag the afflicted engine room hands to the top of the turret where the fresh air could revive them. Both Newton and Stimers worked desperately to get the blowers to work, but they too succumbed to the noxious fumes and were taken above. One fireman was able to punch a hole in the fan box, drain the water, and restart the fan. Later that night, the wheel ropes controlling the ship's rudder jammed, making it nearly impossible to control the ship's heading in the rough seas. Monitor was now in danger of foundering, so Worden signaled Seth Low for help and had Monitor towed to calmer waters closer to shore so she was able to restart her engines later that evening. She rounded Cape Charles around 3:00 pm on 8 March and entered Chesapeake Bay, reaching Hampton Roads at 9:00 pm, well after the first day's fighting in the Battle of Hampton Roads had concluded.
Battle of Hampton Roads
On 8 March 1862, Virginia, commanded by Commander Franklin Buchanan, was ready to engage the Union flotilla blockading the James River. Virginia was powered by Merrimack's original engines, which had been condemned by the US Navy before her capture. The ship's chief engineer, H. Ashton Ramsay, had served in Merrimack before the Civil War broke out and knew of the engines' unreliability, but Buchanan pushed forward undaunted.
The slow-moving Virginia attacked the Union blockading squadron in Hampton Roads, Virginia, destroying the sail frigates Cumberland and Congress. Early in the battle, the steam frigate Minnesota ran aground while attempting to engage Virginia, and remained stranded throughout the battle. Virginia, however, was unable to attack Minnesota before daylight faded. That day Buchanan was severely wounded in the leg and was relieved of command by Catesby ap Roger Jones.
Days before the battle, a telegraph cable had been laid between Fortress Monroe, which overlooked Hampton Roads, and Washington, so the capital was immediately informed of the dire situation after the initial battle. Many were now concerned Virginia would put to sea and begin bombarding cities such as New York, while others feared she would ascend the Potomac River and attack Washington. In an emergency meeting among President Lincoln, Secretary of War Edwin M. Stanton, Secretary Welles and other senior naval officers, inquiries were made about Monitor's ability to stop Virginia's prospect of further destruction. When the temperamental Stanton learned that Monitor had only two guns, he expressed contempt and rage as he paced back and forth, further increasing the anxiety and despair among members of the meeting. Assurances from Admiral Dahlgren and other officers that Virginia was too massive to effectively approach Washington and that Monitor was capable of the challenge offered him no consolation. After further deliberations Lincoln was finally assured, but Stanton remained almost in a state of terror and sent telegrams to various governors and mayors of the coastal states warning them of the danger. Subsequently, Stanton approved a plan to load some sixty canal boats with stone and gravel and sink them in the Potomac, but Welles was able to convince Lincoln at the last moment that such a plan would only prevent Monitor and other Union ships from reaching Washington, and that the barges should only be sunk if and when Virginia was able to make her way up the Potomac.
About 9:00 pm, Monitor finally arrived on the scene, only to discover the destruction that Virginia had already wrought on the Union fleet. Upon reaching Hampton Roads, Worden was ordered to anchor alongside Roanoke and report to Captain John Marston, who briefed Worden on the situation and gave him further orders to protect the grounded Minnesota. By midnight, under the cover of darkness, Monitor quietly pulled up alongside and behind Minnesota and waited.
Duel of the ironclads
The next morning at about 6:00 am, Virginia, accompanied by her escorting gunboats of the James River Squadron, got underway from Sewell's Point to finish off Minnesota and the rest of the blockaders, but was delayed sailing out into Hampton Roads by heavy fog until about 8:00 am. In Monitor, Worden was already at his station in the pilot house while Greene took command of the turret. Samuel Howard, Acting Master of Minnesota, who was familiar with Hampton Roads and its varying depths and shallow areas, had volunteered the night before to serve as pilot and was accepted, while Quartermaster Peter Williams steered the vessel throughout the battle (Williams was later awarded the Medal of Honor for this act). The speaking tube used to communicate between the pilothouse and the turret broke early in the action, so Keeler and Toffey had to relay commands from Worden to Greene. As Virginia approached, she began firing at Minnesota from more than a mile away, a few of her shells hitting the vessel. When the firing was heard in the distance, Greene sent Keeler to the pilot house for permission to open fire as soon as possible, where Worden ordered, "Tell Mr. Greene not to fire till I give the word, to be cool and deliberate, to take sure aim and not waste a shot."
Monitor, to the surprise of Virginia's crew, had emerged from behind Minnesota and positioned herself between the Confederate ironclad and the grounded ship, preventing Virginia from further engaging the vulnerable wooden vessel at close range. At 8:45 am, Worden gave the order to fire, and Greene fired the first shots of the battle between the two ironclads, which deflected harmlessly off the Confederate ship. During the battle, Monitor fired solid shot, about once every eight minutes, while Virginia fired shell exclusively. The ironclads fought, generally at close range, for about four hours, at distances ranging from a few yards to more than a hundred, before the action ended at 12:15 pm. Both ships were constantly in motion, maintaining a circular pattern. Because of Virginia's weak engines, great size and weight, and deep draft, she was slow and difficult to maneuver, taking half an hour to complete a 180-degree turn.
During the engagement, the controls of the machinery driving Monitor's turret spindle began to malfunction, making it extremely difficult to turn and stop the turret at a given position, so the crew simply let the turret turn continuously and fired their guns "on the fly" as they bore on Virginia. Monitor received several direct hits on the turret, causing some bolts to violently shear off and ricochet around inside. The deafening sound of the impacts stunned some of the crew, causing nose and ear bleeding. However, neither vessel was able to sink or seriously damage the other. At one point, Virginia attempted to ram, but only struck Monitor a glancing blow and did no damage. The collision did, however, aggravate the damage to Virginia's bow from when she had previously rammed Cumberland. Monitor was also unable to do significant damage to Virginia, possibly because her guns were firing with reduced charges, on the advice of Commander John Dahlgren, the guns' designer, who lacked the "preliminary information" needed to determine what amount of charge was needed to "pierce, dislocate or dislodge iron plates" of various thicknesses and configurations. During the battle, Stodder was stationed at the wheel that controlled the turning of the turret, but at one point, as he was leaning against its side, the turret received a direct hit directly opposite him which knocked him clear across the inside, rendering him unconscious. He was taken below to recover and was relieved by Stimers.
The two vessels were pounding each other at such close range that they collided five times. By 11:00 am, Monitor's supply of shot in the turret had been exhausted. With one of the gun port covers jammed shut, she hauled off to shallow waters to resupply the turret and attempt to repair the damaged hatch, which could not be fixed. During the lull in the battle, Worden climbed through the gun port out onto the deck to get a better view of the overall situation. Virginia, seeing Monitor turn away, turned her attention to Minnesota and fired shots that set the wooden vessel ablaze, also destroying the nearby tugboat Dragon. When the turret was resupplied with ammunition, Worden returned to battle with only one gun able to fire.
Towards the end of the engagement, Worden directed Williams to steer Monitor around the stern of the Confederate ironclad; Lieutenant Wood fired Virginia's 7-inch Brooke gun at Monitor's pilothouse, striking the forward side directly beneath the sight hole and cracking the structural "iron log" along the base of the narrow opening just as Worden was peering out. Worden was heard to cry out, "My eyes—I am blind!" Others in the pilothouse had also been hit with fragments and were bleeding. Temporarily blinded by shell fragments and gunpowder residue from the explosion, and believing the pilothouse to be severely damaged, Worden ordered Williams to sheer off into shallow water, where Virginia with her deep draft could not follow. There Monitor drifted idly for about twenty minutes. At the time the pilothouse was struck, Worden's injury was known only to those in the pilothouse and immediately nearby. With Worden severely wounded, command passed to the executive officer, Samuel Greene. Taken by surprise, he was briefly undecided as to what action to take next, but after assessing the damage soon ordered Monitor to return to the battle area.
Shortly after Monitor withdrew, Virginia ran aground, at which time Jones came down from the spar deck to find the gun crews not returning fire. When Jones demanded to know why, Lieutenant Eggleston explained that powder was low and precious, and that given Monitor's resistance to shot after two hours of battle, continued firing at that point would be a waste of ammunition. Virginia soon managed to break away and headed back towards Norfolk for needed repairs, believing that Monitor had withdrawn from battle. Greene did not pursue Virginia and, like Worden, was under orders to stay with and protect Minnesota, an action for which he was later criticized.
As a result of the duel between the two ironclads, Monitor had been struck twenty-two times, including nine hits to the turret and two hits to the pilothouse. She had managed to fire forty-one shots from her pair of Dahlgren guns. Virginia had sustained ninety-seven indentations to her armor from the fire of Monitor and other ships. Neither ship had sustained any significant damage. In the opinion of Virginia's commander Jones and her other officers, Monitor could have sunk their ship had she hit the vessel at the waterline.
The battle between these two ships was considered the most significant naval engagement of the Civil War. The battle itself was largely considered a draw, though it could be argued Virginia did slightly more damage. Monitor successfully defended Minnesota and the rest of the Union blockading force, while Virginia was unable to complete the destruction she had started the previous day. The battle between the two ironclads marked a turning point in the way naval warfare would be fought in the future. Strategically, nothing had changed: the Union still controlled Hampton Roads and the Confederates still held several rivers and Norfolk, making it a strategic victory for the North. The battle of the ironclads led to what was referred to as "Monitor fever" in the North, and during the course of the war 60 improved ironclads based on Monitor's design were built.
Events after the battle
Immediately following the battle, Stimers telegraphed Ericsson, congratulating and thanking him for making it possible to confront the Confederate ironclad and for "saving the day". No sooner had Monitor weighed anchor than numerous small boats flocked around the ship, while spectators on shore gathered to congratulate the crew for what they regarded as their victory over Virginia. Assistant Secretary Fox, who had observed the entire battle from aboard Minnesota, came aboard Monitor and jokingly told her officers, "Well gentlemen, you don't look as though you just went through one of the greatest naval conflicts on record". A small tug soon came alongside and the blinded Worden was brought up from his cabin while crew members and spectators cheered. He was taken to Fort Monroe for preliminary treatment, then to a hospital in Washington.
Stimers and Newton soon began repairing the damage to the pilot house, reconfiguring its sides from an upright position to a slope of thirty degrees to better deflect shot. During this time, Mrs. Worden personally brought news of her husband's progress and recovery and was optimistic, informing the crew his eyesight would soon return but that he would be laid up for some time. She also informed them that President Lincoln had personally paid Worden a visit to extend his gratitude. Worden was later taken to his summer home in New York and remained unconscious for three months. He returned to naval service in 1862 as captain of Montauk, another Monitor-type ironclad.
The Confederates were also celebrating what they considered a victory, as crowds of spectators gathered along the banks of the Elizabeth River, cheering and waving flags, handkerchiefs and hats as Virginia, displaying the captured ensign of Congress, passed along up the river. The Confederate government was ecstatic and immediately promoted Buchanan to Admiral.
Both the Union and Confederacy soon came up with plans for defeating the other's ironclad; oddly, these did not depend on their own ironclads. The Union Navy chartered a large ship (the sidewheeler Vanderbilt) and reinforced her bow with steel specifically to be used as a naval ram, provided Virginia steamed far enough out into Hampton Roads.
On 11 April, Virginia, accompanied by a number of gunboats, steamed into Hampton Roads to Sewell's Point at the southeast edge, almost over to Newport News, in a challenge to lure Monitor into battle. Virginia fired a few shots ineffectively at very long range while Monitor returned fire, remaining near Fort Monroe, ready to fight if Virginia came to attack the Federal force congregated there. Furthermore, Vanderbilt was in position to ram Virginia if she approached the fort, but Virginia did not take the bait. In a further attempt to entice Monitor closer to the Confederate side so she could be boarded, the James River Squadron moved in and captured three merchant ships, the brigs Marcus and Sabout, and the schooner Catherine T. Dix. These had been grounded and abandoned when they sighted Virginia entering the Roads. Their flags were then hoisted "Union-side down" to taunt Monitor into a fight as they were towed back to Norfolk. In the end, both sides had failed to provoke a fight on their terms.
The Confederate Navy had originally devised a plan by which the James River Squadron would swarm Monitor with a party of men who would board and capture the vessel, using heavy hammers to drive iron wedges under the turret to disable it, and covering the pilothouse with a wet sail to blind the pilot. Others would throw combustibles down the ventilation openings and smoke holes. At one point Jones made such an attempt to board the vessel, but Monitor managed to slip away around the stern of Virginia in time.
There was a second confrontation on 8 May, when Virginia came out while Monitor and four other Federal ships bombarded Confederate batteries at Sewell's Point. The Federal ships retired slowly to Fort Monroe, hoping to lure Virginia into the Roads. She did not follow, however, and after firing a gun to windward as a sign of contempt, anchored off Sewell's Point. Later, when Confederate forces abandoned Norfolk on 11 May 1862, they were forced to destroy Virginia.
Battle of Drewry's Bluff
After the destruction of Virginia, Monitor was free to assist the Union Army in General McClellan's campaign against Richmond. As the Navy always gave command to officers based on seniority, Greene was replaced by Lieutenant Thomas O. Selfridge the day after the battle. Two days later, Selfridge was in turn relieved by Lieutenant William Nicholson Jeffers on 15 May 1862. Monitor was now part of a flotilla under the command of Admiral John Rodgers aboard Galena, and, along with three other gunboats, steamed up the James River and engaged the Confederate batteries at Drewry's Bluff. The force had instructions to coordinate its efforts with McClellan's forces on land and push on towards Richmond to bombard the city into surrender if possible. Without any assistance, the task force got within about seven miles of the Confederate capital but could not proceed further because of sunken vessels and debris placed in the river to block passage. There were also artillery batteries at Fort Darling overlooking and guarding the approach, along with other heavy guns and sharpshooters positioned along the river banks. The fort was strategically situated on the west bank of the James River, atop a high bluff overlooking the bend in the river. Monitor was of little help in the assault because the confinement and small gun ports of her turret would not allow her to elevate her guns sufficiently to engage the Confederate batteries at close range, so she had to fall back and fire from a greater distance, while the other gunboats were unable to overcome the fortifications on their own. After Monitor received only a few hits, without incurring any damage, the Confederates, many of whom were former crew members of Virginia well aware of her ability to withstand cannon shot even at close range, concentrated their guns on the other ships, especially Galena, which sustained considerable damage and moderate casualties. After a near four-hour artillery duel and numerous hits sustained overall, the flotilla was unable to neutralize the fortification and had to turn back. Not a single Union ship reached Richmond until near the end of the war, when the city was finally evacuated by the Confederates.
After the battle at Drewry's Bluff, Monitor remained on the James River, providing support along with Galena and other gunboats to McClellan's troops at various points along the river, including Harrison's Landing, until the campaign ended in August. However, most of the time spent on the river was marked by inactivity and hot weather, which had a negative effect on the morale of Monitor's crew. During the long, hot summer, several crew members became sick and were transferred to Hampton Roads, and various officers were replaced, including Newton, while Jeffers was relieved by Commander Thomas H. Stevens, Jr. on 15 August. By the end of August, Monitor was ordered back to Hampton Roads and dropped anchor near the sunken Cumberland at Newport News Point on 30 August, much to the approval of the crew. Monitor's sole purpose now was to blockade the James River against any advance by the newly constructed CSS Richmond, an ironclad ram.
Repairs and refit
In September, Captain John P. Bankhead received orders to take command of Monitor, relieving Stevens, and was sent to Hampton Roads to take charge of the vessel. Shortly after Bankhead assumed command, Monitor's engines and boilers were condemned by a board of survey, which recommended that they be completely overhauled. On 30 September the ironclad was sent to the Washington Navy Yard for repairs, arriving there on 3 October.
Upon arrival at Washington, Monitor and her crew were greeted by a crowd of thousands of cheering admirers who came to see the ship that "saved the nation". Monitor was now a premier tourist attraction, and the crowd was soon allowed on board to tour the vessel. During this time the vessel was picked clean of artifacts for souvenirs by the touring civilians who came aboard. When Stodder and others came to close up the dock and ship one evening, he noted, "When we came up to clean that night there was not a key, doorknob, escutcheon – there wasn't a thing that hadn't been carried away."
Before Monitor was put into dry dock for repairs, Lincoln, Fox, various officials and a few of Worden's close friends arrived to ceremoniously review the vessel and pay their respects to the crew and former commander Worden, who after a long and partial recovery arrived for the occasion. Entire army regiments were also directed to come by the navy yard to review the ship and honor the crew. Monitor's crew assembled on deck in formation with their officers in front, while Lincoln, Fox and other guests stood near the turret. When Worden, with part of his face blackened from the wounds he received at Hampton Roads, came aboard, the heavy guns in the navy yard were fired in salute. Lincoln came forward, greeted Worden and then introduced him to some of the others. After the formal greeting, the crew swarmed around Worden, embraced and shook hands with their former commander and thanked God for his recovery and return. Worden called each of them by name, spoke warmly with them and complimented each of them personally. When order was restored, the President gave a short speech about Worden's career. At Fox's request, Worden gave a speech to the gathering about his voyage from New York to Hampton Roads, the trials they faced along the way and the great battle between Monitor and Virginia, paying tribute to many of the officers and men involved. In closing he gave special thanks to Ericsson, Lincoln, Welles and all who had made construction of Monitor possible.
While Monitor was undergoing repairs, her crew was put aboard another vessel and eventually granted a furlough by Bankhead, who himself went on leave. For approximately six weeks the vessel remained in dry dock while her bottom was scraped clean, the engines and boilers were overhauled, the entire vessel was cleaned and painted, and a number of improvements were made, including an iron shield around the top of the turret. To make the vessel more seaworthy, a funnel-shaped smokestack was placed over the smoke outlet and taller fresh-air vents were installed. The berth deck below was also enlarged and raised by removing some of the side storerooms and placing them below, though this reduced the height of the interior, which now barely allowed the crew to stand upright. Several cranes were also added, and interior improvements made the confining environment more livable. A large blower operated by its own engine was installed, drawing fresh air down through the pilothouse. During this time the two Dahlgren guns were each engraved with large letters, MONITOR & MERRIMAC – WORDEN and MONITOR & MERRIMAC – ERICSSON, respectively. Additional iron plates were installed covering the dents from the previous battles, each plate inscribed with the name of the source of the shot that caused the dent, e.g. Merrimack or Fort Darling. Stanchions were also installed around the perimeter of the freeboard with a rope strung through each, making it safer to walk about the deck amid stormy weather and rough seas. Monitor was finally taken out of dry dock on 26 October, and by November the ship was fully repaired and ready to return to service.
Final voyage
On 24 December 1862, orders were issued directing Monitor to Beaufort, North Carolina, to join other Union warships for a joint Army-Navy expedition against Wilmington, North Carolina, after which she would join the blockade off Charleston. The orders were received by the crew on Christmas Day; some of them had been aboard Monitor on her harrowing journey from New York to Hampton Roads in March and were not pleased with the prospect of taking to the high seas once again. Dana Greene remarked, "I do not consider this steamer a sea going vessel".
The crew celebrated Christmas aboard Monitor while berthed at Hampton Roads in what was described as a most merry fashion, while many other celebrations were occurring along the shore. The ship's cook was paid one dollar to prepare a meal for the crew befitting the day; it was received with mixed opinion. That day, Monitor was made ready for sea, her crew under strict orders not to discuss the impending voyage with anyone, but bad weather delayed her departure until 29 December.
While the design of Monitor was well-suited for river combat, her low freeboard and heavy turret made her highly unseaworthy in rough waters. Under the command of Bankhead, Monitor put to sea on 29 December, under tow from the steamship Rhode Island, and a heavy storm developed off Cape Hatteras, North Carolina. Using chalk and a blackboard, Bankhead communicated to Rhode Island that if Monitor needed help she would signal with a red lantern.
Monitor was soon in trouble as the storm increased in ferocity. Large waves splashed over and completely covered the deck and pilot house, so the crew temporarily rigged the wheel atop the turret, which was manned by helmsman Francis Butts. Water continued flooding into the vents and ports, and the ship began rolling uncontrollably in the high seas. Sometimes she would drop into a wave with such force that the entire hull trembled. Leaks were appearing everywhere. Bankhead ordered the engineers to start the Worthington pumps, which temporarily stemmed the rising water, but soon Monitor was hit by a squall and a series of violent waves, and water continued to work its way into the vessel. Just as the Worthington pump could no longer keep pace with the flooding, a call came from the engine room that water was gaining there. Realizing the ship was in serious trouble, Bankhead signaled Rhode Island for help and hoisted the red lantern next to Monitor's white running light atop the turret. He then ordered the anchor dropped to stop the ship's rolling and pitching, but with little effect, making it no easier for the rescue boats to get close enough to receive her crew. He then ordered the towline cut and called for volunteers; Stodder, along with crewmates John Stocking and James Fenwick, volunteered and climbed down from the turret, but eyewitnesses said that as soon as they were on deck, Fenwick and Stocking were swept overboard and drowned. Stodder managed to hang onto the safety lines around the deck and finally cut through the towline with a hatchet. At 11:30 pm, Bankhead ordered the engineers to stop the engines and divert all available steam to the large Adams centrifugal steam pump, but with reduced steam output from a boiler being fed wet coal, it too was unable to stem the rapidly rising water. After all of the steam pumps had failed, Bankhead ordered some of the crew to man the hand pumps and organized a bucket brigade, but to no avail.
Greene and Stodder were among the last men to abandon ship, remaining with Bankhead, who was the last to leave the sinking Monitor alive. In his official report on the loss of Monitor to the Navy Department, Bankhead praised Greene and Stodder for their heroic efforts, writing, "I would beg leave to call the attention of the Admiral and of the Department of the particularly good conduct of Lieutenant Greene and Acting Master Louis N. Stodder, who remained with me until the last, and by their example did much toward inspiring confidence and obedience on the part of the others."
After a frantic rescue effort, Monitor finally capsized and sank, stern first, approximately 16 miles (26 km) southeast of Cape Hatteras, with the loss of sixteen men, including four officers, some of whom remained in the turret, which detached as the ship capsized. Forty-seven men were rescued by the lifeboats from Rhode Island. Bankhead, Greene and Stodder barely managed to get clear of the sinking vessel and survived the ordeal, though they suffered from exposure in the icy winter sea. After his initial recovery, Bankhead filed his official report, as did the commanding officer of Rhode Island, stating that the officers and men of both Monitor and Rhode Island had done everything within their ability to keep Monitor from sinking. The Navy did not find it necessary to commission a board of inquiry to investigate the affair and took no action against Bankhead or any of his officers.
Some time later a controversy emerged over why Monitor sank. In the Army and Navy Journal, Ericsson accused the crew of drunkenness during the storm, claiming they were consequently unable to prevent the vessel from sinking. Stodder vigorously defended the crew, rebuking Ericsson's characterization of the crew and events, and wrote to Pierce that Ericsson "covers up defects by blaming those that are now dead", pointing out that a number of unavoidable events and circumstances led to the ship's sinking, foremost being the overhang between the upper and lower hulls, which came loose and partially separated during the storm from slamming into the violent waves. Stodder's account was corroborated by other shipmates.
Rediscovery
The Navy tested an "underwater locator" in August 1949 by searching an area south of the Cape Hatteras Lighthouse for the wreck of Monitor. It found an object bulky enough to be a shipwreck that was thought to be Monitor, but powerful currents thwarted attempts by divers to investigate. Retired Rear Admiral Edward Ellsberg proposed using external pontoons to raise the wreck in 1951, the same method of marine salvage he had used on the sunken submarine S-51, at a cost of $250,000. Four years later, Robert F. Marx claimed to have discovered the wreck based on the idea that she had drifted into shallow water north of the lighthouse before sinking. Marx said he had dived on the wreck and placed a Coke bottle with his name on it in one of the gun barrels, although he never provided any proof of his story.
Interest in locating the ship revived in the early 1970s, and Duke University, the National Geographic Society and the National Science Foundation sponsored an expedition in August 1973 to search for the wreck using a towed sonar system. The Duke team was led by John G. Newton (no known relation to the Isaac Newton who served on Monitor). On 27 August, Monitor was discovered near Cape Hatteras, almost 111 years after sinking. The team sent a camera down to photograph the wreck, but the pictures were so fuzzy as to be useless; on a second attempt the camera snagged something on the wreck and was lost. The sonar images did not match what the team expected the wreck to look like until they realized that the sinking vessel had turned over while descending and was resting on the bottom upside down. The team announced their discovery on 8 March 1974. Another expedition was mounted that same month to confirm the discovery, and the research submersible Alcoa Sea Probe was able to take still photos and video of the wreck that confirmed it was Monitor.
These photos revealed that the wreck was disintegrating, and the discovery raised another issue: since the Navy had formally abandoned the wreck in 1953, it could be exploited by divers and private salvage companies, as it lay outside North Carolina's territorial limits. To preserve the ship, the wreck, and everything around it, the area within a one-mile radius of the wreck was designated as the Monitor National Marine Sanctuary, the first U.S. marine sanctuary, on 30 January 1975. Monitor was also designated a National Historic Landmark on 23 June 1986.
In 1977, scientists were finally able to view the wreckage in person as the submersible Johnson Sea Link was used to inspect it. The Sea Link was able to ferry divers down to the sunken vessel and retrieve small artifacts. U.S. Navy interest in raising the entire ship ended in 1978 when Captain Willard F. Searle Jr. calculated the cost and possible damage expected from the operation: $20 million to stabilize the vessel in place, or as much as $50 million to bring all of it to the surface. Research continued and artifacts continued to be recovered, including the ship's anchor in 1983. The growing number of relics required conservation and a proper home so the U.S. National Oceanic and Atmospheric Administration (NOAA), in charge of all U.S. marine sanctuaries, selected the Mariners' Museum on 9 March 1987 after considering proposals from several other institutions.
Recovery
Initial efforts in 1995 by Navy and NOAA divers to raise the warship's propeller were foiled by an abnormally stormy season off Cape Hatteras. Realizing that raising the whole wreck was impractical, both for financial reasons and because the hull could not be brought up intact, NOAA developed a comprehensive plan to recover the most significant parts of the ship, namely her engine, propeller, guns, and turret. It estimated that the plan would cost over 20 million dollars to implement over four years. The Department of Defense Legacy Resource Management Program contributed $14.5 million. Navy divers, mainly from the Navy's two Mobile Diving and Salvage Units, would perform the bulk of the work, which also allowed them to train in deep-sea conditions and to evaluate new equipment.
Another effort to raise Monitor's propeller was successful on 8 June 1998, although the amount of effort required to work in the difficult conditions off Cape Hatteras had been underestimated and the fewer than 30 divers used were nearly overwhelmed. The 1999 dive season was mostly research oriented as divers investigated the wreck in detail, planning how to recover the engine and determining whether they could stabilize the hull so that it would not collapse onto the turret. In 2000, the divers shored up the port side of the hull with bags of grout, installed the engine recovery system (an external framework to which the engine would be attached) in preparation for the next season, and made over five times as many dives as they had the previous season.
The 2001 dive season concentrated on raising the ship's steam engine and condenser. Hull plates had to be removed to access the engine compartment, and both the engine and the condenser had to be separated from the ship, the surrounding wreckage and each other. A Mini Rover ROV was used to give the support staff above water visibility of the wreck and the divers. The engine was raised on 16 July and the condenser three days later by the crane barge Wotan. The Navy also evaluated saturation diving on Monitor that season, which proved very successful, allowing divers to maximize their time on the bottom. Because of the wreck's depth, the surface-supplied divers also evaluated the use of heliox, which likewise proved successful once the dive tables were adjusted.
Much like the previous year, the 2002 dive season was dedicated to lifting the turret to the surface. Around 160 divers were assigned to remove the parts of the hull, including the armor belt, that lay on top of the turret using chisels, exothermic cutting torches and hydroblasters. They removed as much of the debris from inside the turret as possible to reduce the weight to be lifted. This was usually concreted coal as one of the ship's coal bunkers had ruptured and dumped most of its contents into the turret. The divers prepared the turret roof for the first stage of the lift by excavating underneath the turret and placed steel beams and angle irons to reinforce it for its move onto a lifting platform for the second stage. A large, eight-legged lifting frame, nicknamed the "spider", was carefully positioned over the turret to move it onto the platform and the entire affair would be lifted by the crane mounted on the Wotan. The divers discovered one skeleton in the turret on 26 July before the lift and spent a week carefully chipping about half of it free of the concreted debris; the other half was inaccessible underneath the rear of one of the guns.
With Tropical Storm Cristobal bearing down on the recovery team, and time and money running out, the team made the decision to raise the turret on 5 August 2002, after 41 days of work, and the gun turret broke the surface at 5:30 pm to the cheers of everyone aboard Wotan and the other recovery ships nearby. As archaeologists examined the contents of the turret after it had been landed aboard Wotan, they discovered a second skeleton, but its removal did not begin until the turret arrived at the Mariners' Museum for conservation. The remains of these sailors were transferred to the Joint POW/MIA Accounting Command (JPAC) at Hickam Air Force Base, Hawaii, in the hope that they could be identified.
Only 16 of the crew were not rescued by Rhode Island before Monitor sank, and the forensic anthropologists at JPAC were able to rule out the three missing black crewmen based on the shape of the femurs and skulls. Among the most promising of the remaining candidates were crew members Jacob Nicklis, Robert Williams and William Bryan, but a decade passed without their identities being discovered. On 8 March 2013 their remains were buried at Arlington National Cemetery with full military honors.
In 2003 NOAA divers and volunteers returned to Monitor with the goal of obtaining overall video of the site to create a permanent record of the conditions on the wreck after the turret recovery. Jeff Johnston of the Monitor National Marine Sanctuary (MNMS) also wanted a definitive image of the vessel's pilothouse. During the dives, Monitor's iron pilothouse was located near the bow of the vessel, in its inverted position, and documented for the first time by videographer Rick Allen of Nautilus Productions.
Conservation of the propeller was completed nearly three years after its recovery, and it is on display in the Monitor Center at the Mariners' Museum. As of 2013, conservation of the engine, its components, the turret and the guns continues. The Dahlgren guns were removed from the turret in September 2004 and placed in their own conservation tanks. Among the artifacts recovered from the sunken vessel was a red signal lantern, possibly the one used to send a distress signal to Rhode Island and the last thing to be seen before Monitor sank in 1862; it was the first object recovered from the site, in 1977. A gold wedding band was also recovered from the hand of the skeletal remains of one of Monitor's crew members found in the turret.
Northrop Grumman Shipyard in Newport News constructed a full-scale non-seaworthy static replica of Monitor. The replica was laid down in February 2005 and completed just two months later on the grounds of the Mariners' Museum. The Monitor National Marine Sanctuary conducts occasional dives on the wreck to monitor and record any changes in its condition and its environment.
Memorials
The Greenpoint Monitor Monument in McGolrick Park, Brooklyn, depicts a sailor from Monitor pulling on a capstan. The sculptor Antonio de Filippo was commissioned by the State of New York in the 1930s for a bronze statue to commemorate the Battle of Hampton Roads, John Ericsson, and the crew of the ship. It was dedicated on 6 November 1938. A vandal doused it with white paint on 7 January 2013.
In 1995 the U.S. Postal Service issued a stamp commemorating USS Monitor and CSS Virginia, depicting the two ships engaged in their famous battle at Hampton Roads.
The 150th anniversary of the ship's loss prompted several events in commemoration. A memorial to Monitor and her lost crew members was erected in the Civil War section of Hampton National Cemetery by NOAA's Office of National Marine Sanctuaries, together with the U.S. Navy and the U.S. Department of Veterans Affairs, and dedicated on 29 December 2012. The Greenpoint Monitor Museum commemorated the ship and her crew with an event on 12 January 2013 at the grave sites of those Monitor crew members buried in Green-Wood Cemetery in Brooklyn, followed by a service in the cemetery's chapel.
New Jersey–based indie rock band Titus Andronicus named their critically acclaimed second album, 2010's The Monitor, for the ship. Featured on the album's sleeve are the crewmen of Monitor, taken from a tintype portrait. The album's interwoven references to the Civil War include speeches and writings from the period, as well as the side-long closing track "The Battle of Hampton Roads", which refers in prominent detail to Monitor's encounter with CSS Virginia. Singer/guitarist Patrick Stickles commented while making the album that he was so inspired by Ken Burns's The Civil War and the ship itself that he decided to name Titus Andronicus's second album in its honor.
Legacy
Monitor gave her name to a new type of mastless, low-freeboard warship that mounted its armament in turrets. Many more were built, including river monitors, and they played key roles in Civil War battles on the Mississippi and James Rivers. The breastwork monitor was developed during the 1860s by Sir Edward Reed, Chief Constructor of the Royal Navy, as an improvement of the basic Monitor design. Reed gave these ships a superstructure to increase seaworthiness and to raise the freeboard of the gun turrets so they could be worked in all weathers. The superstructure was armored to protect the bases of the turrets, the funnels and the ventilator ducts in what he termed a breastwork. The ships were conceived as harbor defense ships with little need to leave port. Reed took advantage of the lack of masts and designed the ships with one twin-gun turret at each end of the superstructure, each able to turn and fire in a 270° arc. These ships were described by Admiral George Alexander Ballard as being like "full-armoured knights riding on donkeys, easy to avoid but bad to close with". Reed later developed the design into the Devastation class, the first ocean-going turret ships without masts and the direct ancestors of the pre-dreadnought battleships and the dreadnoughts.
In popular culture
The battle between the Monitor and the Confederate ironclad CSS Virginia was reenacted using scale models in the 1936 film Hearts in Bondage from Republic Pictures. The battle was also dramatized in the 1991 made-for-television movie Ironclads, produced by Ted Turner.
| Technology | Specific seacraft | null |
32245 | https://en.wikipedia.org/wiki/Universal%20property | Universal property | In mathematics, more specifically in category theory, a universal property is a property that characterizes up to an isomorphism the result of some constructions. Thus, universal properties can be used for defining some objects independently from the method chosen for constructing them. For example, the definitions of the integers from the natural numbers, of the rational numbers from the integers, of the real numbers from the rational numbers, and of polynomial rings from the field of their coefficients can all be done in terms of universal properties. In particular, the concept of universal property allows a simple proof that all constructions of real numbers are equivalent: it suffices to prove that they satisfy the same universal property.
Technically, a universal property is defined in terms of categories and functors by means of a universal morphism (see Formal definition, below). Universal morphisms can also be thought of more abstractly as initial or terminal objects of a comma category (see Connection with comma categories, below).
Universal properties occur almost everywhere in mathematics, and understanding them allows general properties of universal constructions to be used to easily prove results that would otherwise need tedious verification. For example, given a commutative ring R, the field of fractions of the quotient ring of R by a prime ideal p can be identified with the residue field of the localization of R at p; that is, R_p/pR_p ≅ Frac(R/p) (all these constructions can be defined by universal properties).
Other objects that can be defined by universal properties include: all free objects, direct products and direct sums, free groups, free lattices, Grothendieck group, completion of a metric space, completion of a ring, Dedekind–MacNeille completion, product topologies, Stone–Čech compactification, tensor products, inverse limit and direct limit, kernels and cokernels, quotient groups, quotient vector spaces, and other quotient spaces.
Motivation
Before giving a formal definition of universal properties, we offer some motivation for studying such constructions.
The concrete details of a given construction may be messy, but if the construction satisfies a universal property, one can forget all those details: all there is to know about the construction is already contained in the universal property. Proofs often become short and elegant if the universal property is used rather than the concrete details. For example, the tensor algebra of a vector space is slightly complicated to construct, but much easier to deal with by its universal property.
Universal properties define objects uniquely up to a unique isomorphism. Therefore, one strategy to prove that two objects are isomorphic is to show that they satisfy the same universal property.
Universal constructions are functorial in nature: if one can carry out the construction for every object in a category C then one obtains a functor on C. Furthermore, this functor is a right or left adjoint to the functor U used in the definition of the universal property.
Universal properties occur everywhere in mathematics. By understanding their abstract properties, one obtains information about all these constructions and can avoid repeating the same analysis for each individual instance.
Formal definition
To understand the definition of a universal construction, it is important to look at examples. Universal constructions were not defined out of thin air, but were rather defined after mathematicians began noticing a pattern in many mathematical constructions (see Examples below). Hence, the definition may not make sense at first, but will become clear when one reconciles it with concrete examples.
Let F : D → C be a functor between categories C and D. In what follows, let X be an object of C, let A and A′ be objects of D, and let h : A → A′ be a morphism in D.
Then, the functor F maps A, A′ and h in D to F(A), F(A′) and F(h) in C.
A universal morphism from X to F is a unique pair (A, u : X → F(A)), consisting of an object A of D and a morphism u of C, which has the following property, commonly referred to as a universal property:
For any morphism of the form f : X → F(A′)
in C, there exists a unique morphism h : A → A′ in D such that F(h) ∘ u = f.
We can dualize this categorical concept. A universal morphism from F to X is a unique pair (A, u : F(A) → X) that satisfies the following universal property:
For any morphism of the form f : F(A′) → X in C, there exists a unique morphism h : A′ → A in D such that u ∘ F(h) = f.
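In symbols, as a compact restatement of the two definitions just given (using the same names X, F, A, A′, u as in the text), the two universal properties read:
\[
\begin{aligned}
&\text{from } X \text{ to } F: &&\forall\, f\colon X \to F(A')\quad \exists!\, h\colon A \to A' \ \text{ such that } \ F(h)\circ u = f,\\
&\text{from } F \text{ to } X: &&\forall\, f\colon F(A') \to X\quad \exists!\, h\colon A' \to A \ \text{ such that } \ u\circ F(h) = f.
\end{aligned}
\]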
Note that in each definition, the arrows are reversed. Both definitions are necessary to describe universal constructions which appear in mathematics; but they also arise due to the inherent duality present in category theory.
In either case, we say that the pair (A, u) which behaves as above satisfies a universal property.
Connection with comma categories
Universal morphisms can be described more concisely as initial and terminal objects in a comma category (i.e. one where morphisms are seen as objects in their own right).
Let F : D → C be a functor and X an object of C. Then recall that the comma category (X ↓ F) is the category where
Objects are pairs of the form (B, f : X → F(B)), where B is an object in D
A morphism from (B, f : X → F(B)) to (B′, f′ : X → F(B′)) is given by a morphism h : B → B′ in D such that F(h) ∘ f = f′
Now suppose that the object (A, u) in (X ↓ F) is initial. Then
for every object (A′, f), there exists a unique morphism h : A → A′ such that F(h) ∘ u = f.
Note that this commuting condition is exactly the one offered in defining a universal morphism from X to F. Therefore, we see that a universal morphism from X to F is equivalent to an initial object in the comma category (X ↓ F).
Conversely, recall that the comma category (F ↓ X) is the category where
Objects are pairs of the form (B, f : F(B) → X), where B is an object in D
A morphism from (B, f : F(B) → X) to (B′, f′ : F(B′) → X) is given by a morphism h : B → B′ in D such that f′ ∘ F(h) = f
Suppose (A, u) is a terminal object in (F ↓ X). Then for every object (A′, f) of (F ↓ X),
there exists a unique morphism h : A′ → A such that u ∘ F(h) = f.
This condition is exactly the one appearing in the definition of a universal morphism from F to X. Hence, a universal morphism from F to X corresponds with a terminal object in the comma category (F ↓ X).
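The two correspondences of this section can be summarized, in the notation used above, as:
\[
(A, u) \ \text{universal from } X \text{ to } F \iff (A, u) \ \text{initial in } (X \downarrow F),
\]
\[
(A, u) \ \text{universal from } F \text{ to } X \iff (A, u) \ \text{terminal in } (F \downarrow X).
\]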
Examples
Below are a few examples, to highlight the general idea. The reader can construct numerous other examples by consulting the articles mentioned in the introduction.
Tensor algebras
Let C be the category of vector spaces K-Vect over a field K and let D be the category of algebras K-Alg over K (assumed to be unital and associative). Let
U : K-Alg → K-Vect
be the forgetful functor which assigns to each algebra its underlying vector space.
Given any vector space V over K we can construct the tensor algebra T(V). The tensor algebra is characterized by the fact:
“Any linear map from V to an algebra A can be uniquely extended to an algebra homomorphism from T(V) to A.”
This statement is an initial property of the tensor algebra since it expresses the fact that the pair (T(V), i), where i : V → U(T(V)) is the inclusion map, is a universal morphism from the vector space V to the functor U.
Since this construction works for any vector space V, we conclude that T is a functor from K-Vect to K-Alg. This means that T is left adjoint to the forgetful functor U (see the section below on relation to adjoint functors).
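Written out in the notation just introduced (with i : V → U(T(V)) the inclusion), the universal property of the tensor algebra states:
\[
\forall\, f\colon V \to U(A)\ \text{ linear},\quad \exists!\ \varphi\colon T(V) \to A\ \text{ in } K\text{-Alg}\ \text{ such that }\ U(\varphi)\circ i = f.
\]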
Products
A categorical product can be characterized by a universal construction. For concreteness, one may consider the Cartesian product in Set, the direct product in Grp, or the product topology in Top, where products exist.
Let X and Y be objects of a category C with finite products. The product of X and Y is an object X × Y together with two morphisms
π₁ : X × Y → X
π₂ : X × Y → Y
such that for any other object Z of C and morphisms f : Z → X and g : Z → Y there exists a unique morphism h : Z → X × Y such that f = π₁ ∘ h and g = π₂ ∘ h.
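Equivalently, in symbols (with ⟨f, g⟩ a common notation for the induced morphism):
\[
\forall\, Z,\ \forall\, f\colon Z \to X,\ g\colon Z \to Y\quad \exists!\ h = \langle f, g\rangle\colon Z \to X \times Y\ \text{ such that }\ \pi_1 \circ h = f,\ \ \pi_2 \circ h = g.
\]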
To understand this characterization as a universal property, take the category to be the product category C × C and define the diagonal functor
Δ : C → C × C
by Δ(X) = (X, X) and Δ(f : X → X′) = (f, f). Then (X × Y, (π₁, π₂)) is a universal morphism from Δ to the object (X, Y) of C × C: if (f, g) is any morphism from Δ(Z) = (Z, Z) to (X, Y), then it must equal
a morphism Δ(h : Z → X × Y) = (h, h) from Δ(Z)
to Δ(X × Y) followed by (π₁, π₂). As a commuting condition: (π₁ ∘ h, π₂ ∘ h) = (f, g).
For the example of the Cartesian product in Set, the morphism (π₁, π₂) comprises the two projections π₁(x, y) = x and π₂(x, y) = y. Given any set Z and functions f : Z → X and g : Z → Y, the unique map h : Z → X × Y such that the required diagram commutes is given by h(z) = (f(z), g(z)).
Limits and colimits
Categorical products are a particular kind of limit in category theory. One can generalize the above example to arbitrary limits and colimits.
Let J and C be categories with J a small index category and let C^J be the corresponding functor category. The diagonal functor
Δ : C → C^J
is the functor that maps each object N in C to the constant functor Δ(N) : J → C (i.e. Δ(N)(X) = N for each object X in J and Δ(N)(f) = 1_N for each morphism f in J) and each morphism f : N → M in C to the natural transformation Δ(f) : Δ(N) → Δ(M) in C^J defined as, for every object X of J, the component
Δ(f)(X) = f : Δ(N)(X) → Δ(M)(X) at X. In other words, the natural transformation Δ(f) is the one defined by having constant component f for every object of J.
Given a functor F : J → C (thought of as an object in C^J), the limit of F, if it exists, is nothing but a universal morphism from Δ to F. Dually, the colimit of F is a universal morphism from F to Δ.
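Concretely, writing π : Δ(lim F) → F for the limiting cone, the universal property of the limit reads:
\[
\forall\, \bigl(N,\ \psi\colon \Delta(N) \to F\bigr)\quad \exists!\ h\colon N \to \lim F\ \text{ such that }\ \psi = \pi \circ \Delta(h).
\]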
Properties
Existence and uniqueness
Defining a quantity does not guarantee its existence. Given a functor F : D → C and an object X of C,
there may or may not exist a universal morphism from X to F. If, however, a universal morphism (A, u) does exist, then it is essentially unique.
Specifically, it is unique up to a unique isomorphism: if (A′, u′) is another such pair, then there exists a unique isomorphism
k : A → A′ such that u′ = F(k) ∘ u.
This is easily seen by substituting (A′, u′) into the definition of a universal morphism.
It is the pair (A, u) which is essentially unique in this fashion. The object A itself is only unique up to isomorphism. Indeed, if (A, u) is a universal morphism and k : A → A′ is any isomorphism, then the pair (A′, u′), where u′ = F(k) ∘ u, is also a universal morphism.
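A sketch of the uniqueness argument, using only the definitions above: applying the universal property of each pair to the morphism of the other yields
\[
\exists!\, k\colon A \to A' \ \text{ with } \ F(k)\circ u = u', \qquad
\exists!\, k'\colon A' \to A \ \text{ with } \ F(k')\circ u' = u,
\]
so that F(k′ ∘ k) ∘ u = u; since the identity 1_A also satisfies F(1_A) ∘ u = u, the uniqueness clause forces k′ ∘ k = 1_A, and symmetrically k ∘ k′ = 1_{A′}, so k is an isomorphism.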
Equivalent formulations
The definition of a universal morphism can be rephrased in a variety of ways. Let F : D → C be a functor and let X be an object of C. Then the following statements are equivalent:
(A, u : X → F(A)) is a universal morphism from X to F
(A, u) is an initial object of the comma category (X ↓ F)
(A, u) is a representation of the functor Hom_C(X, F(−)), where its components φ_B : Hom_D(A, B) → Hom_C(X, F(B)) are defined by
φ_B(f) = F(f) ∘ u for each object B in D
The dual statements are also equivalent:
(A, u : F(A) → X) is a universal morphism from F to X
(A, u) is a terminal object of the comma category (F ↓ X)
(A, u) is a representation of the functor Hom_C(F(−), X), where its components φ_B : Hom_D(B, A) → Hom_C(F(B), X) are defined by
φ_B(f) = u ∘ F(f) for each object B in D
Relation to adjoint functors
Suppose (A₁, u₁) is a universal morphism from X₁ to F and (A₂, u₂) is a universal morphism from X₂ to F.
By the universal property of universal morphisms, given any morphism h : X₁ → X₂ there exists a unique morphism g : A₁ → A₂ such that F(g) ∘ u₁ = u₂ ∘ h.
If every object X_i of C admits a universal morphism (A_i, u_i) to F, then the assignments X_i ↦ A_i and h ↦ g define a functor G : C → D. The maps u_i then define a natural transformation from 1_C (the identity functor on C) to F ∘ G. The functors (F, G) are then a pair of adjoint functors, with G left-adjoint to F and F right-adjoint to G.
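For reference, this construction packages into the familiar hom-set formulation of an adjunction; with the notation of this paragraph (G left-adjoint to F, and u_X the universal morphism at X), the universal property at each X gives a natural bijection:
\[
\operatorname{Hom}_D(G(X), A') \;\cong\; \operatorname{Hom}_C(X, F(A')), \qquad g \;\mapsto\; F(g) \circ u_X .
\]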
Similar statements apply to the dual situation of terminal morphisms from F. If such morphisms exist for every X in C, one obtains a functor G : C → D which is right-adjoint to F (so F is left-adjoint to G).
Indeed, all pairs of adjoint functors arise from universal constructions in this manner. Let F : D → C and G : C → D be a pair of adjoint functors, with G left-adjoint to F, and with unit η and co-unit ε
(see the article on adjoint functors for the definitions). Then we have a universal morphism for each object in C and D:
For each object X in C, (G(X), η_X) is a universal morphism from X to F. That is, for all f : X → F(A) there exists a unique g : G(X) → A for which F(g) ∘ η_X = f.
For each object A in D, (F(A), ε_A) is a universal morphism from G to A. That is, for all g : G(X) → A there exists a unique f : X → F(A) for which ε_A ∘ G(f) = g.
Universal constructions are more general than adjoint functor pairs: a universal construction is like an optimization problem; it gives rise to an adjoint pair if and only if this problem has a solution for every object of C (equivalently, every object of D).
History
Universal properties of various topological constructions were presented by Pierre Samuel in 1948. They were later used extensively by Bourbaki. The closely related concept of adjoint functors was introduced independently by Daniel Kan in 1958.
| Mathematics | Category theory | null |
32259 | https://en.wikipedia.org/wiki/Bladder | Bladder | The bladder () is a hollow organ in humans and other vertebrates that stores urine from the kidneys. In placental mammals, urine enters the bladder via the ureters and exits via the urethra during urination. In humans, the bladder is a distensible organ that sits on the pelvic floor. The typical adult human bladder will hold between 300 and (10 and ) before the urge to empty occurs, but can hold considerably more.
The Latin phrase for "urinary bladder" is vesica urinaria, and the term vesical or prefix vesico- appear in connection with associated structures such as vesical veins. The modern Latin word for "bladder" – cystis – appears in associated terms such as cystitis (inflammation of the bladder).
Structure
In humans, the bladder is a hollow muscular organ situated at the base of the pelvis. In gross anatomy, the bladder can be divided into a broad fundus (base), a body, an apex, and a neck. The apex (also called the vertex) is directed forward toward the upper part of the pubic symphysis, and from there the median umbilical ligament continues upward on the back of the anterior abdominal wall to the umbilicus. The peritoneum is carried by it from the apex on to the abdominal wall to form the middle umbilical fold. The neck of the bladder is the area at the base of the trigone that surrounds the internal urethral orifice that leads to the urethra. In males, the neck of the urinary bladder is next to the prostate gland.
The bladder has three openings. The two ureters enter the bladder at ureteric orifices, and the urethra enters at the trigone of the bladder. These ureteric openings have mucosal flaps in front of them that act as valves in preventing the backflow of urine into the ureters, known as vesicoureteral reflux. Between the two ureteric openings is a raised area of tissue called the interureteric crest. This makes the upper boundary of the trigone. The trigone is an area of smooth muscle that forms the floor of the bladder above the urethra. It is an area of smooth tissue for the easy flow of urine into and from this part of the bladder - in contrast to the irregular surface formed by the rugae.
The walls of the bladder have a series of ridges, thick mucosal folds known as rugae that allow for the expansion of the bladder. The detrusor muscle is the muscular layer of the wall made of smooth muscle fibers arranged in spiral, longitudinal, and circular bundles. The detrusor muscle is able to change its length. It can also contract for a long time whilst voiding, and it stays relaxed whilst the bladder is filling. The wall of the urinary bladder is normally 3–5 mm thick. When well distended, the wall is normally less than 3 mm.
Nearby structures
In males, the prostate gland lies outside the opening for the urethra. The middle lobe of the prostate causes an elevation in the mucous membrane behind the internal urethral orifice called the uvula of urinary bladder. The uvula can enlarge when the prostate becomes enlarged.
The bladder is located below the peritoneal cavity near the pelvic floor and behind the pubic symphysis. In males, it lies in front of the rectum, separated by the rectovesical pouch, and is supported by fibres of the levator ani and of the prostate gland. In females, it lies in front of the uterus, separated by the vesicouterine pouch, and is supported by the levator ani and the upper part of the vagina.
Blood and lymph supply
The bladder receives blood from the vesical arteries and is drained into a network of vesical veins. The superior vesical artery supplies blood to the upper part of the bladder. The lower part of the bladder is supplied by the inferior vesical artery, both of which are branches of the internal iliac arteries. In females, the uterine and vaginal arteries provide additional blood supply. Venous drainage begins in a network of small vessels on the lower surfaces of the bladder, which coalesce and travel with the lateral ligaments of the bladder into the internal iliac veins.
The lymph drained from the bladder begins in a series of networks throughout the mucosal, muscular and serosal layers. These then form three sets of vessels: one set near the trigone draining the bottom of the bladder; one set draining the top of the bladder; and another set draining the outer undersurface of the bladder. The majority of these vessels drain into the external iliac lymph nodes.
Nerve supply
The bladder receives both sensory and motor supply from the sympathetic and the parasympathetic nervous systems. The motor supply comes from both sympathetic fibers, most of which arise from the superior and inferior hypogastric plexuses and nerves, and from parasympathetic fibers, which come from the pelvic splanchnic nerves.
Sensation from the bladder, relating to distension or to irritation (such as by infection or a stone), is transmitted primarily through the parasympathetic nervous system. These signals travel via the sacral nerves to S2–S4. From there, sensation travels to the brain via the dorsal columns in the spinal cord.
Microanatomy
When viewed under a microscope, the bladder can be seen to have an inner lining (called epithelium), three layers of muscle fibres, and an outer adventitia.
The inner wall of the bladder is called urothelium, a type of transitional epithelium formed by three to six layers of cells; the cells may become more cuboidal or flatter depending on whether the bladder is empty or full. Additionally, these are lined with a mucous membrane consisting of a surface glycocalyx that protects the cells beneath it from urine. The epithelium lies on a thin basement membrane, and a lamina propria. The mucosal lining also offers a urothelial barrier against the passing of infections.
These layers are surrounded by three layers of muscle fibres arranged as an inner layer of fibres orientated longitudinally, a middle layer of circular fibres, and an outermost layer of longitudinal fibres; these form the detrusor muscle, which can be seen with the naked eye.
The outside of the bladder is protected by a serous membrane called adventitia.
Development
In the developing embryo, at the hind end lies a cloaca. This, over the fourth to the seventh week, divides into a urogenital sinus and the beginnings of the anal canal, with a wall forming between these two inpouchings called the urorectal septum. The urogenital sinus divides into three parts, with the upper and largest part becoming the bladder; the middle part becoming the urethra, and the lower part changes depending on the biological sex of the embryo.
The human bladder derives from the urogenital sinus, and it is initially continuous with the allantois. The upper and lower parts of the bladder develop separately and join around the middle part of development. At this time the ureters move from the mesonephric ducts to the trigone. In males, the base of the bladder lies between the rectum and the pubic symphysis. It is superior to the prostate, and separated from the rectum by the recto-vesical pouch. In females, the bladder sits inferior to the uterus and anterior to the vagina; thus its maximum capacity is lower than in males. It is separated from the uterus by the vesico-uterine pouch. In infants and young children the urinary bladder is in the abdomen even when empty.
Function
Urine is excreted by the kidneys and flows into the bladder through the ureters, where it is stored until urination (micturition). Urination involves coordinated muscle changes involving a reflex based in the spine, with higher inputs from the brain. During urination, the detrusor muscle contracts, the external urinary sphincter and muscles of the perineum relax, and urine flows through the urethra and exits the penis or vulva through the urinary meatus.
The urge to pass urine stems from stretch receptors that activate when between 300 and 400 ml of urine is held within the bladder. As urine accumulates, the rugae flatten and the wall of the bladder thins as it stretches, allowing the bladder to store larger amounts of urine without a significant rise in internal pressure. Urination is controlled by the pontine micturition center in the brainstem.
Stretch receptors in the bladder signal the parasympathetic nervous system to stimulate the muscarinic receptors in the detrusor to contract the muscle when the bladder is distended. This encourages the bladder to expel urine through the urethra. The main receptor activated is the M3 receptor, although M2 receptors are also involved and whilst outnumbering the M3 receptors they are not so responsive.
The main relaxant pathway is via the adenylyl cyclase cAMP pathway, activated via the β3 adrenergic receptors. The β2 adrenergic receptors are also present in the detrusor and even outnumber β3 receptors, but they do not have as important an effect in relaxing the detrusor smooth muscle.
Clinical significance
Inflammation and infection
Cystitis refers to infection or inflammation of the bladder. It commonly occurs as part of a urinary tract infection. In adults, it is more common in women than men, owing to women's shorter urethra. It is common in males during childhood, and in older men in whom an enlarged prostate may cause urinary retention. Other risk factors include other causes of blockage or narrowing, such as prostate cancer or the presence of vesico-ureteric reflux; the presence of outside structures in the urinary tract, such as urinary catheters; and neurologic problems that make passing urine difficult. Infections that involve the bladder can cause pain in the lower abdomen (above the pubic symphysis, so-called "suprapubic" pain), particularly before and after passing urine, and a desire to pass urine frequently and with little warning (urinary urgency). Infections are usually due to bacteria, of which the most common is E. coli.
When a urinary tract infection or cystitis is suspected, a medical practitioner may request a urine sample. A dipstick placed in the urine may be used to see if the urine has white blood cells, or the presence of nitrates which may indicate an infection. The urine specimen may be also sent for microbial culture and sensitivity to assess if a particular bacteria grows in the urine, and identify its antibiotic sensitivities. Sometimes, additional investigations may be requested. These might include testing the function of the kidneys by assessing electrolytes and creatinine; investigating for blockages or narrowing of the renal tract with an ultrasound, and testing for an enlarged prostate with a digital rectal examination.
Urinary tract infections or cystitis are treated with antibiotics, many of which are consumed by mouth. Serious infections may require treatment with intravenous antibiotics.
Interstitial cystitis refers to a condition in which the bladder is inflamed due to a cause other than bacterial infection.
Incontinence and retention
Frequent urination can be due to excessive urine production, small bladder capacity, irritability or incomplete emptying. Males with an enlarged prostate urinate more frequently. One definition of an overactive bladder is when a person urinates more than eight times per day. An overactive bladder can often cause urinary incontinence. Though both urinary frequency and volumes have been shown to have a circadian rhythm, meaning day and night cycles, it is not entirely clear how these are disturbed in the overactive bladder. Urodynamic testing can help to explain the symptoms. An underactive bladder is the condition where there is a difficulty in passing urine and is the main symptom of a neurogenic bladder. Frequent urination at night may indicate the presence of bladder stones.
Disorders of or related to the bladder include:
bladder exstrophy
bladder sphincter dyssynergia, a condition in which the sufferer cannot coordinate relaxation of the urethra sphincter with the contraction of the bladder muscles
paruresis
trigonitis
underactive bladder, a condition with its main symptom being urinary retention.
Disorders of bladder function may be dealt with surgically, by redirecting the flow of urine or by replacement with an artificial urinary bladder. The volume of the bladder may be increased by bladder augmentation. An obstruction of the bladder neck may be severe enough to warrant surgery. Ultrasound can be used to estimate bladder volumes.
Cancer
Cancer of the bladder is known as bladder cancer. It is usually due to cancer of the urothelium, the cells that line the surface of the bladder. Bladder cancer is more common after the age of 40, and more common in men than women; other risk factors include smoking and exposure to dyes such as aromatic amines and aldehydes. When cancer is present, the most common symptom in an affected person is blood in the urine; a physical medical examination may be otherwise normal, except in late disease. Bladder cancer is most often due to cancer of the cells lining the ureter, called transitional cell carcinoma, although it can more rarely occur as a squamous cell carcinoma if the type of cells lining the urethra have changed due to chronic inflammation, such as due to stones or schistosomiasis.
Investigations performed usually include collecting a sample of urine for an inspection for malignant cells under a microscope, called cytology, as well as medical imaging by a CT urogram or ultrasound. If a concerning lesion is seen, a flexible camera may be inserted into the bladder, called cystoscopy, in order to view the lesion and take a biopsy, and a CT scan will be performed of other body parts (a CT scan of the chest, abdomen and pelvis) to look for additional lesions.
Treatment depends on the cancer's stage. Cancer present only in the bladder may be removed surgically via cystoscopy; an injection of the chemotherapeutic mitomycin C may be performed at the same time. Cancers that are high grade may be treated with an injection of the BCG vaccine into the bladder wall, and may require surgical removal if they do not resolve. Cancer that is invading through the bladder wall may be managed by complete surgical removal of the bladder (radical cystectomy), with the ureters diverted into a segment of ileum connected to a stoma bag on the skin. Prognosis can vary markedly depending on the cancer's stage and grade, with a better prognosis associated with tumours that are found only in the bladder, are low grade, do not invade through the bladder wall, and are papillary in visual appearance.
Investigation
A number of investigations are used to examine the bladder. The investigations that are ordered will depend on the taking of a medical history and an examination. The examination may involve a medical practitioner feeling in the suprapubic area for tenderness or fullness that might indicate an inflamed or full bladder. Blood tests may be ordered that may indicate inflammation; for example a full blood count may demonstrate elevated white blood cells, or a C-reactive protein may be elevated in an infection.
Some forms of medical imaging exist to visualise the bladder. A bladder ultrasound may be conducted to view how much urine is within the bladder, indicating urinary retention. A urinary tract ultrasound, conducted by a more trained operator, may be conducted to view whether there are stones, tumours or sites of obstruction within the bladder and urinary tract. A CT scan may also be ordered.
A flexible internal camera, called a cystoscope, can be inserted to view the internal appearance of the bladder and take a biopsy if required.
Urodynamic testing can help to explain the symptoms.
Other animals
Mammals
All species of mammal have a urinary bladder. This structure begins as an embryonic cloaca. In the vast majority of species, it eventually becomes differentiated into a dorsal part, connected to the intestine, and a ventral part, associated with the urinogenital passage and urinary bladder. The only mammals in which this does not take place are the platypus and the spiny anteater, both of which retain the cloaca into adulthood.
The mammalian bladder is an organ that regularly stores a hyperosmotic concentration of urine. It therefore is relatively impermeable and has a multi-layer epithelium. The urinary bladders of cetaceans (whales and dolphins) are proportionally smaller than those of land-dwelling mammals.
Reptiles
In all reptiles, the urinogenital ducts and the rectum both empty into the organ called the cloaca. In some reptiles, a midventral wall in the cloaca opens into a urinary bladder. The urinary bladder exists in all species of turtle and tortoise and most species of lizard. Monitor lizards, the legless lizards, snakes, alligators, and crocodiles do not have urinary bladders.
Many turtles, tortoises, and lizards have proportionally very large bladders. Charles Darwin noted that the bladder of the Galapagos tortoise could store urine weighing up to 20% of the tortoise's body weight. Such adaptations are the result of environments, such as remote islands and deserts, where fresh water is very scarce. Other desert-dwelling reptiles have large bladders, which can hold long-term reserves of water for several months and aid in osmoregulation.
Turtles have two or more accessory urinary bladders, beside the neck of the urinary bladder and above the pubis, occupying much of the body cavity. Turtles' bladder is also usually divided into two lobes: the right lobe is under the liver, which prevents large stones from remaining in the lobe; the left lobe is likelier than the right to have calculi.
Amphibians
Most aquatic and semi-aquatic amphibians can absorb water directly through their skin. Some semi-aquatic animals also have similarly permeable bladder membranes. They tend to have high rates of urine production, to offset this high water intake; and the dissolved salts in their urine are highly dilute. The urinary bladder helps these animals to retain salts. Some aquatic amphibians, such as Xenopus, do not reabsorb water from their urine, to prevent excessive water influx. For land-dwelling amphibians, dehydration results in reduced urine output.
The amphibian bladder is usually highly distensible; among some land-dwelling species of frogs and salamanders, it may account for 20%–50% of total body weight. Urine flows from the kidneys through the ureters into the bladder and is periodically released from the bladder to the cloaca.
Fish
The gills of most teleost fish help to eliminate ammonia from the body, and fish live surrounded by water, but most still have a distinct bladder for storing waste fluid. The urinary bladder of teleosts is permeable to water, though this is less true for freshwater-dwelling species than for saltwater species. In freshwater fish, the bladder is a key site of absorption for many major ions; in marine fish, urine is held in the bladder for extended periods to maximise water absorption. The urinary bladders of fish and tetrapods are thought to be analogous, while the former's swim-bladders and the latter's lungs are considered homologous.
Most fish also have an organ called a swim-bladder which is unrelated to the urinary bladder except in its membranous nature. The loaches, pilchards, and herrings are among the few types of fish in which a urinary bladder is poorly developed. It is largest in those fish which lack an air bladder, and is situated in front of the oviducts and behind the rectum.
Birds
In nearly all bird species, there is no urinary bladder per se. Although all birds have kidneys, the ureters open directly into a cloaca which serves as a reservoir for urine, fecal matter, and eggs.
Crustaceans
Unlike the urinary bladder of vertebrates, the urinary bladder of crustaceans both stores and modifies urine. The bladder consists of two sets of lateral and central lobes. The central lobes sit near the digestive organs and the lateral lobes extend along the front and sides of the crustacean's body cavity. The tissue of the bladder is thin epithelium.
| Biology and health sciences | Urinary system | null |
32298 | https://en.wikipedia.org/wiki/USS%20Constitution | USS Constitution | USS Constitution, also known as Old Ironsides, is a three-masted wooden-hulled heavy frigate of the United States Navy. She is the world's oldest commissioned naval warship still afloat. She was launched in 1797, one of six original frigates authorized for construction by the Naval Act of 1794 and the third constructed. The name "Constitution" was among ten names submitted to President George Washington by Secretary of War Timothy Pickering in March of 1795 for the frigates that were to be constructed. Joshua Humphreys designed the frigates to be the young Navy's capital ships, and so Constitution and her sister ships were larger and more heavily armed and built than standard frigates of the period. She was built at Edmund Hartt's shipyard in the North End of Boston, Massachusetts. Her first duties were to provide protection for American merchant shipping during the Quasi-War with France and to defeat the Barbary pirates in the First Barbary War.
Constitution is most noted for her actions during the War of 1812 against the United Kingdom, when she captured numerous merchant ships and defeated five British warships: Guerriere, Java, Pictou, Cyane, and Levant. The battle with Guerriere earned her the nickname "Old Ironsides" and public adoration that has repeatedly saved her from scrapping. She continued to serve as flagship in the Mediterranean and African squadrons, and she circled the world in the 1840s. During the American Civil War, she served as a training ship for the United States Naval Academy. She carried American artwork and industrial displays to the Paris Exposition of 1878.
Constitution was retired from active service in 1881 and served as a receiving ship until being designated a museum ship in 1907. In 1934, she completed a three-year, 90-port tour of the nation. She sailed under her own power for her 200th birthday in 1997, and again in August 2012 to commemorate the 200th anniversary of her victory over Guerriere.
Constitution's stated mission today is to promote understanding of the Navy's role in war and peace through educational outreach, historical demonstration, and active participation in public events as part of the Naval History and Heritage Command. As she is a fully commissioned Navy ship, her crew of 75 officers and sailors participate in ceremonies, educational programs, and special events while keeping her open to visitors year-round and providing free tours. The officers and crew are all active-duty Navy personnel, and the assignment is considered to be special duty. She is usually berthed at Pier 1 of the former Charlestown Navy Yard, at one end of Boston's Freedom Trail.
Construction
In 1785, Barbary pirates, most notably from Algiers, began to seize American merchant vessels in the Mediterranean Sea. In 1793 alone, 11 American ships were captured and their crews and stores held for ransom. To combat this problem, proposals were made for warships to protect American shipping, resulting in the Naval Act of 1794. The act provided funds to construct six frigates, but it included a clause that the construction of the ships would be halted if peace terms were agreed to with Algiers.
Joshua Humphreys' design was unusual for the time, being deep, long on keel, narrow of beam (width), and mounting very heavy guns. The design called for diagonal riders intended to restrict hogging and sagging while giving the ships extremely heavy planking. This design gave the hull a greater strength than that of a more lightly built frigate. It was based on Humphreys' realization that the fledgling United States could not match the European states in the size of their navies, so the frigates were designed to overpower any other frigate while being able to escape from a ship of the line.
Her keel was laid down on 1 November 1794 at Edmund Hartt's shipyard in Boston, Massachusetts, under the supervision of Captain Samuel Nicholson, master shipwright Colonel George Claghorn and Foreman Prince Athearn of the Martha's Vineyard Athearns. Constitution's hull was built thick and her length between perpendiculars was , with a length overall and a width of . In total, of trees were needed for her construction. Primary materials consisted of pine and oak, including southern live oak which was cut from Gascoigne Bluff and milled near St. Simons Island, Georgia. Enslaved workers were used to harvest the oak used for the ship's construction, and USS Constitution Museum historian Carl Herzog stated that "the forced labor of enslaved people was an expediency that Navy officials and contractors saw as fundamental to the job... enslaved people were essential to the construction of naval warships built to secure the very American freedoms they were denied."
A peace accord was announced between the United States and Algiers in March 1796, and construction was halted in accordance with the Naval Act of 1794. After some debate and prompting by President Washington, Congress agreed to continue funding the construction of the three ships nearest to completion: United States, Constellation, and Constitution. Constitution's launching ceremony on 20 September 1797 was attended by President John Adams and Massachusetts Governor Increase Sumner. Upon launch, she slid only a short distance down the ways before stopping; her weight had caused the ways to settle into the ground, preventing further movement. An attempt two days later produced only a little additional travel before the ship again stopped. After a month of rebuilding the ways, Constitution finally slipped into Boston Harbor on 21 October 1797, with Captain James Sever breaking a bottle of Madeira wine on her bowsprit.
Armament
Constitution was rated as a 44-gun frigate, but she often carried more than 50 guns at a time. Ships of this era had no permanent battery of guns such as those of modern Navy ships. The guns and cannons were designed to be completely portable and often were exchanged between ships as situations warranted. Each commanding officer outfitted armaments to his liking, taking into consideration factors such as the overall weight of stores, complement of personnel aboard, and planned routes to be sailed. Consequently, the armaments on ships changed often during their careers, and records of the changes were not generally kept.
During the War of 1812, Constitution's battery of guns typically consisted of 30 long 24-pounder (11 kg) cannons, with 15 on each side of the gun deck. Twenty-two more guns were deployed on the spar deck, 11 per side, each a short 32-pounder (15 kg) carronade. Four chase guns were also positioned, two each at the stern and bow.
All of the guns aboard Constitution have been replicas since her 1927–1931 restoration. Most were cast in 1930, but two carronades on the spar deck were cast in 1983. A modern saluting gun was hidden inside the forward long gun on each side during her 1973–1976 restoration in order to restore the capability of firing ceremonial salutes.
Quasi-War
President John Adams ordered all Navy ships to sea in late May 1798 to patrol for armed French ships and to free any American ship captured by them. Constitution was still not ready to sail and eventually had to borrow sixteen 18-pound (8.2 kg) cannons from Castle Island before finally being ready. She put to sea on the evening of 22 July 1798 with orders to patrol the Eastern seaboard between New Hampshire and New York. She was patrolling between Chesapeake Bay and Savannah, Georgia, a month later when Nicholson found his first opportunity for capturing a prize. On 8 September, off the coast of Charleston, South Carolina, they intercepted Niger, a 24-gun ship sailing with a French crew en route from Jamaica to Philadelphia that claimed to be operating under the orders of Great Britain. Nicholson had the crewmen imprisoned, perhaps not understanding his orders correctly. He placed a prize crew aboard Niger and brought her into Norfolk, Virginia.
Constitution sailed south again a week later to escort a merchant convoy, but her bowsprit was severely damaged in a gale and she returned to Boston for repairs. In the meantime, Secretary of the Navy Benjamin Stoddert determined that Niger had been operating under the orders of Great Britain as claimed, and the ship and her crew were released to continue their voyage. The American government paid a restitution of $11,000 () to Great Britain.
Constitution departed Boston on 29 December. Nicholson reported to Commodore John Barry, who was flying his flag in United States near the island of Dominica for patrols in the West Indies. On 15 January 1799, Constitution intercepted the English merchantman Spencer, which had been taken prize by the French frigate L'Insurgente a few days prior. Technically, Spencer was a French ship operated by a French prize crew; but Nicholson released the ship and her crew the next morning, perhaps hesitant after the affair with Niger. Upon joining Barry's command, Constitution almost immediately had to put in for repairs to her rigging due to storm damage, and it was not until 1 March that anything of note occurred. On this date, she encountered , whose captain was an acquaintance of Nicholson's. The two agreed to a sailing duel, which the English captain was confident he would win. But after 11 hours of sailing, Santa Margarita lowered her sails and admitted defeat, paying off the bet with a cask of wine to Nicholson. Resuming her patrols, Constitution managed to recapture the American sloop Neutrality on 27 March. On 4 April 1799 she recaptured His Majesty's Packet Carteret that had been captured by the French on 29 March. Secretary Stoddert had other plans, however, and recalled Constitution to Boston. She arrived there on 14 May, and Nicholson was relieved of command.
Change of command
Captain Silas Talbot was recalled to duty to command Constitution and serve as Commodore of operations in the West Indies. After repairs and resupply were completed, Constitution departed Boston on 23 July with a destination of Saint-Domingue via Norfolk and a mission to interrupt French shipping. She departed Norfolk on 14 August. She recaptured the Hamburg ship Amelia from a French prize crew on 15 September, and Talbot sent the ship back to New York City with an American prize crew. The ship was sold, but the court ordered the money returned to her owners. Constitution arrived at Saint-Domingue on 15 October and rendezvoused with , , and . No further incidents occurred over the next six months, as French depredations in the area had declined. Constitution busied herself with routine patrols, and Talbot made diplomatic visits. On 2 February 1800, men were put aboard an unidentified American schooner, which was sent to New York for possible illegal trading. It was not until April 1800 that Talbot investigated an increase in ship traffic near Puerto Plata, Santo Domingo, and discovered that the French privateer Sandwich had taken refuge there. On 8 May the squadron captured the sloop Sally, and Talbot hatched a plan to capture Sandwich by using the familiar Sally to gain access to the harbor. On 9 May her tender Amphitheatre engaged a French privateer schooner that, after a short action, was run aground and abandoned by her crew. The privateer was captured and refloated, and her two prizes, the brig Nymph and the schooner Esther, were recaptured. First Lieutenant Isaac Hull led 90 sailors and Marines into Puerto Plata without challenge on 11 May, capturing Sandwich and spiking the guns of the nearby Spanish fort. However, it was later determined that Sandwich had been captured from a neutral port; she was returned to the French with apologies, and no prize money was awarded to the squadron.
Routine patrols again occupied Constitution for the next two months, until 13 July, when the mainmast trouble of a few months before recurred. She put into Cape François for repairs. While leaving the roads of Cape François on 22 July she struck a reef and was pulled off 45 minutes later. With the terms of enlistment soon to expire for the sailors aboard her, she made preparations to return to the United States and was relieved of duty by Constellation on 23 July. Constitution escorted 12 merchantmen to Philadelphia on her return voyage, and on 25 August arrived in President Roads, off Boston, and was put in quarantine. She received new masts, sails, and rigging. Even though peace was imminent between the United States and France, Constitution again sailed for the West Indies on 17 December as squadron flagship, rendezvousing with , , , , and . Although no longer allowed to pursue French shipping, the squadron was assigned to protect American shipping and continued in that capacity until April 1801, when arrived with orders for the squadron to return to the United States. Constitution returned to Boston. Captain Talbot resigned his commission on 8 September 1801, and Lieutenant Isaac Hull was ordered to take command in a letter dated 21 September 1801. She was finally scheduled for an overhaul, and Captain Samuel Nicholson was ordered to supervise the work in a letter dated 1 April 1802. The overhaul was canceled in a letter dated 18 June, with the crew ordered discharged, and Nicholson was relieved by her sailing master, Nathaniel Harden. She was placed in ordinary on 2 July 1802.
First Barbary War
The United States paid tribute to the Barbary States during the Quasi-War to ensure that American merchant ships were not harassed and seized. In 1801, Yusuf Karamanli of Tripoli was dissatisfied that the United States was paying him less than they paid Algiers, and he demanded an immediate payment of $250,000 (). In response, Thomas Jefferson sent a squadron of frigates to protect American merchant ships in the Mediterranean and to pursue peace with the Barbary States.
The first squadron under the command of Richard Dale in was instructed to escort merchant ships through the Mediterranean and to negotiate with leaders of the Barbary States. A second squadron was assembled under the command of Richard Valentine Morris in . The performance of Morris's squadron was so poor, however, that he was recalled and subsequently dismissed from the Navy in 1803.
Captain Edward Preble was ordered, in a letter dated 14 May 1803, to take command of Constitution as his flagship, and he made preparations to command a new squadron for a third blockade attempt. She was recommissioned on 20 May. The copper sheathing on her hull needed to be replaced, and Paul Revere supplied the copper sheets necessary for the job, which took 14 days and ended on 25 June. She departed Boston on 14 August, and she encountered an unknown ship in the darkness on 6 September, near the Rock of Gibraltar. Constitution went to general quarters, then ran alongside the unknown ship. Preble hailed her, only to receive a hail in return. He identified his ship as the United States frigate Constitution but received an evasive answer from the other ship. Preble replied: "I am now going to hail you for the last time. If a proper answer is not returned, I will fire a shot into you." The stranger returned, "If you give me a shot, I'll give you a broadside." Preble demanded that the other ship identify herself and the stranger replied, "This is His Britannic Majesty's ship Donegal, 84 guns, Sir Richard Strachan, an English commodore." He then commanded Preble, "Send your boat on board." Preble was now devoid of all patience and exclaimed, "This is United States ship Constitution, 44 guns, Edward Preble, an American commodore, who will be damned before he sends his boat on board of any vessel." And then to his gun crews: "Blow your matches, boys!" Before the incident escalated further, however, a boat arrived from the other ship and a British lieutenant relayed his captain's apologies. The ship was in fact not Donegal but instead HMS Maidstone, a 32-gun frigate. Constitution had come alongside her so quietly that Maidstone had delayed answering with the proper hail while she readied her guns. This act began the strong allegiance between Preble and the officers under his command, known as "Preble's boys", as he had shown that he was willing to defy a presumed ship of the line.
Constitution arrived at Gibraltar on 12 September, where Preble waited for the other ships of the squadron. His first order of business was to arrange a treaty with Sultan Slimane of Morocco, who was holding American ships hostage to ensure the return of two vessels that the Americans had captured. Constitution departed Gibraltar on 3 October in company with another warship of the squadron and arrived at Tangier on the 4th; Adams and a second warship arrived the next day. With four American warships in his harbor, the Sultan was glad to arrange the transfer of ships between the two nations, and Preble departed with his squadron on 14 October, heading back to Gibraltar.
Battle of Tripoli Harbor
Philadelphia ran aground off Tripoli on 31 October under the command of William Bainbridge while pursuing a Tripoline vessel. The crew was taken prisoner; Philadelphia was refloated by the Tripolines and brought into their harbor. To deprive the Tripolines of their prize, Preble planned to destroy Philadelphia using the captured ship Mastico, which was renamed Intrepid. Intrepid entered Tripoli Harbor on 16 February 1804 under the command of Stephen Decatur, disguised as a merchant ship. Decatur's crew quickly overpowered the Tripoline crew and set Philadelphia ablaze.
Preble withdrew the squadron to Syracuse, Sicily, and began planning for a summer attack on Tripoli. He procured a number of smaller gunboats that could move in closer to Tripoli than was feasible for Constitution, given her deep draft. Constitution, the other ships of the squadron, the six gunboats, and two bomb ketches arrived the morning of 3 August and immediately began operations. Twenty-two Tripoline gunboats met them in the harbor; Constitution and her squadron severely damaged or destroyed the Tripoline gunboats in a series of attacks over the coming month, taking their crews prisoner. Constitution primarily provided gunfire support, bombarding the shore batteries of Tripoli, yet Karamanli remained firm in his demand for ransom and tribute despite his losses.
Preble outfitted Intrepid as a "floating volcano", loading her with gunpowder for a final attempt of the season. She was to sail into Tripoli harbor and blow up in the midst of the corsair fleet, close under the walls of the city. Intrepid made her way into the harbor on the evening of 3 September under the command of Richard Somers, but she exploded prematurely, killing Somers and his entire crew of thirteen volunteers.
Constellation and President arrived at Tripoli on the 9th with Samuel Barron in command; Preble was forced to relinquish his command of the squadron to Barron, who was senior in rank. Constitution was ordered to Malta on the 11th for repairs and, while en route, captured two Greek vessels attempting to deliver wheat into Tripoli. On the 12th, a collision with President severely damaged Constitution's bow, stern, and figurehead of Hercules. The collision was attributed to an act of God in the form of a sudden change in wind direction.
Peace treaty
Captain John Rodgers assumed command of Constitution on 9 November 1804 while she underwent repairs and resupply in Malta. She resumed the blockade of Tripoli on 5 April 1805, capturing a Tripoline xebec along with two prizes that the xebec had captured. Meanwhile, Commodore Barron gave William Eaton naval support to bombard Derne, while a detachment of US Marines under the command of Presley O'Bannon was assembled to attack the city by land. They captured it on 27 April. A peace treaty with Tripoli was signed aboard Constitution on 3 June, after which she embarked the crew members of Philadelphia and returned them to Syracuse. She was then dispatched to Tunis and arrived there on 30 July. Seventeen additional American warships had gathered in its harbor by 1 August, among them Congress, Constellation, Enterprise, Nautilus, and Syren, together with eight gunboats. Negotiations went on for several days until a short-term blockade of the harbor finally produced a peace treaty on 14 August.
Rodgers remained in command of the squadron, sending warships back to the United States when they were no longer needed. Eventually, all that remained were Constitution, Enterprise, and Hornet. They performed routine patrols and observed the French and Royal Navy operations of the Napoleonic Wars. Rodgers turned over the command of the squadron and Constitution to Captain Hugh G. Campbell on 29 May 1806.
James Barron sailed Chesapeake out of Norfolk on 15 May 1807 to replace Constitution as the flagship of the Mediterranean squadron, but he encountered HMS Leopard, resulting in the Chesapeake–Leopard affair and delaying the relief of Constitution. Constitution continued her patrols, unaware of the delay. She arrived in late June at Leghorn, where she took aboard the disassembled Tripoli Monument for transport back to the United States. Campbell learned the fate of Chesapeake when he arrived at Málaga, and he immediately began preparing Constitution and Hornet for possible war against Britain. The crew became mutinous upon learning of the delay in their relief and refused to sail any farther unless the destination was the United States. Campbell and his officers threatened to fire a cannon loaded with grapeshot at the crewmen if they did not comply, putting an end to the conflict. Campbell and the squadron were ordered home on 18 August and set sail for Boston on 8 September, arriving there on 14 October. Constitution had been gone for more than four years.
War of 1812
Constitution was recommissioned in December with Captain John Rodgers again taking command to oversee a major refitting. She was overhauled at a cost of just under $100,000; however, Rodgers inexplicably failed to clean her copper sheathing, leading him to later declare her slow. She spent most of the following two years on training runs and ordinary duty. Isaac Hull took command in June 1810 and immediately recognized that she needed her bottom cleaned; "ten loads" of barnacles and seaweed were removed.
Hull departed for France on 5 August 1811, transporting the new Ambassador Joel Barlow and his family; they arrived on 1 September. Hull remained near France and the Netherlands through the winter months, continually holding sail and gun drills to keep the crew ready for possible hostilities with the British. Tensions were high between the United States and Britain after the events of the Little Belt affair the previous May, and Constitution was shadowed by British frigates while awaiting dispatches from Barlow to carry back to the United States. They arrived home on 18 February 1812.
War was declared on 18 June, and Hull put to sea on 12 July, attempting to join the five ships of a squadron under the command of Rodgers in President. He sighted five ships off Egg Harbor, New Jersey, on 17 July and at first believed them to be Rodgers' squadron, but by the following morning the lookouts had determined that they were a five-ship British squadron out of Halifax. They had sighted Constitution and were giving chase.
Constitution was becalmed and unable to run from the five British ships, but Hull acted on a suggestion from his first lieutenant, Charles Morris. He ordered the crew to put boats over the side to tow the ship out of range, using kedge anchors to draw the ship forward and wetting the sails to take advantage of every breath of wind. The British ships soon imitated the tactic of kedging and remained in pursuit. The resulting 57-hour chase in the July heat forced the crew of Constitution to employ myriad tactics to outrun the squadron, finally pumping a portion of her drinking water overboard. Cannon fire was exchanged several times, though the British attempts fell short or overshot their mark, including an attempted broadside from the frigate Belvidera. On 19 July, Constitution pulled far enough ahead of the British that they abandoned the pursuit.
Constitution arrived in Boston on 27 July and remained there just long enough to replenish her supplies. Hull sailed without orders on 2 August to avoid being blockaded in port, heading on a northeast route towards the British shipping lanes near Halifax and the Gulf of Saint Lawrence. Constitution captured three British merchantmen, which Hull burned rather than risk taking them back to an American port. On 16 August, he learned of a British frigate to the south and sailed in pursuit.
Constitution vs. Guerriere
A frigate was sighted on 19 August and subsequently determined to be the 38-gun HMS Guerriere, with the words "Not The Little Belt" painted on her foretopsail. Guerriere opened fire upon entering range of Constitution, doing little damage. After a few exchanges of cannon fire between the ships, Captain Hull maneuvered Constitution into an advantageous position at close range of Guerriere. He then ordered a full double-loaded broadside of grape and round shot, which took out Guerriere's mizzenmast. Guerriere's maneuverability decreased with her mizzenmast dragging in the water, and she collided with Constitution, entangling her bowsprit in Constitution's mizzen rigging. This left only Guerriere's bow guns capable of effective fire. Hull's cabin caught fire from the shots, but the fire was quickly extinguished. With the ships locked together, both captains ordered boarding parties into action, but the sea was heavy and neither party was able to board the opposing ship.
At one point, the two ships rotated together counter-clockwise, with Constitution continuing to fire broadsides. When the two ships pulled apart, the force of the bowsprit's extraction sent shock waves through Guerriere's rigging. Her foremast collapsed, and that brought the mainmast down shortly afterward. Guerriere was now a dismasted, unmanageable hulk with close to a third of her crew wounded or killed, while Constitution remained largely intact. The British surrendered.
Hull had surprised the British with his heavier broadsides and his ship's sailing ability. Adding to their astonishment, many of the British shots had rebounded harmlessly off Constitution's hull. An American sailor reportedly exclaimed "Huzzah! Her sides are made of iron!", and Constitution acquired the nickname "Old Ironsides".
The battle left Guerriere so badly damaged that she was not worth towing to port, and Hull ordered her to be burned the next morning, after transferring the British prisoners onto Constitution. Constitution arrived back in Boston on 30 August, where Hull and his crew found that news of their victory had spread fast, and they were hailed as heroes.
Constitution vs. Java
William Bainbridge, senior to Hull, took command of "Old Ironsides" on 8 September and prepared her for another mission in the British shipping lanes near Brazil, sailing with Hornet on 27 October. They arrived near São Salvador on 13 December, sighting Bonne Citoyenne in the harbor. Bonne Citoyenne was reportedly carrying $1.6 million in specie to England, and her captain refused to leave the neutral harbor lest he lose his cargo. Constitution sailed offshore in search of prizes, leaving Hornet to await the departure of Bonne Citoyenne. On 29 December, she met HMS Java under Captain Henry Lambert. At the initial hail from Bainbridge, Java answered with a broadside that severely damaged Constitution's rigging. She was able to recover, however, and returned a series of broadsides to Java. A shot from Java destroyed Constitution's helm (wheel), so Bainbridge directed the crew to steer her manually using the tiller for the remainder of the engagement. Bainbridge was wounded twice during the battle. Java's bowsprit became entangled in Constitution's rigging, as in the battle with Guerriere, allowing Bainbridge to continue raking her with broadsides. Java's foremast collapsed, sending her fighting top crashing down through two decks below.
Bainbridge drew off to make emergency repairs and re-approached Java an hour later. She was a shambles, an unmanageable wreck with a badly wounded crew, and she surrendered. Bainbridge determined that Java was far too damaged to retain as a prize and ordered her burned, but not before having her helm salvaged and installed on Constitution. Constitution returned to São Salvador on 1 January 1813 to disembark the prisoners of Java, where she met Hornet and her two British prizes. As Constitution was far from a friendly port and needed extensive repairs, Bainbridge ordered her to sail for Boston on 5 January, leaving Hornet behind to continue waiting for Bonne Citoyenne in the hopes that she would leave the harbor (she did not). Java was the third British warship in three months to be captured by the United States, and Constitution's victory prompted the British Admiralty to order its frigates not to engage the heavier American frigates one-on-one; only British ships of the line or squadrons were permitted to come close enough to attack. Constitution arrived in Boston on 15 February to even greater celebrations than Hull had received a few months earlier.
Marblehead and blockade
Bainbridge determined that Constitution required new spar deck planking and beams, masts, sails, and rigging, as well as replacement of her copper bottom. However, personnel and supplies were being diverted to the Great Lakes, causing shortages that kept her in Boston intermittently with her sister ships Chesapeake, Congress, and President for the majority of the year. Charles Stewart took command on 18 July and struggled to complete the repairs and recruit a new crew, finally making sail on 31 December. She set course for the West Indies to harass British shipping and had captured five merchant ships and a 14-gun warship by late March 1814. She also pursued HMS Pique and another British warship, though both escaped after realizing that she was an American frigate.
Her mainmast split off the coast of Bermuda on 27 March, requiring immediate repair. Stewart set a course for Boston, and two British ships commenced pursuit on 3 April. Stewart ordered drinking water and food to be cast overboard to lighten her load and gain speed, trusting that her mainmast would hold together long enough for her to make her way into Marblehead, Massachusetts. The last item thrown overboard was the supply of spirits. Upon Constitution's arrival in the harbor, the citizens of Marblehead rallied in support, assembling what cannons they possessed at Fort Sewall, and the British called off the pursuit. Two weeks later, Constitution made her way into Boston, where she remained blockaded in port until mid-December.
HMS Cyane and HMS Levant
Captain George Collier of the Royal Navy received command of the 50-gun Leander and was sent to North America to deal with the American frigates that were causing such losses to British shipping. Meanwhile, Charles Stewart saw his chance to escape from Boston Harbor and made it good on the afternoon of 18 December, and Constitution again set course for Bermuda. Collier gathered a squadron consisting of Leander and two other warships and set off in pursuit, but he was unable to overtake her. On 24 December, Constitution intercepted the merchantman Lord Nelson and placed a prize crew aboard. Constitution had left Boston not fully supplied, but Lord Nelson's stores provided a Christmas dinner for the crew.
Constitution was cruising off Cape Finisterre on 8 February 1815 when Stewart learned that the Treaty of Ghent had been signed. He realized, however, that a state of war still existed until the treaty was ratified, and Constitution captured the British merchantman Susanna on 16 February; her cargo of animal hides was valued at $75,000.
On 20 February, Constitution sighted the small British ships Cyane and Levant sailing in company and gave chase. Cyane and Levant began a series of broadsides against her, but Stewart outmaneuvered both of them and forced Levant to draw off for repairs. He concentrated fire on Cyane, which soon struck her colors. Levant returned to engage Constitution but turned and attempted to escape when her crew saw that Cyane had been defeated. Constitution overtook her and, after several more broadsides, she too struck her colors. Stewart remained with his new prizes overnight while ordering repairs to all three ships. Constitution had suffered little damage in the battle, though it was later discovered that she had twelve 32-pound British cannonballs embedded in her hull, none of which had penetrated. The trio then set a course for the Cape Verde Islands and arrived at Porto Praya on 10 March.
The next morning, Collier's squadron was spotted on a course for the harbor, and Stewart ordered all ships to sail immediately; he had been unaware until then of Collier's pursuit. Cyane was able to elude the squadron and make sail for America, where she arrived on 10 April, but Levant was overtaken and recaptured. Collier's squadron was distracted with Levant while Constitution made another escape from overwhelming forces.
Constitution set a course towards Guinea and then west towards Brazil; Stewart had learned from the capture of Susanna that a British ship was transporting gold bullion back to England, and he wanted her as a prize. Constitution put into Maranhão on 2 April to offload her British prisoners and replenish her drinking water. While there, Stewart learned by rumor that the Treaty of Ghent had been ratified, and he set course for America, receiving verification of peace at San Juan, Puerto Rico, on 28 April. He then set course for New York and arrived home on 15 May to large celebrations. Constitution emerged from the war undefeated, though her sister ships Chesapeake and President were not so fortunate, having been captured in 1813 and 1815 respectively. Constitution was moved to Boston and placed in ordinary in January 1816, sitting out the Second Barbary War.
Mediterranean Squadron
Charlestown Navy Yard's commandant Isaac Hull directed a refitting of Constitution in April 1820 to prepare her for duty with the Mediterranean Squadron. The yard removed Joshua Humphreys' diagonal riders to make room for two iron freshwater tanks and replaced the copper sheathing and timbers below the waterline. At the direction of Secretary of the Navy Smith Thompson, she was also subjected to an unusual experiment in which manually operated paddle wheels were fitted to her hull. The paddle wheels, driven by the crew at the ship's capstan, were designed to propel her if she was ever becalmed. Initial testing was successful, but Hull and Constitution's commanding officer Jacob Jones were reportedly unimpressed with paddle wheels on a US Navy ship. Jones had them removed and stowed in the cargo hold before he departed on 13 May 1821 for a three-year tour of duty in the Mediterranean. On 12 April 1823, she collided with the British merchant ship Bicton in the Mediterranean Sea, and Bicton sank with the loss of her captain.
Constitution otherwise experienced an uneventful tour, sailing in company with and , until crew behavior during shore leave gave Jones a reputation as a commodore who was lax in discipline. The Navy grew weary of receiving complaints about the crews' antics while in port and ordered Jones to return. Constitution arrived in Boston on 31 May 1824, and Jones was relieved of command. Thomas Macdonough took command and sailed on 29 October for the Mediterranean under the direction of John Rodgers in . With discipline restored, Constitution resumed uneventful duty. Macdonough resigned his command for health reasons on 9 October 1825. Constitution put in for repairs during December and into January 1826, until Daniel Todd Patterson assumed command on 21 February. By August, she had been put into Port Mahon, suffering decay of her spar deck, and she remained there until temporary repairs were completed in March 1827. Constitution returned to Boston on 4 July 1828 and was placed in reserve.
Old Ironsides
Constitution was built in an era when a ship's expected service life was 10 to 15 years. Secretary of the Navy John Branch made a routine order for surveys of ships in the reserve fleet, and commandant of the Charlestown Navy Yard Charles Morris estimated a repair cost of over $157,000 for Constitution. On 14 September 1830, an article appeared in the Boston Advertiser which erroneously claimed that the Navy intended to scrap Constitution. Two days later, Oliver Wendell Holmes' poem "Old Ironsides" was published in the same paper and later all over the country, igniting public indignation and inciting efforts to save "Old Ironsides" from the scrap yard. Secretary Branch approved the costs, and Constitution began a leisurely repair period while awaiting completion of the dry dock then under construction at the yard. In contrast to the efforts to save Constitution, another round of surveys in 1834 found her sister ship Congress unfit for repair; she was unceremoniously broken up in 1835.
On 24 June 1833, Constitution entered dry dock. Captain Jesse Elliott, the new commander of the Navy yard, oversaw her reconstruction. Constitution's keel had hogged, and she remained in dry dock until 21 June 1834. This was the first of many times that souvenirs were made from her old planking; Isaac Hull ordered walking canes, picture frames, and even a phaeton, which was presented to President Andrew Jackson.
Meanwhile, Elliott directed the installation of a new figurehead of President Jackson under the bowsprit, which became a subject of much controversy due to Jackson's political unpopularity in Boston at the time. Elliott was a Jacksonian Democrat, and he received death threats. Rumors circulated that citizens of Boston would storm the navy yard to remove the figurehead themselves.
A merchant captain named Samuel Dewey accepted a small wager that he could remove the figurehead. Elliott had posted guards on Constitution to ensure its safety, but Dewey crossed the Charles River in a small boat, using the noise of thunderstorms to mask his movements, and managed to saw off most of Jackson's head. The severed head made the rounds of taverns and meeting houses in Boston until Dewey personally returned it to Secretary of the Navy Mahlon Dickerson; it remained on Dickerson's library shelf for many years. The busts of Isaac Hull, William Bainbridge, and Charles Stewart added to her stern escaped controversy of any kind and remained in place for the next 40 years.
Mediterranean and Pacific Squadrons
Elliott was appointed captain of Constitution and got underway in March 1835 for New York, where he ordered repairs to the Jackson figurehead, avoiding a second round of controversy. Departing on 16 March, Constitution set a course for France to deliver Edward Livingston to his post as Minister. She arrived on 10 April and began the return voyage on 16 May. She arrived back in Boston on 23 June, then sailed on 19 August to take her station as flagship in the Mediterranean, arriving at Port Mahon on 19 September. Her duty over the next two years was uneventful as she and United States made routine patrols and diplomatic visits. From April 1837 into February 1838, Elliott collected various ancient artifacts to carry back to America, adding assorted livestock during the return voyage. Constitution arrived in Norfolk on 31 July. Elliott was later suspended from duty for transporting livestock on a Navy ship.
As the flagship of the Pacific Squadron under the command of Captain Daniel Turner, she began her next voyage on 1 March 1839 with the duty of patrolling the western coast of South America. Often spending months in one port or another, she visited Valparaíso, Callao, Paita, and Puna while her crew amused themselves with the beaches and taverns in each locality. The return voyage found her at Rio de Janeiro, where Emperor Pedro II of Brazil visited her on or about 29 August 1841. Departing Rio, she returned to Norfolk on 31 October. On 22 June 1842, she was recommissioned under the command of Foxhall Alexander Parker for duty with the Home Squadron. After spending months in port, she put to sea for three weeks during December, then was again placed in ordinary.
Around the world
In late 1843, she was moored at Norfolk, serving as a receiving ship. Naval Constructor Foster Rhodes calculated that it would require $70,000 to make her seaworthy. Acting Secretary David Henshaw faced a dilemma. His budget could not support such a cost, yet he could not allow the country's favorite ship to deteriorate. He turned to Captain John Percival, known in the service as "Mad Jack". The captain traveled to Virginia and conducted his own survey of the ship's needs. He reported that the necessary repairs and upgrades could be done at a cost of $10,000. On 6 November, Henshaw told Percival to proceed without delay, but stay within his projected figure. After several months of labor, Percival reported Constitution ready for "a two or even a three-year cruise."
She got underway on 29 May 1844 carrying the Ambassador to Brazil, Henry A. Wise, and his family, arriving at Rio de Janeiro on 2 August after making two port visits along the way. She sailed again on 8 September, making port calls at Madagascar, Mozambique, and Zanzibar, and arriving at Sumatra on 1 January 1845. Many of her crew began to suffer from dysentery and fevers, causing several deaths, which led Percival to set course for Singapore, arriving there on 8 February. While in Singapore, Commodore Henry Ducie Chads of HMS Cambrian paid a visit to Constitution, offering what medical assistance his squadron could provide; Chads had been the lieutenant of Java when she surrendered to William Bainbridge 33 years earlier. Formal relations between the United States and Brunei began on 6 April when, with the ship anchored in Brunei Bay, a Treaty of Peace, Friendship, Commerce and Navigation was concluded.
Leaving Singapore, Constitution arrived at Turon, Cochinchina (present-day Da Nang, Vietnam), on 10 May. Not long after, Percival was informed that French missionary Dominique Lefèbvre was being held captive under sentence of death. He went ashore with a squad of Marines to speak with the local Mandarin. Percival demanded the return of Lefèbvre and took three local leaders hostage to ensure that his demands were met. When no communication was forthcoming, he ordered the capture of three junks, which were brought to Constitution. He released the hostages after two days, attempting to show good faith towards the Mandarin, who had demanded their return. During a storm, the three junks escaped upriver; a detachment of Marines pursued and recaptured them. The supply of food and water from shore was stopped, and Percival gave in to another demand for the release of the junks in order to keep his ship supplied, expecting Lefèbvre to be released. He soon realized that no return would be made, however, and Percival ordered Constitution to depart on 26 May.
She arrived at Canton, China, on 20 June and spent the next six weeks there, while Percival made shore and diplomatic visits. Again the crew suffered from dysentery due to poor drinking water, resulting in three more deaths by the time that she reached Manila on 18 September, spending a week there preparing to enter the Pacific Ocean. She then sailed on 28 September for the Hawaiian Islands, arriving at Honolulu on 16 November. She found Commodore John D. Sloat and his flagship there; Sloat informed Percival that Constitution was needed in Mexico, as the United States was preparing for war after the Texas annexation. She provisioned for six months and sailed for Mazatlán, arriving there on 13 January 1846. She sat at anchor for more than three months until she was finally allowed to sail for home on 22 April, rounding Cape Horn on 4 July. Arriving in Rio de Janeiro, the ship's party learned that the Mexican War had begun on 13 May, soon after their departure from Mazatlán. She arrived home in Boston on 27 September and was mothballed on 5 October.
Mediterranean and African Squadrons
Constitution began a refitting in 1847 for duty with the Mediterranean Squadron. The figurehead of Andrew Jackson that caused so much controversy 15 years earlier was replaced with another likeness of Jackson, this time without a top hat and with a more Napoleonic pose. Captain John Gwinn commanded her on this voyage, departing on 9 December 1848 and arriving at Tripoli on 19 January 1849. She received King Ferdinand II and Pope Pius IX on board at Gaeta on 1 August, giving them a 21-gun salute. This was the first time that a Pope set foot on American territory or its equivalent.
At Palermo on 1 September, Captain Gwinn died of chronic gastritis and was buried near the Lazaretto on the 9th. Captain Thomas Conover assumed command on the 18th and resumed routine patrolling for the rest of the tour, heading home on 1 December 1850. On the voyage, she was involved in a severe collision with the English brig Confidence, cutting her in half; Confidence sank with the loss of her captain. The surviving crew members were carried back to America, where Constitution was put in ordinary once again, this time at the Brooklyn Navy Yard, in January 1851.
Constitution was recommissioned on 22 December 1852 under the command of John Rudd. She carried Commodore Isaac Mayo for duty with the African Squadron, departing the yard on 2 March 1853 on a leisurely sail towards Africa and arriving there on 18 June. Mayo made a diplomatic visit to Liberia, arranging a treaty between the Gbarbo and Grebo tribes; he resorted to firing cannons into the village of the Gbarbo in order to get them to agree to the treaty. Around 22 June 1854, he arranged another peace treaty, between the leaders of Grahway and Half Cavally, and on 31 July 1854 he arranged a compact with the King of Lagos.
Constitution took the American ship H.N. Gambrill as a prize near Angola on 3 November. H.N. Gambrill was involved in the slave trade and proved to be Constitution's final capture. The rest of her tour passed uneventfully, and she sailed for home on 31 March 1855. She was diverted to Havana, Cuba, arriving there on 16 May and departing on the 24th. She arrived at Portsmouth Navy Yard and was decommissioned on 14 June, ending her last duty on the front lines.
Civil War
Since the formation of the US Naval Academy in 1845, there had been a growing need for quarters in which to house the students (midshipmen). In 1857, Constitution was moved to dry dock at the Portsmouth Navy Yard for conversion into a training ship. Some of the earliest known photographs of her were taken during this refitting, which added classrooms on her spar and gun decks and reduced her armament to only 16 guns. Her rating was changed to a "2nd rate ship". She was recommissioned on 1 August 1860 and moved from Portsmouth to the Naval Academy.
At the outbreak of the Civil War in April 1861, Constitution was ordered to relocate farther north after threats had been made against her by Confederate sympathizers. Several companies of Massachusetts volunteer soldiers were stationed aboard for her protection. She was towed to New York City, arriving on 29 April. She was subsequently relocated, along with the Naval Academy, to Fort Adams in Newport, Rhode Island, for the duration of the war. Her sister ship United States was abandoned by the Union and then captured by Confederate forces at the Gosport Shipyard, leaving Constitution the only remaining frigate of the original six.
The Navy launched an ironclad on 10 May 1862 for service with the South Atlantic Blockading Squadron and named her New Ironsides to honor Constitution's tradition of service. New Ironsides' naval career was short, however: she was destroyed by fire on 16 December 1865. In August 1865, Constitution moved back to Annapolis, along with the rest of the Naval Academy. During the voyage, she was allowed to drop her tow lines from the tug and continue alone under wind power; despite her age, she arrived at Hampton Roads ten hours ahead of the tug. She also carried paroled Union prisoners of war: one soldier's account records that men released from Andersonville were transported to Jacksonville, Florida, then aboard Constitution to "Camp Parole" in Annapolis, Maryland, where they were issued rations, clothing, and back pay before being sent to their respective regimental headquarters for discharge.
As Constitution settled in again at the Academy, a series of upgrades was installed that included steam pipes and radiators to supply heat from shore, along with gas lighting. From June to August each year, she would depart with midshipmen for their summer training cruise and then return to operate for the rest of the year as a classroom. In June 1867, her last known plank owner William Bryant died in Maine. George Dewey assumed command in November; he served as her commanding officer until 1870. In 1871, her condition had deteriorated to the point where she was retired as a training ship, and then towed to the Philadelphia Navy Yard, where she was placed in ordinary on 26 September.
Paris Exposition
Constitution was overhauled beginning in 1873 in order to participate in the centennial celebrations of the United States. Work began slowly and was intermittently delayed by the transition of the Philadelphia Navy Yard to League Island. By late 1875, the Navy opened bids for an outside contractor to complete the work, and Constitution was moved to Wood, Dialogue, and Company in May 1876, where a coal bin and a small boiler for heat were installed. The Andrew Jackson figurehead was removed at this time and given to the Naval Academy Museum, where it remains today. The work dragged on through the rest of 1876, until the centennial celebrations had long passed, and the Navy decided that she would be used as a training and school ship for apprentices.
Oscar C. Badger took command on 9 January 1878 to prepare her for a voyage to the Paris Exposition of 1878, transporting artwork and industrial displays to France. Three railroad cars were lashed to her spar deck and all but two cannons were removed when she departed on 4 March. While docking at Le Havre, she collided with Ville de Paris, which resulted in Constitution entering dry dock for repairs and remaining in France for the rest of 1878. She got underway for the United States on 16 January 1879, but poor navigation ran her aground the next day near Bollard Head, Dorset, United Kingdom. She was refloated with the assistance of the tugs Commodore, Lightning, Lothair, Royal Albert, Malta and Telegraph. She was towed into the Portsmouth Naval Dockyard, Hampshire, England, where only minor damage was found and repaired.
Her problem-plagued voyage continued on 13 February when her rudder was damaged during heavy storms, resulting in a total loss of steering control, with the rudder smashing into the hull at random. Three crewmen went over the stern on ropes and boatswain's chairs and secured it. The next morning, they rigged a temporary steering system. Badger set a course for the nearest port, and she arrived in Lisbon on 18 February. Slow dock services delayed her departure until 11 April and her voyage home did not end until 24 May. Carpenter's Mate Henry Williams, Captain of the Top Joseph Matthews, and Captain of the Top James Horton received the Medal of Honor for their actions in repairing the damaged rudder at sea. Constitution returned to her previous duties of training apprentice boys, and Ship's Corporal James Thayer received a Medal of Honor for saving a fellow crew member from drowning on 16 November.
Over the next two years, she continued her training cruises, but it soon became apparent that her overhaul in 1876 had been of poor quality, and in 1881 she was determined to be unfit for service. Funds were lacking for another overhaul, so she was decommissioned, ending her days as an active-duty naval ship. She was moved to the Portsmouth Navy Yard and used as a receiving ship. There, she had a housing structure built over her spar deck, and her condition continued to deteriorate, with only a minimal amount of maintenance performed to keep her afloat. In 1896, Massachusetts Congressman John F. Fitzgerald became aware of her condition and proposed to Congress that funds be appropriated to restore her enough to return to Boston. She arrived at the Charlestown Navy Yard under tow on 21 September 1897 and, after her centennial celebrations in October, she lay there with an uncertain future.
Museum ship
In 1900, Congress authorized the restoration of Constitution but did not appropriate any funds for the project; funding was to be raised privately. The Massachusetts Society of the United Daughters of the War of 1812 spearheaded an effort to raise funds, but they ultimately failed. In 1903, the Massachusetts Historical Society's president Charles Francis Adams requested of Congress that Constitution be rehabilitated and placed back into active service.
In 1905, Secretary of the Navy Charles Joseph Bonaparte suggested that Constitution be towed out to sea and used as target practice, after which she would be allowed to sink. Moses H. Gulesian, a businessman from Worcester, Massachusetts, read about this in a Boston newspaper and offered to purchase her for $10,000. The State Department refused, but Gulesian initiated a public campaign which began in Boston and ultimately "spilled all over the country." The storm of public protest prompted Congress to authorize $100,000 in 1906 for the ship's restoration. First to be removed was the barracks structure on her spar deck, but the limited funds allowed only a partial restoration. By 1907, Constitution began to serve as a museum ship, with tours offered to the public. On 1 December 1917, she was renamed Old Constitution to free her name for a planned new battlecruiser. The name Constitution was originally destined for the lead ship of the class but was shuffled between hulls until CC-5 was given the name; construction of CC-5 was canceled in 1923 due to the Washington Naval Treaty. The incomplete hull was sold for scrap, and Old Constitution was granted the return of her name on 24 July 1925.
1925 restoration and tour
Admiral Edward Walter Eberle, Chief of Naval Operations, ordered the Board of Inspection and Survey to compile a report on her condition, and the inspection of 19 February 1924 found her in grave condition. Water had to be pumped out of her hold every day just to keep her afloat, and her stern was in danger of falling off. Almost all deck areas and structural components were filled with rot, and she was considered to be on the verge of ruin. Yet the Board recommended that she be thoroughly repaired in order to preserve her as long as possible. The estimated cost of repairs was $400,000. Secretary of the Navy Curtis D. Wilbur proposed to Congress that the required funds be raised privately, and he was authorized to assemble the committee charged with her restoration.
The first effort was sponsored by the national Elks Lodge. Programs presented to schoolchildren about "Old Ironsides" encouraged them to donate pennies towards her restoration, eventually raising $148,000. In the meantime, the estimates for repair began to climb, eventually reaching over $745,000 once the costs of materials were realized. In September 1926, Wilbur began to sell copies of a painting of Constitution at 50 cents per copy. The silent film Old Ironsides, which portrayed Constitution during the First Barbary War, premiered in December and helped spur more contributions to her restoration fund. A final campaign allowed memorabilia to be made of her discarded planking and metal. The committee eventually raised more than $600,000 after expenses, still short of the required amount, and Congress approved up to $300,000 to complete the restoration. The final cost of the restoration was $946,000.
Lieutenant John A. Lord was selected to oversee the reconstruction project, and work began while fund-raising efforts were still underway. Materials were difficult to find, especially the live oak needed: Lord uncovered a long-forgotten stash of live oak at Naval Air Station Pensacola, Florida, that had been cut sometime in the 1850s for a ship-building program that never began. Constitution entered dry dock before a crowd of 10,000 observers on 16 June 1927. Meanwhile, Charles Francis Adams had been appointed Secretary of the Navy, and he proposed that Constitution make a tour of the United States upon her completion, as a gift to the nation for its efforts to help restore her. She emerged from dry dock on 15 March 1930; approximately 85 percent of the ship had been "renewed" (i.e. replaced) to make her seaworthy. Many amenities were installed to prepare her for the three-year tour of the country, including water piping throughout, modern toilet and shower facilities, electric lighting to make the interior visible for visitors, and several peloruses for ease of navigation. New rigging was made for Constitution at the Charlestown Navy Yard ropewalk.
Constitution was recommissioned on 1 July 1931 under the command of Louis J. Gulliver with a crew of 60 officers and sailors, 15 Marines, and a pet monkey named Rosie as their mascot. The tour began at Portsmouth, New Hampshire, with much celebration and a 21-gun salute, and was scheduled to visit 90 port cities along the Atlantic, Gulf, and Pacific coasts. Because of the heavy itinerary, she was towed by a Navy minesweeper. She went as far north as Bar Harbor, Maine, south into the Gulf of Mexico, then through the Panama Canal Zone, and north again to Bellingham, Washington, on the Pacific coast. Constitution returned to her home port of Boston in May 1934 after more than 4.6 million people had visited her during the three-year tour.
1934 return to Boston
Constitution returned to serving as a museum ship, receiving 100,000 visitors per year in Boston. She was maintained by a small crew who were berthed aboard, which required more reliable heating; the heating was upgraded to a forced-air system in the 1950s, and a sprinkler system was added to protect her from fire. Constitution broke loose from her dock on 21 September 1938 during the New England Hurricane and was blown into Boston Harbor, where she collided with a destroyer; she suffered only minor damage.
With limited funds available, she experienced more deterioration over the years, and items began to disappear from the ship as souvenir hunters picked away at the more portable objects. Constitution and the frigate Constellation were recommissioned in 1940 at the request of President Franklin Roosevelt.
In early 1941, Constitution was assigned the hull classification symbol IX-21 and began to serve as a brig for officers awaiting court-martial.
The United States Postal Service issued a stamp commemorating Constitution in 1947, and an Act of Congress in 1954 made the Secretary of the Navy responsible for her upkeep.
Restoration
In 1970, another survey was performed on her condition, finding that repairs were required, though not as extensively as in the 1920s. The US Navy determined that her commanding officer should hold the rank of commander, typically an officer with about 20 years of seniority, ensuring the experience needed to organize the maintenance she required. Funds were approved in 1972 for her restoration, and she entered dry dock in April 1973, remaining until April 1974. During this period, large quantities of red oak were removed and replaced. The red oak had been added in the 1950s as an experiment to see if it would be more durable than the live oak, but it had mostly rotted away by 1970.
Bicentennial celebrations
Commander Tyrone G. Martin became her captain in August 1974, as preparations began for the upcoming United States Bicentennial celebrations. He set the precedent that all construction work on Constitution was to be aimed towards maintaining her to the 1812 configuration for which she is most noted. In September 1975, her hull classification of IX-21 was officially canceled.
The privately run USS Constitution Museum opened on 8 April 1976, and one month later Commander Martin dedicated a tract of land at the Naval Surface Warfare Center in Indiana as "Constitution Grove". The grove's trees now supply the majority of the white oak required for repair work. On 10 July, Constitution led the parade of tall ships up Boston Harbor for Operation Sail, firing her guns at one-minute intervals for the first time in approximately 100 years. On 11 July, she rendered a 21-gun salute to Her Majesty's Yacht Britannia as Queen Elizabeth II and Prince Philip arrived for a state visit. The royal couple were piped aboard and privately toured the ship for approximately 30 minutes with Commander Martin and Secretary of the Navy J. William Middendorf. Upon their departure, the crew of Constitution rendered three cheers for the Queen. Over 900,000 visitors toured "Old Ironsides" that year.
1995 reconstruction
Constitution entered dry dock in 1992 for an inspection and minor repair period that turned out to be her most comprehensive structural restoration and repair since she was launched in 1797. Multiple refittings over the 200 years of her career had removed most of her original construction components and design, as her mission changed from a fighting warship to a training ship and eventually to a receiving ship. In 1993, the Naval History & Heritage Command Detachment Boston reviewed Humphreys' original plans and identified five main structural components that were required to prevent hogging of the hull, as Constitution had developed a pronounced hog by that point. Using a 1:16 scale model of the ship, they were able to determine that restoring the original components would result in a 10% increase in hull stiffness.
Three hundred radiographic scans were completed on her timbers to find any hidden problems undetectable from the outside, technology that was unavailable during previous reconstructions. The repair crew used sound-wave testing, aided by the United States Forest Service's Forest Products Laboratory, to determine the condition of the remaining timbers that might have been rotting from the inside. The hog was removed from her keel by allowing the ship to settle naturally while in dry dock. The most difficult task was the procurement of timber in the quantity and sizes needed, as had also been the case during her 1920s restoration. The city of Charleston, South Carolina, donated live-oak trees that had been felled by Hurricane Hugo in 1989, and the International Paper Company donated live oak from its own property. The project continued to reconstruct her to 1812 specifications, even as she remained open to visitors, who were allowed to observe the process and converse with workers. The $12 million project was completed in 1995.
Sailing on 200th anniversary
As early as 1991, Commander David Cashman had suggested that Constitution should sail to celebrate her 200th anniversary in 1997 rather than being towed. The proposal was approved, though it was thought to be a large undertaking since she had not sailed in over 100 years. When she emerged from dry dock in 1995, a more serious effort began to prepare her for sail. As in the 1920s, education programs aimed at school children helped collect pennies to purchase the sails to make the voyage possible. Her six-sail battle configuration consisted of jibs, topsails, and driver.
Commander Mike Beck began training the crew for the historic sail using an 1819 Navy sailing manual and several months of practice, including time spent aboard the Coast Guard cutter Eagle. On 20 July, Constitution was towed from her usual berth in Boston to an overnight mooring in Marblehead, Massachusetts. En route, she made her first sail in 116 years.
On 21 July, she was towed offshore, where the tow line was dropped and Commander Beck ordered six sails set (jibs, topsails, and spanker). She then sailed unassisted for 40 minutes on a south-south-east course. Her modern US Navy escorts, a guided-missile destroyer and a frigate, rendered passing honors to "Old Ironsides" while she was under sail, and she was overflown by the US Navy Flight Demonstration Squadron, the Blue Angels. Inbound to her permanent berth at Charlestown, she rendered a 21-gun salute to the nation off Fort Independence in Boston Harbor.
Present day
The mission of Constitution is to promote understanding of the Navy's role in war and peace through active participation in public events and education through outreach programs, public access, and historic demonstration. Her crew of approximately 75 US Navy sailors participate in ceremonies, educational programs, and special events while keeping the ship open to visitors year-round and providing free tours. The crewmen are all active-duty members of the US Navy, and the assignment is considered to be special duty. She entered dry dock in May 2015 for a scheduled restoration, before returning to sea.
Constitution is berthed at Pier One of the former Charlestown Navy Yard, at the terminus of Boston's Freedom Trail. She is open to the public year-round. The privately run USS Constitution Museum is nearby, located in a restored shipyard building at the foot of Pier Two. Constitution typically makes at least one "turnaround cruise" each year, during which she is towed into Boston Harbor to perform underway demonstrations, including a gun drill; she then returns to her dock in the opposite direction to ensure that she weathers evenly. The "turnaround cruise" is open to the general public based on a "lottery draw" of interested persons each year.
The Naval History and Heritage Command Detachment Boston is responsible for planning and performing her maintenance, repair, and restoration, keeping her as close as possible to her 1812 configuration. The detachment estimates that approximately 10–15 percent of the timber in Constitution contains original material installed during her initial construction period in the years 1795–1797. The Navy maintains Constitution Grove at Naval Surface Warfare Center Crane Division near Bloomington, Indiana to ensure a supply of mature white oak.
In 2003, the special effects crew from the production of Master and Commander: The Far Side of the World spent several days using Constitution as a computer model for the fictional French frigate Acheron, using stem-to-stern digital image scans. Lieutenant Commander John Scivier of the Royal Navy, commanding officer of , paid a visit to Constitution in November 2007, touring the local facilities with Commander William A. Bullard III. They discussed arranging an exchange program between the two ships.
Constitution emerged from a three-year repair period in November 2010. During this time, the entire spar deck was stripped down to the support beams, and the decking overhead was replaced to restore its original curvature, allowing water to drain overboard rather than remain standing on the deck. In addition to the decking repairs, 50 hull planks and the main hatch were repaired or replaced. The restoration kept its focus on her appearance of 1812, replacing her upper sides so that she now resembles her configuration after the triumph over Guerriere that earned her the nickname "Old Ironsides". The crew under Commander Matt Bonner sailed Constitution under her own power on 19 August 2012, the anniversary of her victory over Guerriere. Bonner was Constitution's 72nd commanding officer.
On 18 May 2015, the ship entered Dry Dock 1 in Charlestown Navy Yard to begin a two-year restoration program, planned to renew the copper sheets on the ship's hull and replace deck boards. The Department of the Navy provided the expected cost of $12–15 million. After the restoration was complete, she was returned to the water on 23 July 2017. In November 2017, in a ceremony held on board Constitution, Commander Nathaniel R. Shick relieved Commander Robert S. Gerosa Jr., most of whose command had passed while the ship was in dry dock, to become the ship's 75th commanding officer.
On 29 February 2020, Shick was succeeded as commanding officer by Commander John Benda.
On 17 January 2022, Billie J. Farrell became the first woman to command Constitution.
Image gallery
Commanders
Since she was first launched in 1797, there have been 77 commanders of Constitution.
| Technology | Specific seacraft | null |
32308 | https://en.wikipedia.org/wiki/United%20States%20customary%20units | United States customary units | United States customary units form a system of measurement units commonly used in the United States and most U.S. territories since being standardized and adopted in 1832. The United States customary system developed from English units that were in use in the British Empire before the U.S. became an independent country. The United Kingdom's system of measures evolved by 1824 to create the imperial system (with imperial units), which was officially adopted in 1826, changing the definitions of some of its units. Consequently, while many U.S. units are essentially similar to their imperial counterparts, there are noticeable differences between the systems.
The majority of U.S. customary units were redefined in terms of the meter and kilogram with the Mendenhall Order of 1893 and, in practice, for many years before. These definitions were refined by the international yard and pound agreement of 1959.
The United States uses customary units in commercial activities, as well as for personal and social use. In science, medicine, many sectors of industry, and some government and military areas, metric units are used. The International System of Units (SI), the modern form of the metric system, is preferred for many uses by the U.S. National Institute of Standards and Technology (NIST). For newer types of measurement where there is no traditional customary unit, international units are used, sometimes mixed with customary units: for example, electrical resistance of wire expressed in ohms (SI) per thousand feet.
History
The United States customary system of units of 1832 is based on the system in use in the United Kingdom prior to the introduction to the British imperial system on January 1, 1826. Both systems are derived from English units, an older system of units which had evolved over the millennia before American independence, and which had its roots in both Roman and Anglo-Saxon units.
The customary system was championed by the U.S.-based International Institute for Preserving and Perfecting Weights and Measures in the late 19th century. Some advocates of the customary system saw the French Revolutionary, or metric, system as atheistic. The president of an Ohio auxiliary of the Institute wrote that the traditional units were "a just weight and a just measure, which alone are acceptable to the Lord". His organization later went so far as to publish music for a song proclaiming "down with every 'metric' scheme".
The U.S. government passed the Metric Conversion Act of 1975, which made the metric system "the preferred system of weights and measures for U.S. trade and commerce". The legislation states that the federal government has a responsibility to assist industry as it voluntarily converts to the metric system, i.e., metrication. This is most evident in U.S. labeling requirements on food products, where SI units are almost always presented alongside customary units. According to the CIA World Factbook, the United States is one of three nations (along with Liberia and Myanmar (Burma)) that have not adopted the metric system as their official system of weights and measures.
Executive Order 12770, signed by President George H. W. Bush on July 25, 1991, citing the Metric Conversion Act, directed departments and agencies within the executive branch of the United States Government to "take all appropriate measures within their authority" to use the metric system "as the preferred system of weights and measures for United States trade and commerce" and authorized the Secretary of Commerce "to charter an Interagency Council on Metric Policy ('ICMP'), which will assist the Secretary in coordinating Federal Government-wide implementation of this order."
U.S. customary units are widely used on consumer products and in industrial manufacturing. Metric units are standard in the fields of science, medicine, and engineering, as well as many sectors of industry and government, including the military. There are anecdotal objections to the use of metric units in carpentry and the building trades, on the basis that it is easier to remember an integer number of inches plus a fraction than a measurement in millimeters, that foot-inch measurements are more suitable when distances are frequently divided into halves, thirds, and quarters, often in parallel, and that the metric system lacks a unit comparable in size to the foot.
The term "United States customary units" was used by the former United States National Bureau of Standards, although "English units" is sometimes used in colloquial speech.
Length
For measuring length, the U.S. customary system uses the inch, foot, yard, and mile, which are the only four customary length measurements in everyday use. From 1893, the foot was legally defined as exactly 1200/3937 m (approximately 0.3048006 m). Since July 1, 1959, the units of length have been defined on the basis of 1 yard = 0.9144 m. The U.S., the United Kingdom and other Commonwealth countries agreed on this definition per the International Yard and Pound Agreement of 1958. At the time of the agreement, the basic geodetic datum in North America was the North American Datum of 1927 (NAD27), which had been constructed by triangulation based on the definition of the foot in the Mendenhall Order of 1893, that is 1 foot = 1200/3937 m: this definition was retained for data derived from NAD27, but renamed the US survey foot to distinguish it from the international foot. For most applications, the difference between the two definitions is insignificant – one international foot is exactly 0.999998 of a US survey foot, for a difference of about 3.2 mm (1/8 inch) per mile – but it affects the definition of the State Plane Coordinate Systems (SPCSs), which can stretch over hundreds of miles.
The NAD27 was replaced in the 1980s by the North American Datum of 1983 (NAD83), which is defined in meters. The SPCSs were also updated, but the U.S. National Geodetic Survey left the decision of which (if any) definition of the foot to use to the individual states (and other jurisdictions). All SPCS 1983 systems are defined in meters, but forty jurisdictions also use the survey foot, six use the international foot, and ten do not specify which, if any, foot type should be used.
In 2019, the NIST, working with the National Geodetic Survey (NGS), National Ocean Service (NOS), National Oceanic and Atmospheric Administration (NOAA) and Department of Commerce (DOC), issued a Federal Register Notice (FRN) indicating the deprecation of the U.S. survey foot and U.S. survey mile units from December 31, 2022.
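A minimal sketch (not from the source; Python) of the roughly two-parts-per-million difference between the two definitions of the foot discussed above:
 # International foot (1959 agreement) vs. US survey foot (Mendenhall Order).
 INTERNATIONAL_FOOT_M = 0.3048   # exact by definition
 SURVEY_FOOT_M = 1200 / 3937     # exact ratio, approximately 0.30480061 m
 feet_per_mile = 5280
 diff_mm_per_mile = (SURVEY_FOOT_M - INTERNATIONAL_FOOT_M) * feet_per_mile * 1000
 print(f"difference per mile: {diff_mm_per_mile:.2f} mm")  # about 3.2 mm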
In the following tables in this and subsequent sections, the most common measures are shown in italics, and approximate values are shown in parentheses; values not in parentheses are exact.
International units
{| class="wikitable"
|+List of international units
! Unit !! Name !! Divisions !! SI equivalent
|-
|
|twip
|
| ()
|-
|
|
|
|
|-
|
|point
|
|
|-
|
|pica
|
|
|-
|
|inch
|
|
|-
|
|foot
|
|
|-
|
|yard
|
|
|-
|
|mile
|
|
|-
|
| league
|
|
|-
|}
International nautical units
US survey units
Note that, as announced by the National Institute of Standards and Technology, the US survey foot and other units defined in terms of it have been deprecated since 2023, "except for historic and legacy applications".
Area
The most widely used area unit with a name unrelated to any length unit is the acre. The National Institute of Standards and Technology formerly contended that customary area units are defined in terms of the square survey foot, not the square international foot, but from 2023 it states that "although historically defined using the U.S. survey foot, the statute mile can be defined using either definition of the foot, as is the case for all other units listed in this table. However, use of definitions based on the U.S. survey foot should be avoided after December 31, 2022 except for historic and legacy applications."
Volume
The cubic inch, cubic foot and cubic yard are commonly used for measuring volume. In addition, there is one group of units for measuring volumes of liquids (based on the wine gallon and subdivisions of the fluid ounce), and one for measuring volumes of dry material, each with their own names and sub-units.
Although the units and their names are similar to those in the imperial system, and many units are shared between the two systems as a whole, the volume units differ substantially. The differences in values arose because the U.S. became independent of the British Empire decades before the British reform of units in 1824, so the gallon, its subdivisions, and (in mass) the combinations above the pound diverged.
As a non-participant in that reform, the U.S. retained the separate systems for measuring the volumes of liquids and dry material, whereas the imperial system had unified the units for both under a new imperial gallon. The U.S. uses the pre-1824 wine gallon (231 cubic inches, about 3.785 L) and the Winchester bushel (2,150.42 cubic inches, about 35.24 L), as opposed to the British 1824 definitions of the gallon as the volume of 10 pounds of water (about 4.546 L) and the bushel as 8 imperial gallons.
Fluid volume
One US fluid ounce is 1/16 of a US pint, 1/32 of a US quart, and 1/128 of a US gallon. The teaspoon, tablespoon, and cup are defined in terms of a fluid ounce as 1/6, 1/2, and 8 fluid ounces respectively. The fluid ounce derives its name originally from being the volume of one ounce avoirdupois of water, but in the US it is defined as 1/128 of a US gallon. Consequently, a fluid ounce of water weighs about 1.041 ounces avoirdupois.
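A minimal sketch (not from the source; Python) expressing the customary fluid units above as exact fluid ounces and converting them to milliliters; the 29.5735295625 mL value follows from the 231-cubic-inch gallon:
 US_FLUID_OUNCE_ML = 29.5735295625  # 1/128 of the 3.785411784 L US gallon
 FLUID_UNITS_IN_FL_OZ = {
     "teaspoon": 1 / 6, "tablespoon": 1 / 2, "cup": 8,
     "pint": 16, "quart": 32, "gallon": 128,
 }
 for unit, fl_oz in FLUID_UNITS_IN_FL_OZ.items():
     print(f"1 {unit} = {fl_oz} fl oz = {fl_oz * US_FLUID_OUNCE_ML:.2f} mL")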
For nutritional labeling and medicine in the US, the teaspoon and tablespoon are defined as a metric teaspoon and tablespoon, precisely 5 mL and 15 mL respectively.
The saying, "a pint's a pound the world around", refers to 16 US fluid ounces of water weighing approximately (about 4% more than) one pound avoirdupois. An imperial pint of water weighs a pound and a quarter ().
There are varying standards for barrel for some specific commodities, including 31 gallons for beer, 40 gallons for whiskey or kerosene, and 42 gallons for petroleum. The general standard for liquids is 31.5 gal or half a hogshead. The common 55-gallon size of drum for storing and transporting various products and wastes is sometimes confused with a barrel, though it is not a standard measure.
In the U.S., single servings of beverages are usually measured in fluid ounces. Milk is usually sold in half-pints (8 fluid ounces), pints, quarts, half gallons, and gallons. Water volume for sinks, bathtubs, ponds, swimming pools, etc., is usually stated in gallons or cubic feet. Quantities of gases are usually given in cubic feet (at one atmosphere).
Minims, drams, gills, and pottles are rarely used currently. The gill is often referred to as a "half-cup". The pottle is often referred to as a "half-gallon".
Dry volume
Dry volume is measured on a separate system, although many of the names remain the same. Small fruits and vegetables are often sold in dry pints and dry quarts. The US dry gallon is less commonly used, and was not included in the handbook that many states recognize as the authority on measurement law (Summary of State Laws and Regulations in Weights and Measures, 2005, National Institute of Standards and Technology). However, pecks and bushels are sometimes used—particularly for grapes, apples and similar fruits in agricultural regions.
Mass and weight
There have historically been five different English systems of mass: tower, apothecaries', troy, avoirdupois, and metric. Of these, the avoirdupois weight is the most common system used in the U.S., although Troy weight is still used to weigh precious metals. Apothecaries' weight—once used by pharmacies—has been largely replaced by metric measurements. Tower weight fell out of use in England (due to legal prohibition in 1527) centuries ago, and was never used in the U.S. The imperial system, which is still used for some measures in the United Kingdom and other countries, is based on avoirdupois, with variations from U.S. customary units larger than a pound.
The pound avoirdupois, which forms the basis of the U.S. customary system of mass, is defined as exactly 453.59237 grams by agreement between the U.S., the United Kingdom, and other English-speaking countries in 1959. Other units of mass are defined in terms of it.
The avoirdupois pound is legally defined as a measure of mass, but the name pound is also applied to measures of force. For instance, in many contexts, the pound avoirdupois is used as a unit of mass, but in some contexts, the term "pound" is used to refer to "pound-force". The slug is another unit of mass derived from pound-force.
Troy weight, avoirdupois weight, and apothecaries' weight are all built from the same basic unit, the grain, which is the same in all three systems. However, while each system has some overlap in the names of their units of measure (all have ounces and pounds), the relationship between the grain and these other units within each system varies. For example, in apothecary and troy weight, the pound and ounce are the same, but are different from the pound and ounce in avoirdupois in terms of their relationships to grains and to each other. The systems also have different units between the grain and ounce (apothecaries' has scruple and dram, troy has pennyweight, and avoirdupois has just dram, sometimes spelled drachm). The dram in avoirdupois weighs just under half of the dram in apothecaries'. The fluid dram unit of volume is based on the weight of 1 dram of water in the apothecaries' system.
To alleviate confusion, it is typical when publishing non-avoirdupois weights to mention the name of the system along with the unit. Precious metals, for example, are often weighed in "troy ounces", because just "ounce" would be more likely to be assumed to mean an avoirdupois ounce.
For the pound and smaller units, the U.S. customary system and the British imperial system are identical. However, they differ when dealing with units larger than the pound. The definition of the pound avoirdupois in the imperial system is identical to that in the U.S. customary system.
In the U.S., only the ounce, pound, and short ton – known in the country simply as the ton – are commonly used, though the hundredweight is still used in agriculture and shipping. The grain is used to describe the mass of propellant and projectiles in small arms ammunition. It was also used to measure medicine and other very small masses.
Grain measures
In agricultural practice, a bushel is a fixed volume of 2,150.42 cubic inches (35.24 L). The mass of grain will therefore vary according to density. Some nominal weight examples, used in the sketch after this list, are:
1 bushel (corn) = 56 lb (25.4 kg)
1 bushel (wheat) = 60 lb (27.2 kg)
1 bushel (barley) = 48 lb (21.8 kg)
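A minimal sketch (not from the source; Python) of the bushel-to-mass conversion implied by these nominal weights:
 NOMINAL_BUSHEL_LB = {"corn": 56, "wheat": 60, "barley": 48}  # lb per bushel
 def bushels_to_pounds(commodity: str, bushels: float) -> float:
     # Nominal mass only; actual mass varies with grain density and moisture.
     return NOMINAL_BUSHEL_LB[commodity] * bushels
 print(bushels_to_pounds("wheat", 1000))  # 60000 lb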
Cooking measures
The most common practical cooking measures for both liquid and dry ingredients in the U.S. are teaspoon, tablespoon, and cup, along with halves, thirds, quarters, and eighths of each. Units used are pounds, ounces, and fluid ounces. Common sizes are also used, such as can (presumed size varies depending on product), jar, square (e.g. of chocolate), stick (e.g. of butter), or portion of fruit or vegetable (e.g. a half lemon, two medium onions).
Temperature
Degrees Fahrenheit are used in the U.S. to measure temperatures in most non-scientific contexts. The Rankine scale of absolute temperature also saw some use in thermodynamics. Scientists worldwide use the kelvin and degree Celsius. Several U.S. technical standards are expressed in Fahrenheit temperatures, and some American medical practitioners use degrees Fahrenheit for body temperature.
The relationship between the different temperature scales is linear but the scales have different zero points, so conversion is not simply multiplication by a factor. Pure water freezes at 32 °F and boils at 212 °F at 1 atm. The conversion formula is:
T(°C) = (T(°F) − 32) × 5/9
or inversely as
T(°F) = T(°C) × 9/5 + 32
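A minimal sketch (not from the source; Python) of the two conversion formulas:
 def fahrenheit_to_celsius(f: float) -> float:
     return (f - 32) * 5 / 9
 def celsius_to_fahrenheit(c: float) -> float:
     return c * 9 / 5 + 32
 assert fahrenheit_to_celsius(32) == 0.0     # water freezes
 assert celsius_to_fahrenheit(100) == 212.0  # water boils at 1 atm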
Other units
Length
1 hand = 4 in = 101.6 mm
1 U (rack unit) = 1.75 in = 44.45 mm
Volume
1 board-foot = 1 ft × 1 ft × 1 in = 144 cu in ≈ 2.36 L
Mass
1 slug = 1 lbf⋅s²/ft ≈ 14.59390 kg
Force
1 poundal = force to accelerate 1 pound mass 1 foot/second/second ≈ 0.1383 N.
1 kip = 1000 lbf ≈ 4.448 kN
Energy
1 foot-pound ≈ 1.356 J
1 British thermal unit (Btu) ≈ 1.055 kJ (1,054–1,060 J, depending on which of several definitions of BTU is used)
1 Quad = 10¹⁵ BTU, one quadrillion BTU (short-scale) or about 1.055×10¹⁸ joules (1.055 exajoules or EJ)
Power
1 horsepower ≈ 745.7 W
1 ton of refrigeration (12,000 Btu/h) ≈ 3.517 kW
Pressure
1 inch of mercury = the pressure produced by a 1 inch height of mercury ≈ 3.386 kPa (33.8639 hPa, or 33.8639 millibars)
1 pound per square inch (psi) ≈ 6.895 kPa
Torque
1 pound-foot ≈ 1.356 N⋅m
Insulation
1 R-value (ft²⋅°F⋅h/Btu) ≈ 0.1761 RSI (K⋅m²/W)
Various combination units are in common use; these are straightforwardly defined based on the above basic units.
Sizing systems are used for various items in commerce, several of which are U.S.-specific:
US standard clothing size
American wire gauge is used for most metal wire.
Scoop (utensil) sizes, numbered by scoops per quart
Thickness of leather is measured in ounces; 1 oz equals 1/64 inch (0.4 mm).
Bolts and screws follow the Unified Thread Standard rather than the ISO metric screw thread standard.
Knitting needles in the United States are measured according to a non-linear unitless numerical system.
Thickness of aluminum foil is measured in mils (1/1000 inch, or 0.0254 mm) in the United States.
Cross-sectional area of electrical wire is measured in circular mils in the U.S. and Canada, one circular mil (cmil) being equal to the area of a circle one mil in diameter (about 506.7 μm²). Since this is so small, actual wire is commonly measured in thousands of cmils, called either kcmil or MCM.
The mil or thou is also sometimes used to mean a thousandth of an inch.
Sheet metal in the U.S. is commonly measured in gauge (not to be confused with the American wire gauge), which is derived from weight and thus differs by material.
Nominal Pipe Size is used for the outside diameter of pipes. Below NPS 14, the NPS number is not consistent with the pipe diameter in inches.
Copper tubing, however, is measured in nominal size, 1/8 inch less than the outside diameter.
The Schedule system is used for standard pipe thicknesses.
Alcohol content is frequently given in proof, which is twice the percentage of alcohol by volume.
The cord is used for volume of firewood.
The square is used to mean 100 square feet in construction.
Heat flux in the U.S. is measured in langleys.
Other names for U.S. customary units
The United States Code refers to these units as "traditional systems of weights and measures".
Other common ways of referring to the system are: customary, standard, English, or imperial (which refers to the post-1824 reform measures used throughout the British Empire and Commonwealth countries). Another term is the foot–pound–second (FPS) system, as opposed to the centimeter–gram–second (CGS) and meter–kilogram–second (MKS) systems.
Tools and fasteners with sizes measured in inches are sometimes called "SAE bolts" or "SAE wrenches" to differentiate them from their metric counterparts. The Society of Automotive Engineers (SAE) originally developed fasteners standards using U.S. units for the U.S. auto industry; the organization now uses metric units.
| Physical sciences | Measurement systems | null |
32344 | https://en.wikipedia.org/wiki/Variance | Variance | In probability theory and statistics, variance is the expected value of the squared deviation from the mean of a random variable. The standard deviation (SD) is obtained as the square root of the variance. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. It is the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by , , , , or .
An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished. Another disadvantage is that the variance is not finite for many distributions.
There are two distinct concepts that are both called "variance". One, as discussed above, is part of a theoretical probability distribution and is defined by an equation. The other variance is a characteristic of a set of observations. When variance is calculated from observations, those observations are typically measured from a real-world system. If all possible observations of the system are present, then the calculated variance is called the population variance. Normally, however, only a subset is available, and the variance calculated from this is called the sample variance. The variance calculated from a sample is considered an estimate of the full population variance. There are multiple ways to calculate an estimate of the population variance, as discussed in the section below.
The two kinds of variance are closely related. To see how, consider that a theoretical probability distribution can be used as a generator of hypothetical observations. If an infinite number of observations are generated using a distribution, then the sample variance calculated from that infinite set will match the value calculated using the distribution's equation for variance. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling.
Definition
The variance of a random variable X is the expected value of the squared deviation from the mean of X, μ = E[X]:
Var(X) = E[(X − μ)²]
This definition encompasses random variables that are generated by processes that are discrete, continuous, neither, or mixed. The variance can also be thought of as the covariance of a random variable with itself:
Var(X) = Cov(X, X)
The variance is also equivalent to the second cumulant of a probability distribution that generates X. The variance is typically designated as Var(X), or sometimes as V(X) or 𝕍(X), or symbolically as σ²_X or simply σ² (pronounced "sigma squared"). The expression for the variance can be expanded as follows:
Var(X) = E[(X − E[X])²] = E[X²] − 2E[X]E[X] + (E[X])² = E[X²] − (E[X])²
In other words, the variance of is equal to the mean of the square of minus the square of the mean of . This equation should not be used for computations using floating point arithmetic, because it suffers from catastrophic cancellation if the two components of the equation are similar in magnitude. For other numerically stable alternatives, see algorithms for calculating variance.
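A minimal sketch (not from the source; Python) contrasting the expanded formula with Welford's numerically stable one-pass algorithm; with data offset by a large constant, the E[X²] − E[X]² form loses most of its significant digits:
 def variance_naive(xs):
     n = len(xs)
     return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2  # cancellation-prone
 def variance_welford(xs):
     mean, m2 = 0.0, 0.0
     for k, x in enumerate(xs, start=1):
         delta = x - mean
         mean += delta / k
         m2 += delta * (x - mean)  # running sum of squared deviations
     return m2 / len(xs)
 data = [1e9 + 4, 1e9 + 7, 1e9 + 13, 1e9 + 16]  # true variance is 22.5
 print(variance_naive(data), variance_welford(data))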
Discrete random variable
If the generator of random variable X is discrete with probability mass function x₁ ↦ p₁, x₂ ↦ p₂, ..., xₙ ↦ pₙ, then
Var(X) = Σᵢ₌₁ⁿ pᵢ (xᵢ − μ)²
where μ is the expected value. That is,
μ = Σᵢ₌₁ⁿ pᵢ xᵢ
(When such a discrete weighted variance is specified by weights whose sum is not 1, then one divides by the sum of the weights.)
The variance of a collection of n equally likely values can be written as
Var(X) = (1/n) Σᵢ₌₁ⁿ (xᵢ − μ)²
where μ is the average value. That is,
μ = (1/n) Σᵢ₌₁ⁿ xᵢ
The variance of a set of n equally likely values can be equivalently expressed, without directly referring to the mean, in terms of the pairwise squared distances of points from each other:
Var(X) = (1/n²) Σ_{i<j} (xᵢ − xⱼ)² = (1/(2n²)) Σᵢ Σⱼ (xᵢ − xⱼ)²
Absolutely continuous random variable
If the random variable X has a probability density function f(x), and F(x) is the corresponding cumulative distribution function, then
Var(X) = ∫ (x − μ)² f(x) dx
or equivalently,
Var(X) = ∫ x² f(x) dx − μ²
where μ is the expected value of X given by
μ = ∫ x f(x) dx
In these formulas, the integrals with respect to dx and dF(x) are Lebesgue and Lebesgue–Stieltjes integrals, respectively.
If the function is Riemann-integrable on every finite interval then
where the integral is an improper Riemann integral.
Examples
Exponential distribution
The exponential distribution with parameter λ is a continuous distribution whose probability density function is given by
f(x) = λe^(−λx)
on the interval [0, ∞). Its mean can be shown to be
E[X] = ∫₀^∞ x λe^(−λx) dx = 1/λ
Using integration by parts and making use of the expected value already calculated, we have:
E[X²] = ∫₀^∞ x² λe^(−λx) dx = 2/λ²
Thus, the variance of X is given by
Var(X) = E[X²] − (E[X])² = 2/λ² − (1/λ)² = 1/λ²
Fair dice
A fair six-sided die can be modeled as a discrete random variable, X, with outcomes 1 through 6, each with equal probability 1/6. The expected value of X is (1 + 2 + 3 + 4 + 5 + 6)/6 = 7/2. Therefore, the variance of X is
Var(X) = (1/6) Σᵢ₌₁⁶ (i − 7/2)² = 35/12 ≈ 2.92
The general formula for the variance of the outcome, X, of an n-sided die is
Var(X) = E[X²] − (E[X])² = (n² − 1)/12
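A standard derivation (not from the source) of this general formula, written in LaTeX, using the closed forms for the sums of the first n integers and squares:
 \operatorname{Var}(X)
   = \frac{1}{n}\sum_{i=1}^{n} i^2 - \left(\frac{1}{n}\sum_{i=1}^{n} i\right)^2
   = \frac{(n+1)(2n+1)}{6} - \left(\frac{n+1}{2}\right)^2
   = \frac{n^2 - 1}{12}.
For n = 6 this gives 35/12, matching the fair-die computation above.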
Commonly used probability distributions
The following table lists the variance for some commonly used probability distributions.
Properties
Basic properties
Variance is non-negative because the squares are positive or zero:
Var(X) ≥ 0
The variance of a constant is zero.
Var(a) = 0
Conversely, if the variance of a random variable is 0, then it is almost surely a constant. That is, it always has the same value:
Var(X) = 0 if and only if there exists a value a such that P(X = a) = 1
Issues of finiteness
If a distribution does not have a finite expected value, as is the case for the Cauchy distribution, then the variance cannot be finite either. However, some distributions may not have a finite variance, despite their expected value being finite. An example is a Pareto distribution whose index k satisfies 1 < k ≤ 2.
Decomposition
The general formula for variance decomposition or the law of total variance is: If X and Y are two random variables, and the variance of X exists, then
Var(X) = E[Var(X | Y)] + Var(E[X | Y])
The conditional expectation of X given Y, and the conditional variance Var(X | Y), may be understood as follows. Given any particular value y of the random variable Y, there is a conditional expectation E[X | Y = y] given the event Y = y. This quantity depends on the particular value y; it is a function g(y) = E[X | Y = y]. That same function evaluated at the random variable Y is the conditional expectation E[X | Y] = g(Y).
In particular, if Y is a discrete random variable assuming possible values y₁, ..., yₙ with corresponding probabilities p₁, ..., pₙ, then in the formula for total variance, the first term on the right-hand side becomes
E[Var(X | Y)] = Σᵢ pᵢ σᵢ²
where σᵢ² = Var[X | Y = yᵢ]. Similarly, the second term on the right-hand side becomes
Var(E[X | Y]) = Σᵢ pᵢ (μᵢ − μ)²
where μᵢ = E[X | Y = yᵢ] and μ = Σᵢ pᵢ μᵢ. Thus the total variance is given by
Var(X) = Σᵢ pᵢ σᵢ² + Σᵢ pᵢ (μᵢ − μ)²
A similar formula is applied in analysis of variance, where the corresponding formula is
MS_total = MS_between + MS_within;
here MS refers to the Mean of the Squares. In linear regression analysis the corresponding formula is
MS_total = MS_regression + MS_residual.
This can also be derived from the additivity of variances, since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated.
Similar decompositions are possible for the sum of squared deviations (sum of squares, SS):
SS_total = SS_between + SS_within
Calculation from the CDF
The population variance for a non-negative random variable can be expressed in terms of the cumulative distribution function F using
Var(X) = 2∫₀^∞ u(1 − F(u)) du − (∫₀^∞ (1 − F(u)) du)²
This expression can be used to calculate the variance in situations where the CDF, but not the density, can be conveniently expressed.
Characteristic property
The second moment of a random variable attains the minimum value when taken around the first moment (i.e., mean) of the random variable, i.e. argmin_m E[(X − m)²] = E[X]. Conversely, if a continuous function φ satisfies argmin_m E[φ(X − m)] = E[X] for all random variables X, then it is necessarily of the form φ(x) = a x² + b, where a > 0. This also holds in the multidimensional case.
Units of measurement
Unlike the expected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in meters squared. For this reason, describing data sets via their standard deviation or root mean square deviation is often preferred over using the variance. In the dice example the standard deviation is √(35/12) ≈ 1.71, slightly larger than the expected absolute deviation of 1.5.
The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution.
Propagation
Addition and multiplication by a constant
Variance is invariant with respect to changes in a location parameter. That is, if a constant is added to all values of the variable, the variance is unchanged:
Var(X + a) = Var(X)
If all values are scaled by a constant, the variance is scaled by the square of that constant:
Var(aX) = a² Var(X)
The variance of a sum of two random variables is given by
Var(aX + bY) = a² Var(X) + b² Var(Y) + 2ab Cov(X, Y)
Var(aX − bY) = a² Var(X) + b² Var(Y) − 2ab Cov(X, Y)
where Cov(X, Y) is the covariance.
Linear combinations
In general, for the sum of N random variables {X₁, ..., X_N}, the variance becomes:
Var(Σᵢ Xᵢ) = Σᵢ Σⱼ Cov(Xᵢ, Xⱼ) = Σᵢ Var(Xᵢ) + Σ_{i≠j} Cov(Xᵢ, Xⱼ)
see also general Bienaymé's identity.
These results lead to the variance of a linear combination as:
Var(Σᵢ aᵢ Xᵢ) = Σᵢ aᵢ² Var(Xᵢ) + 2 Σ_{i<j} aᵢ aⱼ Cov(Xᵢ, Xⱼ)
If the random variables X₁, ..., X_N are such that
Cov(Xᵢ, Xⱼ) = 0 for all i ≠ j,
then they are said to be uncorrelated. It follows immediately from the expression given earlier that if the random variables X₁, ..., X_N are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically:
Var(Σᵢ Xᵢ) = Σᵢ Var(Xᵢ)
Since independent random variables are always uncorrelated, the equation above holds in particular when the random variables are independent. Thus, independence is sufficient but not necessary for the variance of the sum to equal the sum of the variances.
Matrix notation for the variance of a linear combination
Define X as a column vector of n random variables X₁, ..., Xₙ, and c as a column vector of n scalars c₁, ..., cₙ. Therefore, cᵀX is a linear combination of these random variables, where cᵀ denotes the transpose of c. Also let C be the covariance matrix of X. The variance of cᵀX is then given by:
Var(cᵀX) = cᵀ C c
This implies that the variance of the mean can be written as (with 1 a column vector of ones)
Var(X̄) = Var((1/n) 1ᵀX) = (1/n²) 1ᵀ C 1
Sum of variables
Sum of uncorrelated variables
One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of uncorrelated random variables is the sum of their variances:
Var(Σᵢ Xᵢ) = Σᵢ Var(Xᵢ)
This statement is called the Bienaymé formula and was discovered in 1853. It is often made with the stronger condition that the variables are independent, but being uncorrelated suffices. So if all the variables have the same variance σ², then, since division by n is a linear transformation, this formula immediately implies that the variance of their mean is
Var(X̄) = Var((1/n) Σᵢ Xᵢ) = (1/n²) Σᵢ Var(Xᵢ) = σ²/n
That is, the variance of the mean decreases when n increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem.
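A minimal sketch (not from the source; Python) checking the σ²/n law by Monte Carlo for standard normal draws (σ² = 1):
 import random
 n, trials = 25, 20_000
 means = [sum(random.gauss(0, 1) for _ in range(n)) / n for _ in range(trials)]
 grand = sum(means) / trials
 var_of_mean = sum((m - grand) ** 2 for m in means) / trials
 print(var_of_mean)  # close to 1/n = 0.04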
To prove the initial statement, it suffices to show that
Var(X + Y) = Var(X) + Var(Y)
The general result then follows by induction. Starting with the definition,
Var(X + Y) = E[(X + Y)²] − (E[X + Y])²
Using the linearity of the expectation operator and the assumption of independence (or uncorrelatedness) of X and Y, this further simplifies as follows:
Var(X + Y) = E[X²] + 2E[XY] + E[Y²] − ((E[X])² + 2E[X]E[Y] + (E[Y])²)
= (E[X²] − (E[X])²) + (E[Y²] − (E[Y])²) + 2(E[XY] − E[X]E[Y])
= Var(X) + Var(Y) + 2Cov(X, Y) = Var(X) + Var(Y)
since Cov(X, Y) = 0 for uncorrelated variables.
Sum of correlated variables
Sum of correlated variables with fixed sample size
In general, the variance of the sum of n variables is the sum of their covariances:
Var(Σᵢ₌₁ⁿ Xᵢ) = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ Cov(Xᵢ, Xⱼ) = Σᵢ₌₁ⁿ Var(Xᵢ) + 2 Σ_{1≤i<j≤n} Cov(Xᵢ, Xⱼ)
(Note: The second equality comes from the fact that Cov(Xᵢ, Xᵢ) = Var(Xᵢ).)
Here, Cov(⋅, ⋅) is the covariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. The next expression states equivalently that the variance of the sum is the sum of the diagonal of covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric. This formula is used in the theory of Cronbach's alpha in classical test theory.
So, if the variables have equal variance σ² and the average correlation of distinct variables is ρ, then the variance of their mean is
Var(X̄) = σ²/n + ((n − 1)/n) ρ σ²
This implies that the variance of the mean increases with the average of the correlations. In other words, additional correlated observations are not as effective as additional independent observations at reducing the uncertainty of the mean. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to
Var(X̄) = 1/n + ((n − 1)/n) ρ
This formula is used in the Spearman–Brown prediction formula of classical test theory. This converges to ρ if n goes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have
lim_{n→∞} Var(X̄) = ρ
Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables.
Sum of uncorrelated variables with random sample size
There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample size N is a random variable whose variation adds to the variation of X, such that,
Var(Σᵢ₌₁ᴺ Xᵢ) = E[N] Var(X) + Var(N) (E[X])²
which follows from the law of total variance.
If N has a Poisson distribution, then E[N] = Var(N), with estimator n = N. So, the estimator of Var(Σ Xᵢ) becomes n S²ₓ + n X̄², giving
(see standard error of the sample mean).
Weighted sum of variables
The scaling property and the Bienaymé formula, along with the property of the covariance Cov(aX, bY) = ab Cov(X, Y), jointly imply that
Var(aX ± bY) = a² Var(X) + b² Var(Y) ± 2ab Cov(X, Y)
This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionally large weight in the variance of the total. For example, if X and Y are uncorrelated and the weight of X is two times the weight of Y, then the weight of the variance of X will be four times the weight of the variance of Y.
The expression above can be extended to a weighted sum of multiple variables:
Var(Σᵢ aᵢ Xᵢ) = Σᵢ aᵢ² Var(Xᵢ) + 2 Σ_{i<j} aᵢ aⱼ Cov(Xᵢ, Xⱼ)
Product of variables
Product of independent variables
If two variables X and Y are independent, the variance of their product is given by
Var(XY) = [E(X)]² Var(Y) + [E(Y)]² Var(X) + Var(X) Var(Y)
Equivalently, using the basic properties of expectation, it is given by
Var(XY) = E(X²) E(Y²) − [E(X)]² [E(Y)]²
Product of statistically dependent variables
In general, if two variables are statistically dependent, then the variance of their product is given by:
Var(XY) = E[X²Y²] − [E(XY)]² = Cov(X², Y²) + [Var(X) + (E(X))²][Var(Y) + (E(Y))²] − [Cov(X, Y) + E(X)E(Y)]²
Arbitrary functions
The delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables: see Taylor expansions for the moments of functions of random variables. For example, the approximate variance of a function of one variable is given by
Var[f(X)] ≈ (f′(E[X]))² Var(X)
provided that f is twice differentiable and that the mean and variance of X are finite.
Population variance and sample variance
Real-world observations such as the measurements of yesterday's rain throughout the day typically cannot be complete sets of all possible observations that could be made. As such, the variance calculated from the finite set will in general not match the variance that would have been calculated from the full population of possible observations. This means that one estimates the mean and variance from a limited set of observations by using an estimator equation. The estimator is a function of the sample of n observations drawn without observational bias from the whole population of potential observations. In this example, the sample would be the set of actual measurements of yesterday's rainfall from available rain gauges within the geography of interest.
The simplest estimators for population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance – these are consistent estimators (they converge to the value of the whole population as the number of samples increases) but can be improved. Most simply, the sample variance is computed as the sum of squared deviations about the (sample) mean, divided by n as the number of samples. However, using values other than n improves the estimator in various ways. Four common values for the denominator are n, n − 1, n + 1, and n − 1.5: n is the simplest (the variance of the sample), n − 1 eliminates bias, n + 1 minimizes mean squared error for the normal distribution, and n − 1.5 mostly eliminates bias in unbiased estimation of standard deviation for the normal distribution.
Firstly, if the true population mean is unknown, then the sample variance (which uses the sample mean in place of the true mean) is a biased estimator: it underestimates the variance by a factor of (n − 1) / n; correcting this factor, resulting in the sum of squared deviations about the sample mean divided by n − 1 instead of n, is called Bessel's correction. The resulting estimator is unbiased and is called the (corrected) sample variance or unbiased sample variance. If the mean is determined in some other way than from the same samples used to estimate the variance, then this bias does not arise, and the variance can safely be estimated as that of the samples about the (independently known) mean.
Secondly, the sample variance does not generally minimize mean squared error between sample variance and population variance. Correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on the excess kurtosis of the population (see mean squared error: variance) and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger than n − 1) and is a simple example of a shrinkage estimator: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing by n + 1 (instead of n − 1 or n) minimizes mean squared error. The resulting estimator is biased, however, and is known as the biased sample variance.
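A minimal sketch (not from the source; Python) of the bias discussed above: averaged over many samples, the uncorrected estimator comes out near (n − 1)/n times the true variance, while Bessel's correction recovers it:
 import random
 def both_variances(xs):
     n, m = len(xs), sum(xs) / len(xs)
     ss = sum((x - m) ** 2 for x in xs)
     return ss / n, ss / (n - 1)  # biased, unbiased
 n, trials = 5, 50_000            # true variance is 2**2 = 4
 biased = unbiased = 0.0
 for _ in range(trials):
     b, u = both_variances([random.gauss(0, 2) for _ in range(n)])
     biased += b / trials
     unbiased += u / trials
 print(biased, unbiased)  # about 3.2 (= 4 * 4/5) and about 4.0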
Population variance
In general, the population variance of a finite population of size N with values xᵢ is given by
σ² = (1/N) Σᵢ₌₁ᴺ (xᵢ − μ)²
where the population mean is μ = (1/N) Σᵢ₌₁ᴺ xᵢ and σ² = E[(x − μ)²], where E is the expectation value operator.
The population variance can also be computed using
σ² = (1/N²) Σ_{i<j} (xᵢ − xⱼ)² = (1/(2N²)) Σᵢ₌₁ᴺ Σⱼ₌₁ᴺ (xᵢ − xⱼ)²
(The right side has duplicate terms in the sum while the middle side has only unique terms to sum.) This is true because
(1/(2N²)) Σᵢ Σⱼ (xᵢ − xⱼ)² = (1/(2N²)) Σᵢ Σⱼ (xᵢ² − 2xᵢxⱼ + xⱼ²) = (1/N) Σᵢ xᵢ² − μ² = σ²
The population variance matches the variance of the generating probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations.
Sample variance
In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on a sample of the population. This is generally referred to as sample variance or empirical variance. Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution.
We take a sample with replacement of n values Y₁, ..., Yₙ from the population of size N, where n < N, and estimate the variance on the basis of this sample. Directly taking the variance of the sample data gives the average of the squared deviations:
σ̃²_Y = (1/n) Σᵢ₌₁ⁿ (Yᵢ − Ȳ)²
(See the section Population variance for the derivation of this formula.) Here, Ȳ denotes the sample mean:
Ȳ = (1/n) Σᵢ₌₁ⁿ Yᵢ
Since the Yᵢ are selected randomly, both Ȳ and σ̃²_Y are random variables. Their expected values can be evaluated by averaging over the ensemble of all possible samples {Yᵢ} of size n from the population. For σ̃²_Y this gives:
E[σ̃²_Y] = ((n − 1)/n) σ²_Y
Here σ²_Y = E[(Y − μ)²], as derived in the section Population variance, and the vanishing of the cross terms E[(Yᵢ − μ)(Yⱼ − μ)] = 0 for i ≠ j, due to the independence of Yᵢ and Yⱼ, are used.
Hence σ̃²_Y gives an estimate of the population variance that is biased by a factor of (n − 1)/n, as the expectation value of σ̃²_Y is smaller than the population variance (true variance) by that factor. For this reason, σ̃²_Y is referred to as the biased sample variance.
Correcting for this bias yields the unbiased sample variance, denoted s²:
s² = (n/(n − 1)) σ̃²_Y = (1/(n − 1)) Σᵢ₌₁ⁿ (Yᵢ − Ȳ)²
Either estimator may be simply referred to as the sample variance when the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution.
The use of the term n − 1 is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance). The square root is a concave function and thus introduces negative bias (by Jensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is biased. The unbiased estimation of standard deviation is a technically involved problem, though for the normal distribution using the term n − 1.5 yields an almost unbiased estimator.
The unbiased sample variance is a U-statistic for the function ƒ(y1, y2) = (y1 − y2)2/2, meaning that it is obtained by averaging a 2-sample statistic over 2-element subsets of the population.
Example
For a set of numbers {10, 15, 30, 45, 57, 52, 63, 72, 81, 93, 102, 105}, if this set is the whole data population for some measurement, then variance is the population variance 932.743 as the sum of the squared deviations about the mean of this set, divided by 12 as the number of the set members. If the set is a sample from the whole population, then the unbiased sample variance can be calculated as 1017.538 that is the sum of the squared deviations about the mean of the sample, divided by 11 instead of 12. A function VAR.S in Microsoft Excel gives the unbiased sample variance while VAR.P is for population variance.
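A minimal sketch (not from the source; Python) reproducing this example with the standard library:
 import statistics
 data = [10, 15, 30, 45, 57, 52, 63, 72, 81, 93, 102, 105]
 print(round(statistics.pvariance(data), 3))  # 932.743  (divides by 12)
 print(round(statistics.variance(data), 3))   # 1017.538 (divides by 11)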
Distribution of the sample variance
Being a function of random variables, the sample variance is itself a random variable, and it is natural to study its distribution. In the case that Yᵢ are independent observations from a normal distribution, Cochran's theorem shows that the unbiased sample variance S² follows a scaled chi-squared distribution (see also: asymptotic properties and an elementary proof):
(n − 1) S²/σ² ~ χ²_{n−1}
where σ² is the population variance. As a direct consequence, it follows that
E(S²) = σ²
and
Var(S²) = 2σ⁴/(n − 1)
If the Yᵢ are independent and identically distributed, but not necessarily normally distributed, then
Var(S²) = (σ⁴/n) (κ − 1 + 2/(n − 1)) = (1/n) (μ₄ − ((n − 3)/(n − 1)) σ⁴)
where κ is the kurtosis of the distribution and μ₄ is the fourth central moment.
If the conditions of the law of large numbers hold for the squared observations, S2 is a consistent estimator of σ2. One can see indeed that the variance of the estimator tends asymptotically to zero. An asymptotically equivalent formula was given in Kenney and Keeping (1951:164), Rose and Smith (2002:264), and Weisstein (n.d.).
Samuelson's inequality
Samuelson's inequality is a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated. Values must lie within the limits ȳ ± σ̃_Y √(n − 1), where ȳ is the sample mean and σ̃_Y the biased standard deviation.
Relations with the harmonic and arithmetic means
It has been shown that for a sample {yi} of positive real numbers,
where ymax is the maximum of the sample, A is the arithmetic mean, H is the harmonic mean of the sample and is the (biased) variance of the sample.
This bound has been improved, and it is known that variance is bounded by
where ymin is the minimum of the sample.
Tests of equality of variances
The F-test of equality of variances and the chi square tests are adequate when the sample is normally distributed. Non-normality makes testing for the equality of two or more variances more difficult.
Several non-parametric tests have been proposed: these include the Barton–David–Ansari–Freund–Siegel–Tukey test, the Capon test, the Mood test, the Klotz test and the Sukhatme test. The Sukhatme test applies to two variances and requires that both medians be known and equal to zero. The Mood, Klotz, Capon and Barton–David–Ansari–Freund–Siegel–Tukey tests also apply to two variances. They allow the median to be unknown but do require that the two medians are equal.
The Lehmann test is a parametric test of two variances. Of this test there are several variants known. Other tests of the equality of variances include the Box test, the Box–Anderson test and the Moses test.
Resampling methods, which include the bootstrap and the jackknife, may be used to test the equality of variances.
Moment of inertia
The variance of a probability distribution is analogous to the moment of inertia in classical mechanics of a corresponding mass distribution along a line, with respect to rotation about its center of mass. It is because of this analogy that such things as the variance are called moments of probability distributions. The covariance matrix is related to the moment of inertia tensor for multivariate distributions. The moment of inertia of a cloud of n points with a covariance matrix of is given by
This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to the x axis and distributed along it. The covariance matrix might look like
That is, there is the most variance in the x direction. Physicists would consider this to have a low moment about the x axis so the moment-of-inertia tensor is
Semivariance
The semivariance is calculated in the same manner as the variance, but only those observations that fall below the mean are included in the calculation:
Semivariance = (1/n) Σ_{i: xᵢ < μ} (xᵢ − μ)²
It is also described as a specific measure in different fields of application. For skewed distributions, the semivariance can provide additional information that a variance does not.
For inequalities associated with the semivariance, see .
Etymology
The term variance was first introduced by Ronald Fisher in his 1918 paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance:
The great body of available statistics show us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and, therefore, that the variability may be uniformly measured by the standard deviation corresponding to the square root of the mean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations σ₁ and σ₂, it is found that the distribution, when both causes act together, has a standard deviation √(σ₁² + σ₂²). It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance...
Generalizations
For complex variables
If X is a scalar complex-valued random variable, with values in ℂ, then its variance is E[(X − μ)(X − μ)*], where X* is the complex conjugate of X. This variance is a real scalar.
For vector-valued random variables
As a matrix
If X is a vector-valued random variable, with values in ℝⁿ, and thought of as a column vector, then a natural generalization of variance is E[(X − μ)(X − μ)ᵀ], where μ = E(X) and (X − μ)ᵀ is the transpose of X − μ, and so is a row vector. The result is a positive semi-definite square matrix, commonly referred to as the variance-covariance matrix (or simply as the covariance matrix).
If X is a vector- and complex-valued random variable, with values in ℂⁿ, then the covariance matrix is E[(X − μ)(X − μ)†], where (X − μ)† is the conjugate transpose of X − μ. This matrix is also positive semi-definite and square.
As a scalar
Another generalization of variance for vector-valued random variables X, which results in a scalar value rather than in a matrix, is the generalized variance det(C), the determinant of the covariance matrix. The generalized variance can be shown to be related to the multidimensional scatter of points around their mean.
A different generalization is obtained by considering the equation for the scalar variance, Var(X) = E[(X − μ)²], and reinterpreting (X − μ)² as the squared Euclidean distance between the random variable and its mean, or, simply as the scalar product of the vector X − μ with itself. This results in E[(X − μ)ᵀ(X − μ)] = tr(C), which is the trace of the covariance matrix.
| Mathematics | Statistics and probability | null |
32347 | https://en.wikipedia.org/wiki/Vacuole | Vacuole | A vacuole () is a membrane-bound organelle which is present in plant and fungal cells and some protist, animal, and bacterial cells. Vacuoles are essentially enclosed compartments which are filled with water containing inorganic and organic molecules including enzymes in solution, though in certain cases they may contain solids which have been engulfed. Vacuoles are formed by the fusion of multiple membrane vesicles and are effectively just larger forms of these. The organelle has no basic shape or size; its structure varies according to the requirements of the cell.
Discovery
Antonie van Leeuwenhoek described the plant vacuole in 1676. Contractile vacuoles ("stars") were first observed by Spallanzani (1776) in protozoa, although mistaken for respiratory organs. Dujardin (1841) named these "stars" as vacuoles. In 1842, Schleiden applied the term for plant cells, to distinguish the structure with cell sap from the rest of the protoplasm. In 1885, de Vries named the vacuole membrane the tonoplast.
Christian de Duve discovered mammalian lysosomes using biochemical methods in the mid-1950s. de Duve named lysosomes based on their biochemical properties (from the Greek lysis, "digestion", and soma, "body"). Their physical form was confirmed shortly afterwards by electron microscopy. Because the lysosome shares many properties with vacuoles across taxonomic kingdoms, the notion that vacuoles and lysosomes are distinctly different organelles is more historical than functional.
Function
The function and significance of vacuoles varies greatly according to the type of cell in which they are present, having much greater prominence in the cells of plants, fungi and certain protists than those of animals and bacteria. In general, the functions of the vacuole include:
Isolating materials that might be harmful or a threat to the cell
Containing waste products
Containing water in plant cells
Maintaining internal hydrostatic pressure or turgor within the cell
Maintaining an acidic internal pH
Containing small molecules
Exporting unwanted substances from the cell
Allowing plants to support structures such as leaves and flowers due to the pressure of the central vacuole
By increasing in size, allowing the germinating plant or its organs (such as leaves) to grow very quickly, using up mostly just water.
In seeds, storing proteins needed for germination (these are kept in 'protein bodies', which are modified vacuoles).
Vacuoles also play a major role in autophagy, maintaining a balance between biogenesis (production) and degradation (or turnover), of many substances and cell structures in certain organisms. They also aid in the lysis and recycling of misfolded proteins that have begun to build up within the cell. Thomas Boller and others proposed that the vacuole participates in the destruction of invading bacteria and Robert B. Mellor proposed organ-specific forms have a role in 'housing' symbiotic bacteria. In protists, vacuoles have the additional function of storing food which has been absorbed by the organism and assisting in the digestive and waste management process for the cell.
In animal cells, vacuoles perform mostly subordinate roles, assisting in larger processes of exocytosis and endocytosis.
Animal vacuoles are smaller than their plant counterparts but also usually greater in number. There are also animal cells that do not have any vacuoles.
Exocytosis is the extrusion process of proteins and lipids from the cell. These materials are absorbed into secretory granules within the Golgi apparatus before being transported to the cell membrane and secreted into the extracellular environment. In this capacity, vacuoles are simply storage vesicles which allow for the containment, transport and disposal of selected proteins and lipids to the extracellular environment of the cell.
Endocytosis is the reverse of exocytosis and can occur in a variety of forms. Phagocytosis ("cell eating") is the process by which bacteria, dead tissue, or other bits of material visible under the microscope are engulfed by cells. The material makes contact with the cell membrane, which then invaginates. The invagination is pinched off, leaving the engulfed material in the membrane-enclosed vacuole and the cell membrane intact. Pinocytosis ("cell drinking") is essentially the same process, the difference being that the substances ingested are in solution and not visible under the microscope. Phagocytosis and pinocytosis are both undertaken in association with lysosomes which complete the breakdown of the material which has been engulfed.
Salmonella is able to survive and reproduce in the vacuoles of several mammal species after being engulfed.
The vacuole probably evolved several times independently, even within the Viridiplantae.
Types
Central
Most mature plant cells have one large vacuole that typically occupies more than 30% of the cell's volume, and that can occupy as much as 80% of the volume for certain cell types and conditions. Strands of cytoplasm often run through the vacuole.
A vacuole is surrounded by a membrane called the tonoplast (word origin: Gk tón(os) + -o-, meaning “stretching”, “tension”, “tone” + comb. form repr. Gk plastós formed, molded) and filled with cell sap. Also called the vacuolar membrane, the tonoplast is the cytoplasmic membrane surrounding a vacuole, separating the vacuolar contents from the cell's cytoplasm. As a membrane, it is mainly involved in regulating the movements of ions around the cell, and isolating materials that might be harmful or a threat to the cell.
Transport of protons from the cytosol to the vacuole stabilizes cytoplasmic pH, while making the vacuolar interior more acidic creating a proton motive force which the cell can use to transport nutrients into or out of the vacuole. The low pH of the vacuole also allows degradative enzymes to act. Although single large vacuoles are most common, the size and number of vacuoles may vary in different tissues and stages of development. For example, developing cells in the meristems contain small provacuoles and cells of the vascular cambium have many small vacuoles in the winter and one large one in the summer.
Aside from storage, the main role of the central vacuole is to maintain turgor pressure against the cell wall. Proteins found in the tonoplast (aquaporins) control the flow of water into and out of the vacuole, while active transport pumps potassium (K+) ions into and out of the vacuolar interior. Due to osmosis, water will diffuse into the vacuole, placing pressure on the cell wall. If water loss leads to a significant decline in turgor pressure, the cell will plasmolyze. Turgor pressure exerted by vacuoles is also required for cellular elongation: as the cell wall is partially degraded by the action of expansins, the less rigid wall is expanded by the pressure coming from within the vacuole. Turgor pressure exerted by the vacuole is also essential in supporting plants in an upright position. Another function of a central vacuole is that it pushes all contents of the cell's cytoplasm against the cellular membrane, and thus keeps the chloroplasts closer to light. Most plants store chemicals in the vacuole that react with chemicals in the cytosol. If the cell is broken, for example by a herbivore, then the two chemicals can react forming toxic chemicals. In garlic, alliin and the enzyme alliinase are normally separated but form allicin if the vacuole is broken. A similar reaction is responsible for the production of syn-propanethial-S-oxide when onions are cut.
Vacuoles in fungal cells perform similar functions to those in plants and there can be more than one vacuole per cell. In yeast cells the vacuole (Vac7) is a dynamic structure that can rapidly modify its morphology. They are involved in many processes including the homeostasis of cell pH and the concentration of ions, osmoregulation, storing amino acids and polyphosphate and degradative processes. Toxic ions, such as strontium (), cobalt(II) (), and lead(II) () are transported into the vacuole to isolate them from the rest of the cell.
Contractile
A contractile vacuole is a specialized osmoregulatory organelle that is present in many free-living protists. The contractile vacuole is part of the contractile vacuole complex, which includes radial arms and a spongiome. The contractile vacuole complex periodically contracts to remove excess water and ions from the cell, balancing the water that flows into the cell. As the contractile vacuole slowly takes in water it enlarges; this phase is called diastole. When it reaches its threshold, the contractile vacuole contracts (systole) to expel the water.
Digestive
Food vacuoles (also called digestive vacuoles) are organelles found in ciliates and in Plasmodium falciparum, a protozoan parasite that causes malaria.
Histopathology
In histopathology, vacuolization is the formation of vacuoles or vacuole-like structures, within or adjacent to cells. It is an unspecific sign of disease.
| Biology and health sciences | Organelles and other cell parts | null |
32353 | https://en.wikipedia.org/wiki/Virtual%20machine | Virtual machine | In computing, a virtual machine (VM) is the virtualization or emulation of a computer system. Virtual machines are based on computer architectures and provide the functionality of a physical computer. Their implementations may involve specialized hardware, software, or a combination of the two.
Virtual machines differ and are organized by their function, shown here:
System virtual machines (also called full virtualization VMs, SysVM, or SYS-VM) provide a substitute for a real machine. They provide the functionality needed to execute entire operating systems. A hypervisor uses native execution to share and manage hardware, allowing for multiple environments that are isolated from one another yet exist on the same physical machine. Modern hypervisors use hardware-assisted virtualization, with virtualization-specific hardware features on the host CPUs providing assistance to hypervisors.
Process virtual machines are designed to execute computer programs in a platform-independent environment.
Some virtual machine emulators, such as QEMU and video game console emulators, are designed to also emulate (or "virtually imitate") different system architectures, thus allowing execution of software applications and operating systems written for another CPU or architecture. OS-level virtualization allows the resources of a computer to be partitioned via the kernel. The terms are not universally interchangeable.
Definitions
System virtual machines
A "virtual machine" was originally defined by Popek and Goldberg as "an efficient, isolated duplicate of a real computer machine." Current use includes virtual machines that have no direct correspondence to any real hardware. The physical, "real-world" hardware running the VM is generally referred to as the 'host', and the virtual machine emulated on that machine is generally referred to as the 'guest'. A host can emulate several guests, each of which can emulate different operating systems and hardware platforms.
The desire to run multiple operating systems was the initial motive for virtual machines, so as to allow time-sharing among several single-tasking operating systems. In some respects, a system virtual machine can be considered a generalization of the concept of virtual memory that historically preceded it. IBM's CP/CMS, the first systems to allow full virtualization, implemented time sharing by providing each user with a single-user operating system, the Conversational Monitor System (CMS). Unlike virtual memory, a system virtual machine allowed the user to write privileged instructions in their code. This approach had certain advantages, such as adding input/output devices not allowed by the standard system.
As technology has evolved virtual memory for purposes of virtualization, new systems of memory overcommitment may be applied to manage memory sharing among multiple virtual machines on one computer operating system. It may be possible to share memory pages that have identical contents among multiple virtual machines that run on the same physical machine, which may result in mapping them to the same physical page, by a technique termed kernel same-page merging (KSM). This is especially useful for read-only pages, such as those holding code segments, which is the case for multiple virtual machines running the same or similar software, software libraries, web servers, middleware components, etc. The guest operating systems do not need to be compliant with the host hardware, thus making it possible to run different operating systems on the same computer (e.g., Windows, Linux, or prior versions of an operating system) to support future software.
The use of virtual machines to support separate guest operating systems is popular in regard to embedded systems. A typical use would be to run a real-time operating system simultaneously with a preferred complex operating system, such as Linux or Windows. Another use would be for novel and unproven software still in the developmental stage, so it runs inside a sandbox. Virtual machines have other advantages for operating system development and may include improved debugging access and faster reboots.
Multiple VMs running their own guest operating system are frequently engaged for server consolidation.
Process virtual machines
A process VM, sometimes called an application virtual machine, or Managed Runtime Environment (MRE), runs as a normal application inside a host OS and supports a single process. It is created when that process is started and destroyed when it exits. Its purpose is to provide a platform-independent programming environment that abstracts away details of the underlying hardware or operating system and allows a program to execute in the same way on any platform.
A process VM provides a high-level abstraction, namely that of a high-level programming language (compared to the low-level ISA abstraction of the system VM). Process VMs are implemented using an interpreter; performance comparable to compiled programming languages can be achieved by the use of just-in-time compilation.
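A minimal sketch (not from the source; Python) of the interpreter idea: a tiny stack machine executing a portable instruction list, the way a process VM executes bytecode independently of the host ISA:
 def run(bytecode):
     stack = []
     for op, arg in bytecode:
         if op == "PUSH":
             stack.append(arg)
         elif op == "ADD":
             b, a = stack.pop(), stack.pop()
             stack.append(a + b)
         elif op == "MUL":
             b, a = stack.pop(), stack.pop()
             stack.append(a * b)
     return stack.pop()
 # (2 + 3) * 4, compiled once, runnable wherever the interpreter runs:
 program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
 print(run(program))  # 20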
This type of VM has become popular with the Java programming language, which is implemented using the Java virtual machine. Other examples include the Parrot virtual machine and the .NET Framework, which runs on a VM called the Common Language Runtime. All of them can serve as an abstraction layer for any computer language.
A special case of process VMs are systems that abstract over the communication mechanisms of a (potentially heterogeneous) computer cluster. Such a VM does not consist of a single process, but one process per physical machine in the cluster. They are designed to ease the task of programming concurrent applications by letting the programmer focus on algorithms rather than the communication mechanisms provided by the interconnect and the OS. They do not hide the fact that communication takes place, and as such do not attempt to present the cluster as a single machine.
Unlike other process VMs, these systems do not provide a specific programming language, but are embedded in an existing language; typically such a system provides bindings for several languages (e.g., C and Fortran). Examples are Parallel Virtual Machine (PVM) and Message Passing Interface (MPI).
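A short sketch of the message-passing style these systems support, using the mpi4py bindings for MPI (this assumes the mpi4py package and an MPI runtime are installed; run it with, e.g., mpiexec -n 4 python script.py):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD          # one process per cluster node (or core)
rank = comm.Get_rank()

if rank == 0:
    # The programmer writes explicit communication: the VM hides the
    # interconnect and OS details, not the fact that messages exist.
    for source in range(1, comm.Get_size()):
        greeting = comm.recv(source=source, tag=0)
        print(greeting)
else:
    comm.send(f"hello from rank {rank}", dest=0, tag=0)
```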
History
Both system virtual machines and process virtual machines date to the 1960s and remain areas of active development.
System virtual machines grew out of time-sharing, as notably implemented in the Compatible Time-Sharing System (CTSS). Time-sharing allowed multiple users to use a computer concurrently: each program appeared to have full access to the machine, but only one program was executed at a time, with the system switching between programs in time slices, saving and restoring state each time. This evolved into virtual machines, notably via IBM's research systems: the M44/44X, which used partial virtualization, and the CP-40 and SIMMON, which used full virtualization and were early examples of hypervisors. The first widely available virtual machine architecture was the CP-67/CMS (see History of CP/CMS for details). An important distinction was between using multiple virtual machines on one host system for time-sharing, as in M44/44X and CP-40, and using one virtual machine on a host system for prototyping, as in SIMMON. Emulators, with hardware emulation of earlier systems for compatibility, date back to the IBM System/360 in 1963, while software emulation (then called "simulation") predates it.
Process virtual machines arose originally as abstract platforms for an intermediate language used as the intermediate representation of a program by a compiler; early examples date to around 1964 with the META II compiler-writing system using it for both syntax description and target code generation. A notable 1966 example was the O-code machine, a virtual machine that executes O-code (object code) emitted by the front end of the BCPL compiler. This abstraction allowed the compiler to be easily ported to a new architecture by implementing a new back end that took the existing O-code and compiled it to machine code for the underlying physical machine. The Euler language used a similar design, with the intermediate language named P (portable). The approach was popularized around 1970 by Pascal, notably in the Pascal-P system (1973) and Pascal-S compiler (1975), in which the intermediate language was termed p-code and the resulting machine a p-code machine. This has been influential, and virtual machines in this sense have often been called p-code machines. In addition to being an intermediate language, Pascal p-code was also executed directly by an interpreter implementing the virtual machine, notably in UCSD Pascal (1978); this influenced later interpreters, notably the Java virtual machine (JVM). Another early example was SNOBOL4 (1967), which was written in the SNOBOL Implementation Language (SIL), an assembly language for a virtual machine, which was then targeted to physical machines by transpiling to their native assembler via a macro assembler. Macros have since fallen out of favor, however, so this approach has been less influential. Process virtual machines were a popular approach to implementing early microcomputer software, including Tiny BASIC and adventure games, from one-off implementations such as Pyramid 2000 to a general-purpose engine like Infocom's Z-machine, which Graham Nelson argues is "possibly the most portable virtual machine ever created".
Significant advances occurred in the implementation of Smalltalk-80, particularly the Deutsch/Schiffman implementation, which pushed just-in-time (JIT) compilation forward as an implementation approach for process virtual machines. Later notable Smalltalk VMs were VisualWorks, the Squeak Virtual Machine, and Strongtalk.
A related language that produced much virtual machine innovation was the Self programming language, which pioneered adaptive optimization and generational garbage collection. These techniques proved commercially successful in 1999 in the HotSpot Java virtual machine. Another innovation is the register-based virtual machine, which better matches the underlying hardware, in contrast to the stack-based virtual machine, which is a closer match for the programming language; this was pioneered in 1995 by the Dis virtual machine for the Limbo language.
Virtualization techniques
Full virtualization
In full virtualization, the virtual machine simulates enough hardware to allow an unmodified "guest" OS (one designed for the same instruction set) to be run in isolation. This approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family.
Examples outside the mainframe field include Parallels Workstation, Parallels Desktop for Mac, VirtualBox, Virtual Iron, Oracle VM, Virtual PC, Virtual Server, Hyper-V, VMware Fusion, VMware Workstation, VMware Server (discontinued, formerly called GSX Server), VMware ESXi, QEMU, Adeos, Mac-on-Linux, Win4BSD, Win4Lin Pro, and Egenera vBlade technology.
Hardware-assisted virtualization
In hardware-assisted virtualization, the hardware provides architectural support that facilitates building a virtual machine monitor and allows guest OSes to be run in isolation.
Hardware-assisted virtualization was first introduced on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating system offered by IBM as an official product.
In 2005 and 2006, Intel and AMD provided additional hardware to support virtualization. Sun Microsystems (now Oracle Corporation) added similar features in their UltraSPARC T-Series processors in 2005. Examples of virtualization platforms adapted to such hardware include KVM, VMware Workstation, VMware Fusion, Hyper-V, Windows Virtual PC, Xen, Parallels Desktop for Mac, Oracle VM Server for SPARC, VirtualBox and Parallels Workstation.
In 2006, first-generation 32- and 64-bit x86 hardware support was found to rarely offer performance advantages over software virtualization.
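On Linux, the presence of this hardware support is advertised through CPU feature flags: "vmx" for Intel VT-x and "svm" for AMD-V. Below is a small Linux-specific sketch; it is only a heuristic, since firmware settings can still disable the feature.

```python
def has_hw_virtualization():
    # Scan /proc/cpuinfo for the CPU flags that indicate
    # hardware-assisted virtualization support.
    try:
        with open("/proc/cpuinfo") as f:
            flags = set()
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
            return bool(flags & {"vmx", "svm"})
    except OSError:
        return False   # not Linux, or /proc unavailable

print("hardware virtualization support:", has_hw_virtualization())
```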
OS-level virtualization
In OS-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest" operating system environments share the same running instance of the operating system as the host system. Thus, the same operating system kernel is also used to implement the "guest" environments, and applications running in a given "guest" environment view it as a stand-alone system. The pioneer implementation was FreeBSD jails; other examples include Docker, Solaris Containers, OpenVZ, Linux-VServer, LXC, AIX Workload Partitions, Parallels Virtuozzo Containers, and iCore Virtual Accounts.
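A taste of the idea in miniature: chroot, one of the oldest such primitives, confines a process's view of the file system while still sharing the host kernel. Real container systems such as those named above combine this with kernel namespaces, control groups, and other mechanisms. The sketch below requires root privileges, and the path is a hypothetical placeholder for a prepared minimal file-system tree.

```python
import os

def enter_jail(new_root="/srv/jail"):   # hypothetical, pre-populated root
    os.chroot(new_root)                 # the process now sees new_root as "/"
    os.chdir("/")                       # drop any handle to the old root
```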
Snapshots
A snapshot is a state of a virtual machine, and generally its storage devices, at an exact point in time. A snapshot enables the virtual machine's state at the time of the snapshot to be restored later, effectively undoing any changes that occurred afterwards. This capability is useful as a backup technique, for example, prior to performing a risky operation.
Virtual machines frequently use virtual disks for their storage; in a very simple example, a 10-gigabyte hard disk drive is simulated with a 10-gigabyte flat file. Any requests by the VM for a location on its physical disk are transparently translated into an operation on the corresponding file. Once such a translation layer is present, however, it is possible to intercept the operations and send them to different files, depending on various criteria. Every time a snapshot is taken, a new file is created, and used as an overlay for its predecessors. New data is written to the topmost overlay; reading existing data, however, needs the overlay hierarchy to be scanned, resulting in accessing the most recent version. Thus, the entire stack of snapshots is virtually a single coherent disk; in that sense, creating snapshots works similarly to the incremental backup technique.
Other components of a virtual machine can also be included in a snapshot, such as the contents of its random-access memory (RAM), BIOS settings, or its configuration settings. The "save state" feature in video game console emulators is an example of such snapshots.
Restoring a snapshot consists of discarding or disregarding all overlay layers that are added after that snapshot, and directing all new changes to a new overlay.
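The following toy model (invented names, with dictionaries standing in for the overlay files) sketches the overlay mechanics described above: writes land in the topmost overlay, reads scan from newest to oldest, and restoring a snapshot discards the overlays added after it.

```python
class OverlayDisk:
    def __init__(self):
        self.layers = [{}]             # base layer; maps block number -> data

    def write(self, block, data):
        self.layers[-1][block] = data  # new data always lands in the top overlay

    def read(self, block):
        for layer in reversed(self.layers):   # most recent version wins
            if block in layer:
                return layer[block]
        return b"\x00"                 # unwritten blocks read as zeros

    def snapshot(self):
        self.layers.append({})         # freeze current state under a new overlay
        return len(self.layers) - 1    # snapshot id = index of the new top layer

    def restore(self, snap_id):
        self.layers = self.layers[:snap_id]  # discard overlays added afterwards
        self.layers.append({})               # new changes go to a fresh overlay

disk = OverlayDisk()
disk.write(0, b"v1")
snap = disk.snapshot()
disk.write(0, b"v2")                   # the "risky operation"
disk.restore(snap)
assert disk.read(0) == b"v1"           # the change is undone
```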
Migration
The snapshots described above can be moved to another host machine with its own hypervisor; when the VM is temporarily stopped, snapshotted, moved, and then resumed on the new host, this is known as migration. If the older snapshots are kept in sync regularly, this operation can be quite fast, and allow the VM to provide uninterrupted service while its prior physical host is, for example, taken down for physical maintenance.
Failover
Similar to the migration mechanism described above, failover allows the VM to continue operations if the host fails. Generally, failover is used when migration is no longer possible. In this case, the VM continues operation from the last-known coherent state rather than the current state, based on whatever materials the backup server was last provided with.
Nested virtualization
Nested virtualization refers to the ability to run a virtual machine within another, a concept that extends to arbitrary depth. In other words, nested virtualization refers to running one or more hypervisors inside another hypervisor. The nature of a nested guest virtual machine does not need to be homogeneous with its host virtual machine; for example, application virtualization can be deployed within a virtual machine created by using hardware virtualization.
Nested virtualization becomes more necessary as widespread operating systems gain built-in hypervisor functionality, which in a virtualized environment can be used only if the surrounding hypervisor supports nested virtualization; for example, Windows 7 is capable of running Windows XP applications inside a built-in virtual machine. Furthermore, moving already existing virtualized environments into a cloud, following the Infrastructure as a Service (IaaS) approach, is much more complicated if the destination IaaS platform does not support nested virtualization.
The way nested virtualization can be implemented on a particular computer architecture depends on the hardware-assisted virtualization capabilities it supports. If a particular architecture does not provide the hardware support required for nested virtualization, various software techniques are employed to enable it. Over time, more architectures have gained the required hardware support; for example, starting with the Haswell microarchitecture (announced in 2013), Intel has included VMCS shadowing, a technology that accelerates nested virtualization.
| Technology | Operating systems | null |
32354 | https://en.wikipedia.org/wiki/Virtual%20memory | Virtual memory | In computing, virtual memory, or virtual storage, is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory".
The computer's operating system, using a combination of hardware and software, maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage, as seen by a process or task, appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory. Address translation hardware in the CPU, often referred to as a memory management unit (MMU), automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities, utilizing, e.g., disk storage, to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer.
The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, ability to share memory used by libraries between processes, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging or segmentation.
Properties
Virtual memory makes application programming easier by hiding fragmentation of physical memory; by delegating to the kernel the burden of managing the memory hierarchy (eliminating the need for the program to handle overlays explicitly); and, when each process is run in its own dedicated address space, by obviating the need to relocate program code or to access memory with relative addressing.
Memory virtualization can be considered a generalization of the concept of virtual memory.
Usage
Virtual memory is an integral part of a modern computer architecture; implementations usually require hardware support, typically in the form of a memory management unit built into the CPU. While not necessary, emulators and virtual machines can employ hardware support to increase performance of their virtual memory implementations. Older operating systems, such as those for the mainframes of the 1960s, and those for personal computers of the early to mid-1980s (e.g., DOS), generally have no virtual memory functionality, though notable exceptions for mainframes of the 1960s include:
the Atlas Supervisor for the Atlas
THE multiprogramming system for the Electrologica X8 (software-based virtual memory without hardware support)
MCP for the Burroughs B5000
MTS, TSS/360 and CP/CMS for the IBM System/360 Model 67
Multics for the GE 645
The Time Sharing Operating System for the RCA Spectra 70/46
During the 1960s and early 1970s, computer memory was very expensive. The introduction of virtual memory allowed software systems with large memory demands to run on computers with less real memory. The savings from this provided a strong incentive to switch to virtual memory for all systems. The additional capability of providing virtual address spaces added another level of security and reliability, thus making virtual memory even more attractive to the marketplace.
Most modern operating systems that support virtual memory also run each process in its own dedicated address space. Each program thus appears to have sole access to the virtual memory. However, some older operating systems (such as OS/VS1 and OS/VS2 SVS) and even modern ones (such as IBM i) are single address space operating systems that run all processes in a single address space composed of virtualized memory.
Embedded systems and other special-purpose computer systems that require very fast and/or very consistent response times may opt not to use virtual memory due to decreased determinism; virtual memory systems trigger unpredictable traps that may produce unwanted and unpredictable delays in response to input, especially if the trap requires that data be read into main memory from secondary memory. The hardware to translate virtual addresses to physical addresses typically requires a significant chip area to implement, and not all chips used in embedded systems include that hardware, which is another reason some of those systems do not use virtual memory.
History
In the 1950s, all larger programs had to contain logic for managing primary and secondary storage, such as overlaying. Virtual memory was therefore introduced not only to extend primary memory, but to make such an extension as easy as possible for programmers to use. To allow for multiprogramming and multitasking, many early systems divided memory between multiple programs without virtual memory, such as early models of the PDP-10 via registers.
A claim that the concept of virtual memory was first developed by German physicist Fritz-Rudolf Güntsch at the Technische Universität Berlin in 1956 in his doctoral thesis, Logical Design of a Digital Computer with Multiple Asynchronous Rotating Drums and Automatic High Speed Memory Operation, does not stand up to careful scrutiny. The computer proposed by Güntsch (but never built) had an address space of 10^5 words which mapped exactly onto the 10^5 words of the drums, i.e. the addresses were real addresses and there was no form of indirect mapping, a key feature of virtual memory. What Güntsch did invent was a form of cache memory, since his high-speed memory was intended to contain a copy of some blocks of code or data taken from the drums. Indeed, he wrote (as quoted in translation): "The programmer need not respect the existence of the primary memory (he need not even know that it exists), for there is only one sort of addresses by which one can program as if there were only one storage." This is exactly the situation in computers with cache memory, one of the earliest commercial examples of which was the IBM System/360 Model 85. In the Model 85 all addresses were real addresses referring to the main core store. A semiconductor cache store, invisible to the user, held the contents of parts of the main store in use by the currently executing program. This is exactly analogous to Güntsch's system, designed as a means to improve performance, rather than to solve the problems involved in multi-programming.
The first true virtual memory system was that implemented at the University of Manchester to create a one-level storage system as part of the Atlas Computer. It used a paging mechanism to map the virtual addresses available to the programmer onto the real memory that consisted of 16,384 words of primary core memory with an additional 98,304 words of secondary drum memory. The addition of virtual memory into the Atlas also eliminated a looming programming problem: planning and scheduling data transfers between main and secondary memory and recompiling programs for each change of size of main memory. The first Atlas was commissioned in 1962 but working prototypes of paging had been developed by 1959.
As early as 1958, Robert S. Barton, working at Shell Research, suggested that main storage should be allocated automatically rather than having the programmer manage overlays from secondary memory, in effect virtual memory. By 1960 Barton was lead architect on the Burroughs B5000 project. From 1959 to 1961, W. R. Lonergan was manager of the Burroughs Product Planning Group, which included Barton, Donald Knuth as consultant, and Paul King. In May 1960, UCLA ran a two-week seminar, "Using and Exploiting Giant Computers", to which Paul King and two others were sent. Stan Gill gave a presentation on virtual memory in the Atlas I computer. Paul King took the ideas back to Burroughs, and it was determined that virtual memory should be designed into the core of the B5000. Burroughs Corporation released the B5000 in 1964 as the first commercial computer with virtual memory.
IBM developed the concept of hypervisors in their CP-40 and CP-67, and in 1972 provided it for the S/370 as Virtual Machine Facility/370. IBM introduced the Start Interpretive Execution (SIE) instruction as part of 370-XA on the 3081, and VM/XA versions of VM to exploit it.
Before virtual memory could be implemented in mainstream operating systems, many problems had to be addressed. Dynamic address translation required expensive and difficult-to-build specialized hardware; initial implementations slowed down access to memory slightly. There were worries that new system-wide algorithms utilizing secondary storage would be less effective than previously used application-specific algorithms. By 1969, the debate over virtual memory for commercial computers was over; an IBM research team led by David Sayre showed that their virtual memory overlay system consistently worked better than the best manually controlled systems. Throughout the 1970s, the IBM 370 series running their virtual-storage based operating systems provided a means for business users to migrate multiple older systems into fewer, more powerful, mainframes that had improved price/performance. The first minicomputer to introduce virtual memory was the Norwegian NORD-1; during the 1970s, other minicomputers implemented virtual memory, notably VAX models running VMS.
Virtual memory was introduced to the x86 architecture with the protected mode of the Intel 80286 processor, but its segment swapping technique scaled poorly to larger segment sizes. The Intel 80386 introduced paging support underneath the existing segmentation layer, enabling the page fault exception to chain with other exceptions without double fault. However, loading segment descriptors was an expensive operation, causing operating system designers to rely strictly on paging rather than a combination of paging and segmentation.
Paged virtual memory
Nearly all current implementations of virtual memory divide a virtual address space into pages, blocks of contiguous virtual memory addresses. Pages on contemporary systems are usually at least 4 kilobytes in size; systems with large virtual address ranges or amounts of real memory generally use larger page sizes.
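A worked example of this division, assuming the common 4 KiB page size: a virtual address decomposes into a page number and an offset within that page.

```python
PAGE_SIZE = 4096                       # 4 KiB, a typical contemporary page size

def split_address(vaddr):
    # High-order bits select the page; low-order bits are the byte offset.
    return vaddr // PAGE_SIZE, vaddr % PAGE_SIZE

page, offset = split_address(0x12345)
assert (page, offset) == (0x12, 0x345)
```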
Page tables
Page tables are used to translate the virtual addresses seen by the application into physical addresses used by the hardware to process instructions; such hardware that handles this specific translation is often known as the memory management unit. Each entry in the page table holds a flag indicating whether the corresponding page is in real memory or not. If it is in real memory, the page table entry will contain the real memory address at which the page is stored. When a reference is made to a page by the hardware, if the page table entry for the page indicates that it is not currently in real memory, the hardware raises a page fault exception, invoking the paging supervisor component of the operating system.
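A minimal sketch of this translation path (a dictionary stands in for the hardware page table, and the entry layout is invented for the example):

```python
class PageFault(Exception):
    """Raised when the referenced page is not in real memory."""

def translate(page_table, vaddr, page_size=4096):
    page, offset = divmod(vaddr, page_size)
    entry = page_table.get(page)
    if entry is None or not entry["present"]:
        raise PageFault(page)          # hardware would invoke the paging supervisor
    return entry["frame"] * page_size + offset

page_table = {0x12: {"present": True, "frame": 0x7}}
assert translate(page_table, 0x12345) == 0x7 * 4096 + 0x345
```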
Systems can have, e.g., one page table for the whole system, separate page tables for each address space or process, separate page tables for each segment; similarly, systems can have, e.g., no segment table, one segment table for the whole system, separate segment tables for each address space or process, separate segment tables for each region in a tree of region tables for each address space or process. If there is only one page table, different applications running at the same time use different parts of a single range of virtual addresses. If there are multiple page or segment tables, there are multiple virtual address spaces and concurrent applications with separate page tables redirect to different real addresses.
Some earlier systems with smaller real memory sizes, such as the SDS 940, used page registers instead of page tables in memory for address translation.
Paging supervisor
This part of the operating system creates and manages page tables and lists of free page frames. In order to ensure that there will be enough free page frames to quickly resolve page faults, the system may periodically steal allocated page frames, using a page replacement algorithm, e.g., a least recently used (LRU) algorithm. Stolen page frames that have been modified are written back to auxiliary storage before they are added to the free queue. On some systems the paging supervisor is also responsible for managing translation registers that are not automatically loaded from page tables.
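Below is a sketch of LRU-based page stealing; the write-back helper is a hypothetical placeholder, and a real supervisor must also coordinate with page tables and translation registers.

```python
from collections import OrderedDict

def write_back_to_auxiliary_storage(frame):
    pass                                    # placeholder for the real write-back

class LRUFrameStealer:
    def __init__(self):
        self.frames = OrderedDict()         # frame id -> dirty flag, LRU first

    def touch(self, frame, dirty=False):
        was_dirty = self.frames.pop(frame, False)
        self.frames[frame] = was_dirty or dirty         # re-insert at the MRU end

    def steal(self):
        frame, dirty = self.frames.popitem(last=False)  # evict least recently used
        if dirty:
            write_back_to_auxiliary_storage(frame)      # modified pages go back first
        return frame
```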
Typically, a page fault that cannot be resolved results in an abnormal termination of the application. However, some systems allow the application to have exception handlers for such errors. The paging supervisor may handle a page fault exception in several different ways, depending on the details (a sketch of this dispatch logic follows the list below):
If the virtual address is invalid, the paging supervisor treats it as an error.
If the page is valid and the page information is not loaded into the MMU, the page information will be stored into one of the page registers.
If the page is uninitialized, a new page frame may be assigned and cleared.
If there is a stolen page frame containing the desired page, that page frame will be reused.
For a fault due to a write attempt into a read-protected page, if it is a copy-on-write page then a free page frame will be assigned and the contents of the old page copied; otherwise it is treated as an error.
If the virtual address is a valid page in a memory-mapped file or a paging file, a free page frame will be assigned and the page read in.
In most cases, there will be an update to the page table, possibly followed by purging the Translation Lookaside Buffer (TLB), and the system restarts the instruction that caused the exception.
If the free page frame queue is empty then the paging supervisor must free a page frame using the same page replacement algorithm for page stealing.
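A compressed sketch of that dispatch (dictionaries stand in for page table entries, and all names are invented; as noted above, a real supervisor must also update the page table and purge the TLB before restarting the instruction):

```python
class SegmentationError(Exception):
    pass

def handle_fault(page_table, page, free_frames, access="read"):
    entry = page_table.get(page)
    if entry is None:
        raise SegmentationError(page)               # invalid virtual address: error
    if access == "write" and entry.get("cow"):
        entry["frame"] = free_frames.pop()          # copy-on-write: assign a private
                                                    # frame (old contents copied here)
    elif entry.get("stolen_frame") is not None:
        entry["frame"] = entry.pop("stolen_frame")  # frame still intact: reuse it
    elif entry.get("backing") is not None:
        entry["frame"] = free_frames.pop()          # read the page in from its file
    else:
        entry["frame"] = free_frames.pop()          # uninitialized: fresh, cleared frame
    entry["present"] = True
    return entry["frame"]                           # caller restarts the instruction
```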
Pinned pages
Operating systems have memory areas that are pinned (never swapped to secondary storage). Other terms used are locked, fixed, or wired pages. For example, interrupt mechanisms rely on an array of pointers to their handlers, such as I/O completion and page fault. If the pages containing these pointers or the code that they invoke were pageable, interrupt-handling would become far more complex and time-consuming, particularly in the case of page fault interruptions. Hence, some part of the page table structures is not pageable.
Some pages may be pinned for short periods of time, others may be pinned for long periods of time, and still others may need to be permanently pinned. For example:
The paging supervisor code and drivers for secondary storage devices on which pages reside must be permanently pinned, as otherwise paging would not even work because the necessary code would not be available.
Timing-dependent components may be pinned to avoid variable paging delays.
Data buffers that are accessed directly by peripheral devices that use direct memory access or I/O channels must reside in pinned pages while the I/O operation is in progress, because such devices and the buses to which they are attached expect to find data buffers located at physical memory addresses; regardless of whether the bus has a memory management unit for I/O, transfers cannot be stopped if a page fault occurs and then restarted when the page fault has been processed. For example, data could come from a measurement sensor, and real-time data lost because of a page fault cannot be recovered.
In IBM's operating systems for System/370 and successor systems, the term is "fixed", and such pages may be long-term fixed, short-term fixed, or unfixed (i.e., pageable). System control structures are often long-term fixed (measured in wall-clock time, i.e., time measured in seconds, rather than fractions of a second), whereas I/O buffers are usually short-term fixed (usually measured in significantly less than wall-clock time, possibly for tens of milliseconds). Indeed, the OS has a special facility for "fast fixing" these short-term fixed data buffers (fixing performed without resorting to a time-consuming Supervisor Call instruction).
Multics used the term "wired". OpenVMS and Windows refer to pages temporarily made nonpageable (as for I/O buffers) as "locked", and simply "nonpageable" for those that are never pageable. The Single UNIX Specification also uses the term "locked" in the specification for mlock(), as do the man pages on many Unix-like systems.
Virtual-real operation
In OS/VS1 and similar OSes, some parts of systems memory are managed in "virtual-real" mode, called "V=R". In this mode every virtual address corresponds to the same real address. This mode is used for interrupt mechanisms, for the paging supervisor and page tables in older systems, and for application programs using non-standard I/O management. For example, IBM's z/OS has 3 modes (virtual-virtual, virtual-real and virtual-fixed).
Thrashing
When paging and page stealing are used, a problem called "thrashing" can occur, in which the computer spends an unsuitably large amount of time transferring pages to and from a backing store, hence slowing down useful work. A task's working set is the minimum set of pages that should be in memory in order for it to make useful progress. Thrashing occurs when there is insufficient memory available to store the working sets of all active programs. Adding real memory is the simplest response, but improving application design, scheduling, and memory usage can help. Another solution is to reduce the number of active tasks on the system. This reduces demand on real memory by swapping out the entire working set of one or more processes.
A thrashing system is often the result of a sudden spike in page demand from a small number of running programs. Swap-token is a lightweight and dynamic thrashing-protection mechanism. The basic idea is to set a token in the system, which is randomly given to a process that has page faults when thrashing happens. The process holding the token is given the privilege to allocate more physical memory pages to build its working set, which is expected to let it finish its execution quickly and release the memory pages to other processes. A time stamp is used to hand the token over from one process to another. The first version of swap-token was implemented in Linux 2.6. The second version, called preempt swap-token, is also in Linux 2.6. In this updated swap-token implementation, a priority counter is set for each process to track the number of swapped-out pages. The token is always given to the process with high priority, i.e. a high number of swapped-out pages. The length of the time stamp is not a constant but is determined by the priority: the more swapped-out pages a process has, the longer its time stamp will be.
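A sketch of the preempt swap-token policy as just described (simplified and invented for illustration, not the actual Linux code): the token preempts to the faulting process with the most swapped-out pages, and its hold time grows with that count.

```python
import time

class SwapToken:
    def __init__(self, base_hold=0.1):           # seconds of hold time per swapped-out page
        self.holder, self.expires, self.base_hold = None, 0.0, base_hold

    def on_page_fault(self, pid, swapout_counts):
        now = time.monotonic()
        holder_count = swapout_counts.get(self.holder, -1)
        if now >= self.expires or swapout_counts[pid] > holder_count:
            self.holder = pid                     # preempt: higher priority wins
            self.expires = now + self.base_hold * swapout_counts[pid]
        return self.holder == pid                 # True: may grow its working set
```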
Segmented virtual memory
Some systems, such as the Burroughs B5500 and the current Unisys MCP systems, use segmentation instead of paging, dividing virtual address spaces into variable-length segments. Using segmentation matches the allocated memory blocks to the logical needs and requests of the programs, rather than the physical view of a computer, although pages themselves are an artificial division in memory. The designers of the B5000 would have found the artificial size of pages to be Procrustean in nature, an analogy they would later invoke for the exact data sizes in the B1000.
In the Burroughs and Unisys systems, each memory segment is described by a master descriptor which is a single absolute descriptor which may be referenced by other relative (copy) descriptors, effecting sharing either within a process or between processes. Descriptors are central to the working of virtual memory in MCP systems. Descriptors contain not only the address of a segment, but the segment length and status in virtual memory indicated by the 'p-bit' or 'presence bit' which indicates if the address is to a segment in main memory or to a secondary-storage block. When a non-resident segment (p-bit is off) is accessed, an interrupt occurs to load the segment from secondary storage at the given address, or if the address itself is 0 then allocate a new block. In the latter case, the length field in the descriptor is used to allocate a segment of that length.
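A toy rendering of a master descriptor with a presence bit (the Memory class is a stand-in invented for the sketch, not the MCP interface):

```python
class Memory:
    def __init__(self):
        self.next_free = 0
    def allocate(self, length):            # hand out a new main-memory block
        addr, self.next_free = self.next_free, self.next_free + length
        return addr
    def load(self, storage_addr, length):  # pretend to copy in from secondary storage
        return self.allocate(length)

class Descriptor:
    def __init__(self, length, storage_addr=0):
        self.length = length
        self.storage_addr = storage_addr   # 0 means "allocate a new block"
        self.present = False               # the p-bit
        self.address = None                # main-memory address once loaded

    def access(self, memory):
        if not self.present:               # p-bit off: presence interrupt
            if self.storage_addr == 0:
                self.address = memory.allocate(self.length)
            else:
                self.address = memory.load(self.storage_addr, self.length)
            self.present = True
        return self.address
```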
A further problem, in addition to thrashing, with a segmented scheme is checkerboarding, in which all free segments become too small to satisfy requests for new segments. The solution is to perform memory compaction to pack all used segments together and create a large free block from which further segments may be allocated. Since there is a single master descriptor for each segment, the new block address only needs to be updated in that single descriptor, since all copies refer to the master descriptor.
Paging is not free from fragmentation, but the fragmentation is internal to pages (internal fragmentation). If a requested block is smaller than a page, some space in the page is wasted; if a block needs slightly more than a whole number of pages, most of the last page is wasted (see the worked example below). The fragmentation thus becomes a problem passed to programmers, who may well distort their programs to match certain page sizes. With segmentation, the fragmentation is external to segments (external fragmentation) and thus a system problem, which was the aim of virtual memory in the first place: to relieve programmers of such memory considerations. In multi-processing systems, optimal operation of the system depends on the mix of independent processes at any time. Hybrid schemes of segmentation and paging may be used.
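A worked example of internal fragmentation, assuming 4 KiB pages: a 10,000-byte block occupies three pages (12,288 bytes), wasting 2,288 bytes in the last page.

```python
import math

page_size, block = 4096, 10_000
pages_needed = math.ceil(block / page_size)   # 3 pages
wasted = pages_needed * page_size - block     # 2288 bytes of internal fragmentation
print(pages_needed, wasted)
```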
The Intel 80286 supports a similar segmentation scheme as an option, but it is rarely used.
Segmentation and paging can be used together by dividing each segment into pages; systems with this memory structure, such as Multics and IBM System/38, are usually paging-predominant, segmentation providing memory protection.
In the Intel 80386 and later IA-32 processors, the segments reside in a 32-bit linear, paged address space. Segments can be moved in and out of that space; pages there can "page" in and out of main memory, providing two levels of virtual memory; few if any operating systems do so, instead using only paging. Early non-hardware-assisted x86 virtualization solutions combined paging and segmentation because x86 paging offers only two protection domains whereas a VMM, guest OS or guest application stack needs three. The difference between paging and segmentation systems is not only about memory division; segmentation is visible to user processes, as part of memory model semantics. Hence, instead of memory that looks like a single large space, it is structured into multiple spaces.
This difference has important consequences; a segment is not a page with variable length or a simple way to lengthen the address space. Segmentation can provide a single-level memory model in which there is no differentiation between process memory and the file system: a process's potential address space consists only of a list of segments (files) mapped into it.
This is not the same as the mechanisms provided by calls such as mmap and Win32's MapViewOfFile, because inter-file pointers do not work when mapping files into semi-arbitrary places. In Multics, a file (or a segment from a multi-segment file) is mapped into a segment in the address space, so files are always mapped at a segment boundary. A file's linkage section can contain pointers for which an attempt to load the pointer into a register or make an indirect reference through it causes a trap. The unresolved pointer contains an indication of the name of the segment to which the pointer refers and an offset within the segment; the handler for the trap maps the segment into the address space, puts the segment number into the pointer, changes the tag field in the pointer so that it no longer causes a trap, and returns to the code where the trap occurred, re-executing the instruction that caused the trap. This eliminates the need for a linker completely and works when different processes map the same file into different places in their private address spaces.
Address space swapping
Some operating systems provide for swapping entire address spaces, in addition to whatever facilities they have for paging and segmentation. When this occurs, the OS writes those pages and segments currently in real memory to swap files. In a swap-in, the OS reads back the data from the swap files but does not automatically read back pages that had been paged out at the time of the swap out operation.
IBM's MVS, from OS/VS2 Release 2 through z/OS, provides for marking an address space as unswappable; doing so does not pin any pages in the address space. This can be done for the duration of a job by entering the name of an eligible main program in the Program Properties Table with an unswappable flag. In addition, privileged code can temporarily make an address space unswappable using a SYSEVENT Supervisor Call instruction (SVC); certain changes in the address space properties require that the OS swap it out and then swap it back in, using SYSEVENT TRANSWAP.
Swapping does not necessarily require memory management hardware, if, for example, multiple jobs are swapped in and out of the same area of storage.
| Technology | Volatile memory | null |
32363 | https://en.wikipedia.org/wiki/Vostok%201 | Vostok 1 | Vostok 1 (, ) was the first spaceflight of the Vostok programme and the first human orbital spaceflight in history. The Vostok 3KA space capsule was launched from Baikonur Cosmodrome on 12 April 1961, with Soviet cosmonaut Yuri Gagarin aboard, making him the first human to reach orbital velocity around the Earth and to complete a full orbit around the Earth.
The orbital spaceflight consisted of a single orbit around Earth, which skimmed the upper atmosphere at its lowest point. The flight took 108 minutes from launch to landing. Gagarin parachuted to the ground separately from his capsule after ejecting at altitude.
Background
The Space Race between the Soviet Union and the United States, the two Cold War superpowers, began just before the Soviet Union launched the world's first artificial satellite, Sputnik 1, in 1957. Both countries wanted to develop spaceflight technology quickly, particularly by launching the first successful human spaceflight. The Soviet Union secretly pursued the Vostok programme in competition with the United States' Project Mercury. Vostok launched several precursor uncrewed missions between May 1960 and March 1961, to test and develop the Vostok rocket family and space capsule. These missions had varied degrees of success, but the final two—Korabl-Sputnik 4 and Korabl-Sputnik 5—were complete successes, allowing the first crewed flight.
Pilot
The Vostok 1 capsule was designed to carry a single cosmonaut. Yuri Gagarin was chosen as the prime pilot of Vostok 1, with Gherman Titov and Grigori Nelyubov as backups. These assignments were formally made on 8 April, four days before the mission, but Gagarin had been a favourite among the cosmonaut candidates for at least several months.
The final decision of who would fly the mission relied heavily on the opinion of the head of cosmonaut training, Nikolai Kamanin. In a diary entry of 5 April, Kamanin wrote that he was still undecided between Gagarin and Titov. "The only thing that keeps me from picking [Titov] is the need to have the stronger person for the one day flight." Kamanin was referring to the second mission, Vostok 2, compared to the relatively short single-orbit mission of Vostok 1. When Gagarin and Titov were informed of the decision during a meeting on 9 April, Gagarin was very happy, and Titov was disappointed. On 10 April, this meeting was reenacted in front of television cameras, so there would be official footage of the event. This included an acceptance speech by Gagarin. As an indication of the level of secrecy involved, one of the other cosmonaut candidates, Alexei Leonov, later recalled that he did not know who was chosen for the mission until after the spaceflight had begun.
Preparations
Unlike later Vostok missions, there were no dedicated tracking ships available to receive signals from the spacecraft. Instead they relied on the network of ground stations, also called Command Points, to communicate with the spacecraft; all of these Command Points were located within the Soviet Union.
Because of weight constraints, there was no backup retrorocket engine. The spacecraft carried 13 days of provisions to allow for survival and natural orbital decay in the event the retrorockets failed. The provisions included food for Gagarin. As the focus was on food that would not form crumbs, Gagarin was provided with liver-meat puree and chocolate sauce, packed in metal toothpaste-style tubes.
The letters "СССР" were hand-painted onto Gagarin's helmet by engineer Gherman Lebedev during transfer to the launch site. As it had been less than a year since U-2 pilot Francis Gary Powers was shot down, Lebedev reasoned that without some country identification, there was a small chance the cosmonaut might be mistaken for a spy on landing.
Automatic control
The entire mission would be controlled by either automatic systems or by ground control. This was because medical staff and spacecraft engineers were unsure how a human might react to weightlessness, and therefore it was decided to lock the pilot's manual controls. In an unusual move, a code to unlock the controls was placed in an onboard envelope, for Gagarin's use in case of emergency. Prior to the flight, Kamanin and others told Gagarin the code (1-2-5) anyway.
11 April 1961
At Baikonur Cosmodrome on the morning of 11 April 1961, the Vostok-K rocket, together with the attached Vostok 3KA space capsule, was transported several kilometers to the launch pad in a horizontal position. Once it arrived at the launch pad, a quick examination of the booster was conducted by technicians to make sure everything was in order. When no visible problems were found, the booster was erected on LC-1. At 10:00 (Moscow Time), Gagarin and Titov were given a final review of the flight plan. They were informed that launch was scheduled to occur the following day, at 09:07 Moscow Time. This time was chosen so that when the capsule started to fly over Africa, which was when the retrorockets would need to fire for reentry, the solar illumination would be ideal for the orientation system's sensors.
At 18:00, once various physiological readings had been taken, the doctors instructed the cosmonauts not to discuss the upcoming missions. That evening Gagarin and Titov relaxed by listening to music, playing pool, and chatting about their childhoods. At 21:50, both men were offered sleeping pills, to ensure a good night's sleep, but they both declined. Physicians had attached sensors to the cosmonauts, to monitor their condition throughout the night, and they believed that both had slept well. Gagarin's biographers Doran and Bizony say that neither Gagarin nor Titov slept that night. Chief Designer Sergei Korolev did not sleep that night, due to anxiety caused by the imminent spaceflight.
Gagarin statement before the mission
Before the mission, Gagarin made a statement to the press, addressed to the Soviet Union and to the whole world:
In his autobiography, Gagarin recalled that, looking at the spacecraft before launch, he was "seized with an unprecedented rise of all mental strength <...> some extraordinary words were born that I had never used before in everyday speech". This was not true; according to historian Asif Siddiqi, Gagarin "was essentially forced to utter a stream of banalities prepared by anonymous speechwriters", taped much earlier in Moscow.
Flight
At 05:30 Moscow time, on the morning of 12 April 1961, both Gagarin and his backup Titov were woken. They were given breakfast, assisted into their spacesuits, and then transported to the launch pad. Gagarin entered the Vostok 1 spacecraft, and at 07:10 local time (04:10 UTC), the radio communication system was turned on. Once Gagarin was in the spacecraft, his picture appeared on television screens in the launch control room from an onboard camera. Launch would not occur for another two hours, and during that time Gagarin chatted with the mission's main CapCom, as well as Chief Designer Sergei Korolev, Nikolai Kamanin, and a few others, periodically joking and singing songs. Following a series of tests and checks, about forty minutes after Gagarin entered the spacecraft, its hatch was closed. Gagarin, however, reported that the hatch was not sealed properly, and technicians spent about 15 minutes removing all the screws and sealing the hatch again. According to a 2014 obituary, Vostok's chief designer, Oleg Ivanovsky, personally helped rebolt the hatch. There is some disagreement over whether the hatch was in fact not sealed correctly, as a more recent account stated the indication was false.
During this time Gagarin requested some music to be played over the radio. Korolev was reportedly suffering from chest pains and anxiety, as up to this point the Soviet space launch success rate stood at 50% (12 out of 24 launches had failed). Two Vostoks had failed to reach orbit due to launch vehicle malfunctions, and another two had malfunctioned in orbit. Korolev was given a pill to calm him down. Gagarin, on the other hand, was described as calm; about half an hour before launch his pulse was recorded at 64 beats per minute.
Launch
06:07 UTC Launch occurred from the Baikonur Cosmodrome Site No.1. Korolev radioed, "Preliminary stage..... intermediate..... main..... lift off! We wish you a good flight. Everything is all right." Gagarin replied, "Let's go! (Poyekhali!)."
06:09 (T+ 119 s) The four strap-on boosters of the Vostok rocket used up the last of their propellant and dropped away from the core vehicle.
06:10 (T+ 156 s) The payload shroud covering Vostok 1 was released, uncovering a window at Gagarin's feet, with an optical orientation device (lit. "look" or "glance").
06:12 (T+ 300 s) The rocket core stage used up its propellant and fell away from the capsule and final rocket stage. The final rocket stage ignited.
06:13 Gagarin reported, "...the flight is continuing well. I can see the Earth. The visibility is good.... I almost see everything. There's a certain amount of space under cumulus cloud cover. I continue the flight, everything is good."
06:14 Vostok 1 passed over central Russia. Gagarin reported, "Everything is working very well. All systems are working. Let's keep going!"
06:15 Three minutes into the burn of the final rocket stage, Gagarin radioed, "I can't hear you very well. I feel fine. I'm in good spirits. I'm continuing the flight..." Vostok 1 started to move out of radio range of the Baikonur ground station.
06:17 The rocket final stage shut down and Vostok 1 reached orbit. Ten seconds later the rocket separated from the capsule.
Time in orbit
06:18 UTC (T+ 676 s) Gagarin reported, "The craft is operating normally. I can see Earth in the view port. Everything is proceeding as planned". Vostok 1 moved on over Siberia as it passed over the Soviet Union.
06:21 Vostok 1 passed over the Kamchatka Peninsula and out over the North Pacific Ocean. Gagarin radioed, "...the lights are on on the descent mode monitor. I'm feeling fine, and I'm in good spirits. Cockpit parameters: pressure 1; humidity 65; temperature 20; pressure in the compartment 1; first automatic 155; second automatic 155; pressure in the retro-rocket system 320 atmospheres...."
06:25 As Vostok 1 began its diagonal crossing of the Pacific Ocean from the Kamchatka Peninsula to the southern tip of South America, Gagarin requested information about his orbital parameters: "What can you tell me about the flight? What can you tell me?". The ground station at Khabarovsk did not have his orbital parameters yet, and reported back, "There are no instructions from No. 20 [code name for Korolev], and the flight is proceeding normally." (Ground control did not know until 25 minutes after launch that a stable orbit had been achieved.)
06:31 Gagarin transmitted to the Khabarovsk ground station, "I feel splendid, very well, very well, very well. Give me some results on the flight!". At this time, Vostok 1 was nearing the VHF radio horizon for Khabarovsk, and they responded, "Repeat. I can't hear you very well". Gagarin transmitted again, "I feel very good. Give me your data on the flight!" Vostok 1 then passed out of VHF range of the Khabarovsk ground station.
06:37 Vostok 1 continued on its journey as the sun set over the North Pacific. Gagarin crossed into night, northwest of the Hawaiian Islands. Out of VHF range with ground stations, communications continued via HF radio.
06:46 Khabarovsk ground station sent the message "KK" via telegraph (on HF radio to Vostok 1). This was a code meaning, "Report the monitoring of commands," a request for Gagarin to report when the spacecraft automated descent system had received its instructions from ground control.
06:48 Vostok 1 crossed the equator at about 170° West in a southeast direction, and began crossing the South Pacific. Gagarin transmitted over HF radio, "I am transmitting the regular report message: 9 hours 48 minutes (Moscow Time), the flight is proceeding successfully. Spusk-1 is operating normally. The mobile index of the descent mode monitor is moving. Pressure in the cockpit is 1; humidity 65; temperature 20; pressure in the compartment 1.2 ... Manual 150; First automatic 155; second automatic 155; retro rocket system tanks 320 atmospheres. I feel fine...."
06:49 Gagarin reported he was on the night side of the Earth.
06:51 Gagarin reported the sun-seeking attitude control system was switched on; this oriented Vostok 1 for retrofire. The automatic/solar system was backed up by a manual/visual system; either one could operate the two redundant cold nitrogen gas thruster systems, each with of gas.
06:53 The Khabarovsk ground station sent Gagarin via HF radio, "By order of No. 33 (General Nikolai Kamanin), the transmitters have been switched on, and we are transmitting this: the flight is proceeding as planned and the orbit is as calculated." Vostok 1 was now known to be in a stable orbit; Gagarin acknowledged.
06:57 Vostok 1 was over the South Pacific between New Zealand and Chile as Gagarin radioed, "...I'm continuing the flight, and I'm over America. I transmitted the telegraph signal 'ON'."
07:00 Vostok 1 crossed the Strait of Magellan at the tip of South America. News of the Vostok 1 mission was broadcast on Radio Moscow.
07:04 Gagarin sent another spacecraft status message, similar to the one at 06:48. This was not received by ground stations.
07:09 Gagarin sent another spacecraft status message, also not received by ground stations.
07:10 Vostok 1 passed over the South Atlantic, into daylight again. At this point, retrofire was 15 minutes away.
07:13 Gagarin sent a fourth spacecraft status message; Moscow received this partial message: "I read you well. The flight is going...."
07:18 Gagarin sent another spacecraft status message, not received by ground stations.
07:23 Gagarin sent another spacecraft status message, not received by ground stations.
The automatic orientation system brought Vostok 1 into alignment for retrofire about 1 hour into the flight.
Reentry and landing
At 07:25 UTC, the spacecraft's automatic systems brought it into the required attitude (orientation) for the retrorocket firing, and shortly afterwards, the liquid-fueled engine fired for about 42 seconds over the west coast of Africa, near Angola, about uprange of the landing point. The orbit's perigee and apogee had been selected to cause reentry due to orbital decay within 13 days (the limit of the life support system function) in the event of retrorocket malfunction. However, the actual orbit differed from the planned one and would not have allowed descent by orbital decay until 20 days had passed.
Ten seconds after retrofire, commands were sent to separate the Vostok service module from the reentry module (code name "little ball"), but the equipment module unexpectedly remained attached to the reentry module by a bundle of wires. At around 07:35 UTC, the two parts of the spacecraft began reentry and went through strong gyrations as Vostok 1 neared Egypt. At this point the wires broke, the two modules separated, and the descent module settled into the proper reentry attitude. Gagarin telegraphed "Everything is OK" despite continuing gyrations; he later reported that he did not want to "make noise" as he had (correctly) reasoned that the gyrations did not endanger the mission (and were apparently caused by the spherical shape of the reentry module). As Gagarin continued his descent, he remained conscious as he experienced about 8 g during reentry. (Gagarin's own report states "over 10 g".)
At 07:55 UTC, when Vostok 1 was still from the ground, the hatch of the spacecraft was released, and two seconds later Gagarin was ejected. At altitude, the main parachute was deployed from the Vostok spacecraft.
Gagarin's parachute opened almost immediately, and about ten minutes later, at 08:05 UTC, Gagarin landed. Both he and the spacecraft landed via parachute southwest of Engels, in the Saratov region.
A kolkhoz woman, Annihayat Nurskanova, and her granddaughter Rita observed the strange scene of a figure in a bright orange suit with a large white helmet landing near them by parachute. Gagarin later recalled, "When they saw me in my space suit and the parachute dragging alongside as I walked, they started to back away in fear. I told them, don't be afraid, I am a Soviet citizen like you, who has descended from space and I must find a telephone to call Moscow!"
Reactions and legacy
Soviet reaction
Gagarin's flight was announced while Gagarin was still in orbit, by Yuri Levitan, the leading Soviet radio personality since the 1930s. Although news of Soviet rocket launches would normally be aired only after the fact, Sergei Korolev wrote a note to the Party Central Committee to convince them that the announcement should be made as early as possible:
"We consider it advisable to publish the first TASS report immediately after the satellite-spacecraft enters orbit, for the following reasons:
(a) if a rescue becomes necessary, it will facilitate rapid organization of a rescue;
(b) it precludes any foreign government declaring that the cosmonaut is a military scout."
The flight was celebrated as a great triumph of Soviet science and technology, demonstrating the superiority of the socialist system over capitalism. Moscow and other cities in the USSR held mass demonstrations, the scale of which was comparable to World War II Victory Parades. Gagarin was awarded the title of Hero of the Soviet Union, the nation's highest honour. He also became an international celebrity, receiving numerous awards and honours.
April 12 was declared Cosmonautics Day in the USSR, and is celebrated today in Russia as one of the official "Commemorative Dates of Russia." In 2011, it was declared the International Day of Human Space Flight by the United Nations.
Gagarin's informal reply Poyekhali! ("Let's go!") became a historical phrase used to refer to the arrival of the Space Age in human history. Later it was included in the refrain of a Soviet song written by Alexandra Pakhmutova and Nikolai Dobronravov (He said "Let's go!" He waved his hand) which was dedicated to the memory of Gagarin.
American reaction
Officially, the U.S. congratulated the Soviet Union on its accomplishments. Writing for The New York Times shortly after the flight, however, journalist Arthur Krock described mixed feelings in the United States due to fears of the spaceflight's potential military implications for the Cold War, and the Detroit Free Press wrote that "the people of Washington, London, Paris and all points between might have been dancing in the streets" if it were not for "doubts and suspicions" about Soviet intentions. Other US writers were concerned that the spaceflight had gained a propaganda victory on behalf of communism. President John F. Kennedy was quoted as saying that it would be "some time" before the US could match the Soviet launch vehicle technology, and that "the news will be worse before it's better." Kennedy also sent congratulations to the Soviet Union for their "outstanding technical achievement." Opinion pages of many US newspapers urged renewed efforts to overtake the Soviet scientific accomplishments.
Adlai Stevenson, then the US ambassador to the United Nations, was quoted as saying, "Now that the Soviet scientists have put a man into space and brought him back alive, I hope they will also help to bring the United Nations back alive," and on a more serious note urged international agreements covering the use of space (which did not occur until the Outer Space Treaty of 1967).
Other world reactions
Prime Minister Jawaharlal Nehru of India praised the Soviet Union for "a great victory of man over the forces of nature" and urged that it be "considered as a victory for peace." The Economist voiced worries that orbital platforms might be used for surprise nuclear attacks. The Svenska Dagbladet in Sweden chided "free countries" for "splitting up and frittering away" their resources, while West Germany's Die Welt argued that America had the resources to have sent a man into space first but was beaten by Soviet purposefulness. Japan's Yomiuri Shimbun urged "that both the United States and the Soviet Union should use their new knowledge and techniques for the good of mankind," and Egypt's Akhbar El Yom likewise expressed hopes that the cold war would "turn into a peaceful race in infinite space" and turn away from armed conflicts such as the Laotian Civil War.
Charles de Gaulle claimed that "the success of Soviet scientists and astronauts does honor to Europe and humanity". Sukarno, the President of the Republic of Indonesia, said that "that delightful event opens up new prospects for human thought and activity, which will be put at the service of the progress and well-being of people, international peace as a whole." Zhou Enlai, head of the State Council of the People's Republic of China, and Kim Il Sung, Chairman of the Cabinet of Ministers of the DPRK, described the successes of Soviet science as "a brilliant symbol of the triumph of socialism and communism." The chairman of the Council of Ministers of Cuba, Fidel Castro, sent a telegram to Khrushchev as follows: "Let this victory of his become the victory of all mankind, which men and women in all corners of the earth perceived as the greatest hope for the destinies of freedom, prosperity and peace."
The President of the Chinese Academy of Sciences, Guo Moruo, wrote a poem 《歌颂东方号》 ("Hymn to the Vostok Spacecraft"), which was published in Pravda. Charlie Chaplin and Gianni Rodari were among those who sent congratulatory telegrams to Komsomolskaya Pravda.
World records
The FAI officially recognised three space records claimed for Gagarin: duration of orbital flight, 108 minutes; greatest altitude in earth orbital flight, 327 kilometres; and greatest mass lifted into earth orbit, 4,725 kilograms.
The FAI rules in 1961 required that a pilot must land with the spacecraft to be considered an official spaceflight for the FAI record books. Although some contemporary Soviet sources stated that Gagarin had parachuted separately to the ground, the Soviet Union officially insisted that he had landed with the Vostok; the government forced the cosmonaut to lie in press conferences, and the FAI certified the flight. The Soviet Union did not admit until 1971 that Gagarin had ejected and landed separately from the Vostok descent module. Gagarin's spaceflight records were nonetheless certified and reaffirmed by the FAI, which revised its rules, and acknowledged that the crucial steps of the safe launch, orbit, and return of the pilot had been accomplished. Gagarin is internationally recognised as the first human in space and first to orbit the Earth.
Legacy
Four decades after the flight, historian Asif S. Siddiqi wrote that Vostok 1
The landing site is now a monument park. The central feature in the park is a tall monument that consists of a silver metallic rocketship rising on a curved metallic column of flame, from a wedge-shaped, white stone base. In front of this is a 3-metre (9.8-foot) tall white stone statue of Yuri Gagarin, wearing a spacesuit, with one arm raised in greeting and the other holding a space helmet.
The Vostok 1 re-entry capsule belongs to the S. P. Korolev RSC Energia Museum in Korolev City. In 2018 it was temporarily loaned to the Space Pavilion at the VDNKh in Moscow.
In 2011, documentary film maker Christopher Riley partnered with European Space Agency astronaut Paolo Nespoli to record a new film of what Gagarin would have seen of the Earth from his spaceship, by matching historical audio recordings to video from the International Space Station following the ground path taken by Vostok 1. The resulting film, First Orbit, was released online to celebrate the 50th anniversary of human spaceflight.
| Technology | Crewed vehicles | null |
32370 | https://en.wikipedia.org/wiki/Vector%20space | Vector space | In mathematics and physics, a vector space (also called a linear space) is a set whose elements, often called vectors, can be added together and multiplied ("scaled") by numbers called scalars. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms. Real vector spaces and complex vector spaces are kinds of vector spaces based on different kinds of scalars: real numbers and complex numbers. Scalars can also be, more generally, elements of any field.
Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities (such as forces and velocity) that have not only a magnitude, but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrices, which allows computing in vector spaces. This provides a concise and synthetic way for manipulating and studying systems of linear equations.
Vector spaces are characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. This means that, for two vector spaces over a given field and with the same dimension, the properties that depend only on the vector-space structure are exactly the same (technically the vector spaces are isomorphic). A vector space is finite-dimensional if its dimension is a natural number. Otherwise, it is infinite-dimensional, and its dimension is an infinite cardinal. Finite-dimensional vector spaces occur naturally in geometry and related areas. Infinite-dimensional vector spaces occur in many areas of mathematics. For example, polynomial rings are countably infinite-dimensional vector spaces, and many function spaces have the cardinality of the continuum as a dimension.
Many vector spaces that are considered in mathematics are also endowed with other structures. This is the case of algebras, which include field extensions, polynomial rings, associative algebras and Lie algebras. This is also the case of topological vector spaces, which include function spaces, inner product spaces, normed spaces, Hilbert spaces and Banach spaces.
Definition and basic properties
In this article, vectors are represented in boldface to distinguish them from scalars.
A vector space over a field $F$ is a non-empty set $V$ together with a binary operation and a binary function that satisfy the eight axioms listed below. In this context, the elements of $V$ are commonly called vectors, and the elements of $F$ are called scalars.
The binary operation, called vector addition or simply addition, assigns to any two vectors $\mathbf{v}$ and $\mathbf{w}$ in $V$ a third vector in $V$, which is commonly written as $\mathbf{v} + \mathbf{w}$ and called the sum of these two vectors.
The binary function, called scalar multiplication, assigns to any scalar $a$ in $F$ and any vector $\mathbf{v}$ in $V$ another vector in $V$, which is denoted $a\mathbf{v}$.
To have a vector space, the eight following axioms must be satisfied for every $\mathbf{u}$, $\mathbf{v}$ and $\mathbf{w}$ in $V$, and $a$ and $b$ in $F$.
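In the standard formulation, the eight axioms read:
$$\begin{aligned}
&\mathbf{u} + (\mathbf{v} + \mathbf{w}) = (\mathbf{u} + \mathbf{v}) + \mathbf{w} &&\text{(associativity of vector addition)}\\
&\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} &&\text{(commutativity of vector addition)}\\
&\mathbf{v} + \mathbf{0} = \mathbf{v} \text{ for some } \mathbf{0} \in V &&\text{(identity element of vector addition)}\\
&\mathbf{v} + (-\mathbf{v}) = \mathbf{0} \text{ for some } -\mathbf{v} \in V &&\text{(inverse elements of vector addition)}\\
&a(b\mathbf{v}) = (ab)\mathbf{v} &&\text{(compatibility of scalar and field multiplication)}\\
&1\mathbf{v} = \mathbf{v} &&\text{(identity element of scalar multiplication)}\\
&a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v} &&\text{(distributivity over vector addition)}\\
&(a + b)\mathbf{v} = a\mathbf{v} + b\mathbf{v} &&\text{(distributivity over field addition)}
\end{aligned}$$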
When the scalar field is the real numbers, the vector space is called a real vector space, and when the scalar field is the complex numbers, the vector space is called a complex vector space. These two cases are the most common ones, but vector spaces with scalars in an arbitrary field are also commonly considered. Such a vector space is called an $F$-vector space or a vector space over $F$.
An equivalent definition of a vector space can be given, which is much more concise but less elementary: the first four axioms (related to vector addition) say that a vector space is an abelian group under addition, and the four remaining axioms (related to the scalar multiplication) say that this operation defines a ring homomorphism from the field into the endomorphism ring of this group.
Subtraction of two vectors can be defined as $\mathbf{v} - \mathbf{w} = \mathbf{v} + (-\mathbf{w})$.
Direct consequences of the axioms include that, for every $s \in F$ and $\mathbf{v} \in V$, one has $0\mathbf{v} = \mathbf{0}$, $s\mathbf{0} = \mathbf{0}$, $(-1)\mathbf{v} = -\mathbf{v}$, and that $s\mathbf{v} = \mathbf{0}$ implies $s = 0$ or $\mathbf{v} = \mathbf{0}$.
Even more concisely, a vector space is a module over a field.
Bases, vector coordinates, and subspaces
Linear combination
Given a set $G$ of elements of an $F$-vector space $V$, a linear combination of elements of $G$ is an element of $V$ of the form
$$a_1 \mathbf{g}_1 + a_2 \mathbf{g}_2 + \cdots + a_k \mathbf{g}_k,$$
where $a_1, \ldots, a_k \in F$ and $\mathbf{g}_1, \ldots, \mathbf{g}_k \in G$. The scalars $a_1, \ldots, a_k$ are called the coefficients of the linear combination.
Linear independence
The elements of a subset $G$ of an $F$-vector space $V$ are said to be linearly independent if no element of $G$ can be written as a linear combination of the other elements of $G$. Equivalently, they are linearly independent if two linear combinations of elements of $G$ define the same element of $V$ if and only if they have the same coefficients. Also equivalently, they are linearly independent if a linear combination results in the zero vector if and only if all its coefficients are zero.
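As a small illustration of the last criterion: in $\mathbb{R}^2$, the vectors $(1, 0)$ and $(0, 1)$ are linearly independent, since $a(1, 0) + b(0, 1) = (a, b)$ equals the zero vector only when $a = b = 0$; by contrast, $(1, 2)$ and $(2, 4)$ are linearly dependent, because $2(1, 2) - (2, 4) = (0, 0)$ is a vanishing linear combination with nonzero coefficients.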
Linear subspace
A linear subspace or vector subspace $W$ of a vector space $V$ is a non-empty subset of $V$ that is closed under vector addition and scalar multiplication; that is, the sum of two elements of $W$ and the product of an element of $W$ by a scalar belong to $W$. This implies that every linear combination of elements of $W$ belongs to $W$. A linear subspace is a vector space for the induced addition and scalar multiplication; this means that the closure property implies that the axioms of a vector space are satisfied. The closure property also implies that every intersection of linear subspaces is a linear subspace.
Linear span
Given a subset $G$ of a vector space $V$, the linear span or simply the span of $G$ is the smallest linear subspace of $V$ that contains $G$, in the sense that it is the intersection of all linear subspaces that contain $G$. The span of $G$ is also the set of all linear combinations of elements of $G$. If $W$ is the span of $G$, one says that $G$ spans or generates $W$, and that $G$ is a spanning set or a generating set of $W$.
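For instance, the span of $\{(1, 0), (1, 1)\}$ in $\mathbb{R}^2$ is all of $\mathbb{R}^2$, since any pair $(x, y)$ can be written as the linear combination $(x - y)(1, 0) + y(1, 1)$; the span of the single vector $(1, 0)$ is only the horizontal axis.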
Basis and dimension
A subset $B$ of a vector space $V$ is a basis if its elements are linearly independent and span the vector space. Every vector space has at least one basis, and in general many. Moreover, all bases of a vector space have the same cardinality, which is called the dimension of the vector space (see Dimension theorem for vector spaces). This is a fundamental property of vector spaces, which is detailed in the remainder of the section.
Bases are a fundamental tool for the study of vector spaces, especially when the dimension is finite. In the infinite-dimensional case, the existence of infinite bases, often called Hamel bases, depends on the axiom of choice. It follows that, in general, no basis can be explicitly described. For example, the real numbers form an infinite-dimensional vector space over the rational numbers, for which no specific basis is known.
Consider a basis $(\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_n)$ of a vector space $V$ of dimension $n$ over a field $F$. The definition of a basis implies that every $\mathbf{v} \in V$ may be written
$$\mathbf{v} = a_1 \mathbf{b}_1 + a_2 \mathbf{b}_2 + \cdots + a_n \mathbf{b}_n,$$
with $a_1, \ldots, a_n$ in $F$, and that this decomposition is unique. The scalars $a_1, \ldots, a_n$ are called the coordinates of $\mathbf{v}$ on the basis. They are also said to be the coefficients of the decomposition of $\mathbf{v}$ on the basis. One also says that the $n$-tuple of the coordinates is the coordinate vector of $\mathbf{v}$ on the basis, since the set $F^n$ of the $n$-tuples of elements of $F$ is a vector space for componentwise addition and scalar multiplication, whose dimension is $n$.
The one-to-one correspondence between vectors and their coordinate vectors maps vector addition to vector addition and scalar multiplication to scalar multiplication. It is thus a vector space isomorphism, which allows translating reasonings and computations on vectors into reasonings and computations on their coordinates.
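A concrete illustration: take the basis $\mathbf{b}_1 = (1, 1)$, $\mathbf{b}_2 = (1, -1)$ of $\mathbb{R}^2$. The vector $\mathbf{v} = (3, 1)$ has coordinate vector $(2, 1)$ on this basis, since $2(1, 1) + 1(1, -1) = (3, 1)$, and no other pair of coefficients produces $\mathbf{v}$.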
History
Vector spaces stem from affine geometry, via the introduction of coordinates in the plane or three-dimensional space. Around 1636, French mathematicians René Descartes and Pierre de Fermat founded analytic geometry by identifying solutions to an equation of two variables with points on a plane curve. To achieve geometric solutions without using coordinates, Bolzano introduced, in 1804, certain operations on points, lines, and planes, which are predecessors of vectors. Möbius introduced the notion of barycentric coordinates. Bellavitis introduced an equivalence relation on directed line segments that share the same length and direction, which he called equipollence. A Euclidean vector is then an equivalence class of that relation.
Vectors were reconsidered with the presentation of complex numbers by Argand and Hamilton and the inception of quaternions by the latter. They are elements in R2 and R4; treating them using linear combinations goes back to Laguerre in 1867, who also defined systems of linear equations.
In 1857, Cayley introduced the matrix notation which allows for harmonization and simplification of linear maps. Around the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with operations. In his work, the concepts of linear independence and dimension, as well as scalar products are present. Grassmann's 1844 work exceeds the framework of vector spaces as well since his considering multiplication led him to what are today called algebras. Italian mathematician Peano was the first to give the modern definition of vector spaces and linear maps in 1888, although he called them "linear systems". Peano's axiomatization allowed for vector spaces with infinite dimension, but Peano did not develop that theory further. In 1897, Salvatore Pincherle adopted Peano's axioms and made initial inroads into the theory of infinite-dimensional vector spaces.
An important development of vector spaces is due to the construction of function spaces by Henri Lebesgue. This was later formalized by Banach and Hilbert, around 1920. At that time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces.
Examples
Arrows in the plane
The first example of a vector space consists of arrows in a fixed plane, starting at one fixed point. This is used in physics to describe forces or velocities. Given any two such arrows, $\mathbf{v}$ and $\mathbf{w}$, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows, and is denoted $\mathbf{v} + \mathbf{w}$. In the special case of two arrows on the same line, their sum is the arrow on this line whose length is the sum or the difference of the lengths, depending on whether the arrows have the same direction. Another operation that can be done with arrows is scaling: given any positive real number $a$, the arrow that has the same direction as $\mathbf{v}$, but is dilated or shrunk by multiplying its length by $a$, is called multiplication of $\mathbf{v}$ by $a$. It is denoted $a\mathbf{v}$. When $a$ is negative, $a\mathbf{v}$ is defined as the arrow pointing in the opposite direction instead.
The following shows a few examples: if $a = 2$, the resulting vector $a\mathbf{w}$ has the same direction as $\mathbf{w}$, but is stretched to the double length of $\mathbf{w}$ (the second image). Equivalently, $2\mathbf{w}$ is the sum $\mathbf{w} + \mathbf{w}$. Moreover, $(-1)\mathbf{v} = -\mathbf{v}$ has the opposite direction and the same length as $\mathbf{v}$ (blue vector pointing down in the second image).
Ordered pairs of numbers
A second key example of a vector space is provided by pairs of real numbers $x$ and $y$. The order of the components $x$ and $y$ is significant, so such a pair is also called an ordered pair. Such a pair is written as $(x, y)$. The sum of two such pairs and the multiplication of a pair by a number are defined as follows:
$$(x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2), \qquad a(x, y) = (ax, ay).$$
The first example above reduces to this example if an arrow is represented by a pair of Cartesian coordinates of its endpoint.
Coordinate space
The simplest example of a vector space over a field $F$ is the field $F$ itself with its addition viewed as vector addition and its multiplication viewed as scalar multiplication. More generally, all $n$-tuples (sequences of length $n$)
$$(a_1, a_2, \ldots, a_n)$$
of elements $a_i$ of $F$ form a vector space that is usually denoted $F^n$ and called a coordinate space.
The case $n = 1$ is the above-mentioned simplest example, in which the field $F$ is also regarded as a vector space over itself. The case $F = \mathbb{R}$ and $n = 2$ (so $\mathbb{R}^2$) reduces to the previous example.
Complex numbers and other field extensions
The set of complex numbers $\mathbb{C}$, numbers that can be written in the form $x + iy$ for real numbers $x$ and $y$ where $i$ is the imaginary unit, form a vector space over the reals with the usual addition and multiplication: $(x + iy) + (a + ib) = (x + a) + i(y + b)$ and $c(x + iy) = (cx) + i(cy)$ for real numbers $x$, $y$, $a$, $b$ and $c$. The various axioms of a vector space follow from the fact that the same rules hold for complex number arithmetic. The example of complex numbers is essentially the same as (that is, it is isomorphic to) the vector space of ordered pairs of real numbers mentioned above: if we think of the complex number $x + iy$ as representing the ordered pair $(x, y)$ in the complex plane then we see that the rules for addition and scalar multiplication correspond exactly to those in the earlier example.
More generally, field extensions provide another class of examples of vector spaces, particularly in algebra and algebraic number theory: a field $F$ containing a smaller field $E$ is an $E$-vector space, by the given multiplication and addition operations of $F$. For example, the complex numbers are a vector space over $\mathbb{R}$, and a field extension such as $\mathbb{Q}(\sqrt{2})$ is a vector space over $\mathbb{Q}$.
Function spaces
Functions from any fixed set $\Omega$ to a field $F$ also form vector spaces, by performing addition and scalar multiplication pointwise. That is, the sum of two functions $f$ and $g$ is the function $f + g$ given by
$$(f + g)(w) = f(w) + g(w),$$
and similarly for multiplication: $(a \cdot f)(w) = a \cdot f(w)$. Such function spaces occur in many geometric situations, when $\Omega$ is the real line or an interval, or other subsets of $\mathbb{R}$. Many notions in topology and analysis, such as continuity, integrability or differentiability, are well-behaved with respect to linearity: sums and scalar multiples of functions possessing such a property still have that property. Therefore, the sets of such functions are vector spaces, whose study belongs to functional analysis.
Linear equations
Systems of homogeneous linear equations are closely tied to vector spaces. For example, the solutions of
$$a + 3b + c = 0, \qquad 4a + 2b + 2c = 0$$
are given by triples with arbitrary $a$, $b = a/2$, and $c = -5a/2$. They form a vector space: sums and scalar multiples of such triples still satisfy the same ratios of the three variables; thus they are solutions, too. Matrices can be used to condense multiple linear equations as above into one vector equation, namely
$$A\mathbf{v} = \mathbf{0},$$
where $A$ is the matrix containing the coefficients of the given equations, $\mathbf{v}$ is the vector $(a, b, c)$, $A\mathbf{v}$ denotes the matrix product, and $\mathbf{0} = (0, 0)$ is the zero vector. In a similar vein, the solutions of homogeneous linear differential equations form vector spaces. For example,
$$f''(x) + 2f'(x) + f(x) = 0$$
yields $f(x) = a e^{-x} + bx e^{-x}$, where $a$ and $b$ are arbitrary constants, and $e^x$ is the natural exponential function.
Linear maps and matrices
The relation of two vector spaces can be expressed by a linear map or linear transformation. These are functions that reflect the vector space structure, that is, they preserve sums and scalar multiplication:
$$f(\mathbf{v} + \mathbf{w}) = f(\mathbf{v}) + f(\mathbf{w}) \quad \text{and} \quad f(a \cdot \mathbf{v}) = a \cdot f(\mathbf{v})$$
for all $\mathbf{v}$ and $\mathbf{w}$ in $V$, and all $a$ in $F$.
An isomorphism is a linear map $f : V \to W$ such that there exists an inverse map $g : W \to V$, which is a map such that the two possible compositions $f \circ g : W \to W$ and $g \circ f : V \to V$ are identity maps. Equivalently, $f$ is both one-to-one (injective) and onto (surjective). If there exists an isomorphism between $V$ and $W$, the two spaces are said to be isomorphic; they are then essentially identical as vector spaces, since all identities holding in $V$ are, via $f$, transported to similar ones in $W$, and vice versa via $g$.
For example, the arrows in the plane and the ordered pairs of numbers vector spaces in the introduction above are isomorphic: a planar arrow $\mathbf{v}$ departing from the origin of some (fixed) coordinate system can be expressed as an ordered pair by considering the $x$- and $y$-components of the arrow, as shown in the image at the right. Conversely, given a pair $(x, y)$, the arrow going by $x$ to the right (or to the left, if $x$ is negative), and $y$ up (down, if $y$ is negative) turns back the arrow $\mathbf{v}$.
Linear maps $V \to W$ between two vector spaces form a vector space $\mathrm{Hom}_F(V, W)$, also denoted $L(V, W)$ or $\mathscr{L}(V, W)$. The space of linear maps from $V$ to $F$ is called the dual vector space, denoted $V^*$. Via the injective natural map $V \to V^{**}$, any vector space can be embedded into its bidual; the map is an isomorphism if and only if the space is finite-dimensional.
Once a basis of $V$ is chosen, linear maps $f : V \to W$ are completely determined by specifying the images of the basis vectors, because any element of $V$ is expressed uniquely as a linear combination of them. If $\dim V = \dim W$, a 1-to-1 correspondence between fixed bases of $V$ and $W$ gives rise to a linear map that maps any basis element of $V$ to the corresponding basis element of $W$. It is an isomorphism, by its very definition. Therefore, two vector spaces over a given field are isomorphic if their dimensions agree, and vice versa. Another way to express this is that any vector space over a given field is completely classified (up to isomorphism) by its dimension, a single number. In particular, any $n$-dimensional $F$-vector space $V$ is isomorphic to $F^n$. However, there is no "canonical" or preferred isomorphism; an isomorphism $F^n \to V$ is equivalent to the choice of a basis of $V$, by mapping the standard basis of $F^n$ to $V$.
Matrices
Matrices are a useful notion to encode linear maps. They are written as a rectangular array of scalars, as in the image at the right. Any $m$-by-$n$ matrix $A$ gives rise to a linear map from $F^n$ to $F^m$, by the following:
$$\mathbf{x} = (x_1, x_2, \ldots, x_n) \mapsto \left( \sum_{j=1}^{n} a_{1j} x_j, \ \sum_{j=1}^{n} a_{2j} x_j, \ \ldots, \ \sum_{j=1}^{n} a_{mj} x_j \right),$$
where $\sum$ denotes summation; or by using the matrix multiplication of the matrix $A$ with the coordinate vector $\mathbf{x}$:
$$\mathbf{x} \mapsto A\mathbf{x}.$$
Moreover, after choosing bases of $V$ and $W$, any linear map $f : V \to W$ is uniquely represented by a matrix via this assignment.
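As a worked illustration (the matrix here is chosen only for the example), the 2-by-2 matrix $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ defines the linear map $\mathbb{R}^2 \to \mathbb{R}^2$ sending $(x_1, x_2)$ to $(x_1 + 2x_2,\ 3x_1 + 4x_2)$; for instance, $(1, 1) \mapsto (3, 7)$.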
The determinant $\det(A)$ of a square matrix $A$ is a scalar that tells whether the associated map is an isomorphism or not: for this it is sufficient and necessary that the determinant be nonzero. The linear transformation of $\mathbb{R}^n$ corresponding to a real $n$-by-$n$ matrix is orientation-preserving if and only if its determinant is positive.
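In the 2-by-2 case, for example, $\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$, so the associated map of $\mathbb{R}^2$ is invertible exactly when $ad \neq bc$.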
Eigenvalues and eigenvectors
Endomorphisms, linear maps $f : V \to V$, are particularly important since in this case vectors $\mathbf{v}$ can be compared with their image under $f$, $f(\mathbf{v})$. Any nonzero vector $\mathbf{v}$ satisfying $\lambda \mathbf{v} = f(\mathbf{v})$, where $\lambda$ is a scalar, is called an eigenvector of $f$ with eigenvalue $\lambda$. Equivalently, $\mathbf{v}$ is an element of the kernel of the difference $f - \lambda \cdot \mathrm{Id}$ (where $\mathrm{Id}$ is the identity map $V \to V$). If $V$ is finite-dimensional, this can be rephrased using determinants: $f$ having eigenvalue $\lambda$ is equivalent to
$$\det(f - \lambda \cdot \mathrm{Id}) = 0.$$
By spelling out the definition of the determinant, the expression on the left-hand side can be seen to be a polynomial function in $\lambda$, called the characteristic polynomial of $f$. If the field $F$ is large enough to contain a zero of this polynomial (which automatically happens for $F$ algebraically closed, such as $F = \mathbb{C}$), any linear map has at least one eigenvector. The vector space $V$ may or may not possess an eigenbasis, a basis consisting of eigenvectors. This phenomenon is governed by the Jordan canonical form of the map. The set of all eigenvectors corresponding to a particular eigenvalue of $f$ forms a vector space known as the eigenspace corresponding to the eigenvalue (and $f$) in question.
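A short worked example: for $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, the characteristic polynomial is $\det(A - \lambda \cdot \mathrm{Id}) = (2 - \lambda)^2 - 1 = (\lambda - 1)(\lambda - 3)$, so the eigenvalues are $1$ and $3$, with eigenvectors $(1, -1)$ and $(1, 1)$ respectively; indeed $A(1, -1) = (1, -1)$ and $A(1, 1) = (3, 3)$.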
Basic constructions
In addition to the above concrete examples, there are a number of standard linear algebraic constructions that yield vector spaces related to given ones.
Subspaces and quotient spaces
A nonempty subset $W$ of a vector space $V$ that is closed under addition and scalar multiplication (and therefore contains the $\mathbf{0}$-vector of $V$) is called a linear subspace of $V$, or simply a subspace of $V$, when the ambient space is unambiguously a vector space. Subspaces of $V$ are vector spaces (over the same field) in their own right. The intersection of all subspaces containing a given set $S$ of vectors is called its span, and it is the smallest subspace of $V$ containing the set $S$. Expressed in terms of elements, the span is the subspace consisting of all the linear combinations of elements of $S$.
Linear subspaces of dimension 1 and 2 are referred to as a line (also vector line) and a plane, respectively. If $W$ is an $n$-dimensional vector space, any subspace of dimension one less, that is, of dimension $n - 1$, is called a hyperplane.
The counterpart to subspaces are quotient vector spaces. Given any subspace $W \subseteq V$, the quotient space $V/W$ ("$V$ modulo $W$") is defined as follows: as a set, it consists of
$$\mathbf{v} + W = \{\mathbf{v} + \mathbf{w} : \mathbf{w} \in W\},$$
where $\mathbf{v}$ is an arbitrary vector in $V$. The sum of two such elements $\mathbf{v}_1 + W$ and $\mathbf{v}_2 + W$ is $(\mathbf{v}_1 + \mathbf{v}_2) + W$, and scalar multiplication is given by $a \cdot (\mathbf{v} + W) = (a \cdot \mathbf{v}) + W$. The key point in this definition is that $\mathbf{v}_1 + W = \mathbf{v}_2 + W$ if and only if the difference of $\mathbf{v}_1$ and $\mathbf{v}_2$ lies in $W$. This way, the quotient space "forgets" information that is contained in the subspace $W$.
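A simple illustration: take $V = \mathbb{R}^2$ and let $W$ be the horizontal axis $\{(x, 0) : x \in \mathbb{R}\}$. Two vectors lie in the same coset $\mathbf{v} + W$ exactly when they have the same second component, so the map $(x, y) + W \mapsto y$ identifies $V/W$ with $\mathbb{R}$.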
The kernel $\ker(f)$ of a linear map $f : V \to W$ consists of the vectors $\mathbf{v}$ that are mapped to $\mathbf{0}$ in $W$. The kernel and the image $\operatorname{im}(f) = \{f(\mathbf{v}) : \mathbf{v} \in V\}$ are subspaces of $V$ and $W$, respectively.
An important example is the kernel of a linear map $\mathbf{x} \mapsto A\mathbf{x}$ for some fixed matrix $A$. The kernel of this map is the subspace of vectors $\mathbf{x}$ such that $A\mathbf{x} = \mathbf{0}$, which is precisely the set of solutions to the system of homogeneous linear equations belonging to $A$. This concept also extends to linear differential equations
$$a_0 f + a_1 \frac{df}{dx} + a_2 \frac{d^2 f}{dx^2} + \cdots + a_n \frac{d^n f}{dx^n} = 0,$$
where the coefficients $a_i$ are functions in $x$, too.
In the corresponding map
$$f \mapsto D(f) = \sum_{i=0}^{n} a_i \frac{d^i f}{dx^i},$$
the derivatives of the function $f$ appear linearly (as opposed to $f''(x)^2$, for example). Since differentiation is a linear procedure (that is, $(f + g)' = f' + g'$ and $(c \cdot f)' = c \cdot f'$ for a constant $c$), this assignment is linear, and is called a linear differential operator. In particular, the solutions to the differential equation $D(f) = 0$ form a vector space (over $\mathbb{R}$ or $\mathbb{C}$).
The existence of kernels and images is part of the statement that the category of vector spaces (over a fixed field $F$) is an abelian category, that is, a corpus of mathematical objects and structure-preserving maps between them (a category) that behaves much like the category of abelian groups. Because of this, many statements such as the first isomorphism theorem (also called the rank–nullity theorem in matrix-related terms)
$$V / \ker(f) \cong \operatorname{im}(f)$$
and the second and third isomorphism theorem can be formulated and proven in a way very similar to the corresponding statements for groups.
Direct product and direct sum
The direct product of vector spaces and the direct sum of vector spaces are two ways of combining an indexed family of vector spaces into a new vector space.
The direct product $\prod_{i \in I} V_i$ of a family of vector spaces $V_i$ consists of the set of all tuples $(\mathbf{v}_i)_{i \in I}$, which specify for each index $i$ in some index set $I$ an element $\mathbf{v}_i$ of $V_i$. Addition and scalar multiplication are performed componentwise. A variant of this construction is the direct sum (also called coproduct and denoted $\bigoplus_{i \in I} V_i$), where only tuples with finitely many nonzero vectors are allowed. If the index set $I$ is finite, the two constructions agree, but in general they are different.
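To see the difference, take $V_i = \mathbb{R}$ for every natural number $i$: the direct product contains every sequence of real numbers, such as $(1, 1, 1, \ldots)$, whereas the direct sum contains only the sequences with finitely many nonzero entries, such as $(1, 2, 0, 0, \ldots)$.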
Tensor product
The tensor product $V \otimes_F W$, or simply $V \otimes W$, of two vector spaces $V$ and $W$ is one of the central notions of multilinear algebra, which deals with extending notions such as linear maps to several variables. A map $g : V \times W \to X$ from the Cartesian product $V \times W$ is called bilinear if $g$ is linear in both variables $\mathbf{v}$ and $\mathbf{w}$. That is to say, for fixed $\mathbf{w}$ the map $\mathbf{v} \mapsto g(\mathbf{v}, \mathbf{w})$ is linear in the sense above, and likewise for fixed $\mathbf{v}$.
The tensor product is a particular vector space that is a universal recipient of bilinear maps, as follows. It is defined as the vector space consisting of finite (formal) sums of symbols called tensors
$$\mathbf{v}_1 \otimes \mathbf{w}_1 + \mathbf{v}_2 \otimes \mathbf{w}_2 + \cdots + \mathbf{v}_n \otimes \mathbf{w}_n,$$
subject to the rules
$$a \cdot (\mathbf{v} \otimes \mathbf{w}) = (a \cdot \mathbf{v}) \otimes \mathbf{w} = \mathbf{v} \otimes (a \cdot \mathbf{w}),$$
$$(\mathbf{v}_1 + \mathbf{v}_2) \otimes \mathbf{w} = \mathbf{v}_1 \otimes \mathbf{w} + \mathbf{v}_2 \otimes \mathbf{w},$$
$$\mathbf{v} \otimes (\mathbf{w}_1 + \mathbf{w}_2) = \mathbf{v} \otimes \mathbf{w}_1 + \mathbf{v} \otimes \mathbf{w}_2.$$
These rules ensure that the map $f$ from $V \times W$ to $V \otimes W$ that maps a tuple $(\mathbf{v}, \mathbf{w})$ to $\mathbf{v} \otimes \mathbf{w}$ is bilinear. The universality states that given any vector space $X$ and any bilinear map $g : V \times W \to X$, there exists a unique map $u : V \otimes W \to X$, shown in the diagram with a dotted arrow, whose composition with $f$ equals $g$: $u(\mathbf{v} \otimes \mathbf{w}) = g(\mathbf{v}, \mathbf{w})$. This is called the universal property of the tensor product, an instance of the method, much used in advanced abstract algebra, of indirectly defining objects by specifying maps from or to this object.
Vector spaces with additional structure
From the point of view of linear algebra, vector spaces are completely understood insofar as any vector space over a given field is characterized, up to isomorphism, by its dimension. However, vector spaces per se do not offer a framework to deal with the question—crucial to analysis—whether a sequence of functions converges to another function. Likewise, linear algebra is not adapted to deal with infinite series, since the addition operation allows only finitely many terms to be added. Therefore, the needs of functional analysis require considering additional structures.
A vector space may be given a partial order $\leq$, under which some vectors can be compared. For example, $n$-dimensional real space $\mathbb{R}^n$ can be ordered by comparing its vectors componentwise. Ordered vector spaces, for example Riesz spaces, are fundamental to Lebesgue integration, which relies on the ability to express a function as a difference of two positive functions
$$f = f^+ - f^-,$$
where $f^+$ denotes the positive part of $f$ and $f^-$ the negative part.
Normed vector spaces and inner product spaces
"Measuring" vectors is done by specifying a norm, a datum which measures lengths of vectors, or by an inner product, which measures angles between vectors. Norms and inner products are denoted and respectively. The datum of an inner product entails that lengths of vectors can be defined too, by defining the associated norm Vector spaces endowed with such data are known as normed vector spaces and inner product spaces, respectively.
Coordinate space $F^n$ can be equipped with the standard dot product:
$$\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x} \cdot \mathbf{y} = x_1 y_1 + \cdots + x_n y_n.$$
In $\mathbb{R}^2$, this reflects the common notion of the angle between two vectors $\mathbf{x}$ and $\mathbf{y}$, by the law of cosines:
$$\mathbf{x} \cdot \mathbf{y} = \cos\bigl(\angle(\mathbf{x}, \mathbf{y})\bigr) \cdot |\mathbf{x}| \cdot |\mathbf{y}|.$$
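For example, $\mathbf{x} = (1, 2)$ and $\mathbf{y} = (2, -1)$ satisfy $\mathbf{x} \cdot \mathbf{y} = 1 \cdot 2 + 2 \cdot (-1) = 0$, so the angle between them is $90°$ and the two vectors are orthogonal in the sense defined next.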
Because of this, two vectors satisfying $\langle \mathbf{x}, \mathbf{y} \rangle = 0$ are called orthogonal. An important variant of the standard dot product is used in Minkowski space: $\mathbb{R}^4$ endowed with the Lorentz product
$$\langle \mathbf{x} | \mathbf{y} \rangle = x_1 y_1 + x_2 y_2 + x_3 y_3 - x_4 y_4.$$
In contrast to the standard dot product, it is not positive definite: $\langle \mathbf{x} | \mathbf{x} \rangle$ also takes negative values, for example, for $\mathbf{x} = (0, 0, 0, 1)$. Singling out the fourth coordinate, corresponding to time as opposed to the three space-dimensions, makes it useful for the mathematical treatment of special relativity. Note that in other conventions time is often written as the first, or "zeroth", component, so that the Lorentz product is written
$$\langle \mathbf{x} | \mathbf{y} \rangle = -x_0 y_0 + x_1 y_1 + x_2 y_2 + x_3 y_3.$$
Topological vector spaces
Convergence questions are treated by considering vector spaces carrying a compatible topology, a structure that allows one to talk about elements being close to each other. Compatible here means that addition and scalar multiplication have to be continuous maps. Roughly, if $\mathbf{x}$ and $\mathbf{y}$ in $V$, and $a$ in $F$, vary by a bounded amount, then so do $\mathbf{x} + \mathbf{y}$ and $a\mathbf{x}$. To make sense of specifying the amount a scalar changes, the field $F$ also has to carry a topology in this context; a common choice is the reals or the complex numbers.
In such topological vector spaces one can consider series of vectors. The infinite sum
$$\sum_{i=1}^{\infty} f_i = \lim_{n \to \infty} (f_1 + \cdots + f_n)$$
denotes the limit of the corresponding finite partial sums of the sequence $(f_i)_{i \in \mathbb{N}}$ of elements of $V$. For example, the $f_i$ could be (real or complex) functions belonging to some function space $V$, in which case the series is a function series. The mode of convergence of the series depends on the topology imposed on the function space. In such cases, pointwise convergence and uniform convergence are two prominent examples.
A way to ensure the existence of limits of certain infinite series is to restrict attention to spaces where any Cauchy sequence has a limit; such a vector space is called complete. Roughly, a vector space is complete provided that it contains all necessary limits. For example, the vector space of polynomials on the unit interval $[0, 1]$, equipped with the topology of uniform convergence, is not complete because any continuous function on $[0, 1]$ can be uniformly approximated by a sequence of polynomials, by the Weierstrass approximation theorem. In contrast, the space of all continuous functions on $[0, 1]$ with the same topology is complete. A norm gives rise to a topology by defining that a sequence of vectors $\mathbf{v}_n$ converges to $\mathbf{v}$ if and only if
$$\lim_{n \to \infty} |\mathbf{v}_n - \mathbf{v}| = 0.$$
Banach and Hilbert spaces are complete topological vector spaces whose topologies are given, respectively, by a norm and an inner product. Their study, a key piece of functional analysis, focuses on infinite-dimensional vector spaces, since all norms on finite-dimensional topological vector spaces give rise to the same notion of convergence. The image at the right shows the equivalence of the $1$-norm and $\infty$-norm on $\mathbb{R}^2$: as the unit "balls" enclose each other, a sequence converges to zero in one norm if and only if it does so in the other norm. In the infinite-dimensional case, however, there will generally be inequivalent topologies, which makes the study of topological vector spaces richer than that of vector spaces without additional data.
From a conceptual point of view, all notions related to topological vector spaces should match the topology. For example, instead of considering all linear maps (also called functionals) $V \to W$, maps between topological vector spaces are required to be continuous. In particular, the (topological) dual space $V^*$ consists of continuous functionals $V \to \mathbb{R}$ (or to $\mathbb{C}$). The fundamental Hahn–Banach theorem is concerned with separating subspaces of appropriate topological vector spaces by continuous functionals.
Banach spaces
Banach spaces, introduced by Stefan Banach, are complete normed vector spaces.
A first example is the vector space $\ell^p$ consisting of infinite vectors with real entries
$$\mathbf{x} = (x_1, x_2, \ldots, x_n, \ldots)$$
whose $p$-norm $(1 \leq p < \infty)$, given by
$$\|\mathbf{x}\|_p = \Bigl(\sum_{i} |x_i|^p\Bigr)^{1/p},$$
is finite (for $p = \infty$ one uses $\|\mathbf{x}\|_\infty = \sup_i |x_i|$).
The topologies on the infinite-dimensional space $\ell^p$ are inequivalent for different $p$. For example, the sequence of vectors $\mathbf{x}_n = (2^{-n}, 2^{-n}, \ldots, 2^{-n}, 0, 0, \ldots)$, in which the first $2^n$ components are $2^{-n}$ and the following ones are $0$, converges to the zero vector for $p = \infty$, but does not for $p = 1$:
$$\|\mathbf{x}_n\|_\infty = 2^{-n} \to 0, \quad \text{but} \quad \|\mathbf{x}_n\|_1 = \sum_{i=1}^{2^n} 2^{-n} = 2^n \cdot 2^{-n} = 1.$$
More generally than sequences of real numbers, functions $f : \Omega \to \mathbb{R}$ are endowed with a norm that replaces the above sum by the Lebesgue integral
$$\|f\|_p = \Bigl(\int_\Omega |f(x)|^p \, dx\Bigr)^{1/p}.$$
The spaces of integrable functions on a given domain $\Omega$ (for example an interval) satisfying $\|f\|_p < \infty$, and equipped with this norm, are called Lebesgue spaces, denoted $L^p(\Omega)$.
These spaces are complete. (If one uses the Riemann integral instead, the space is not complete, which may be seen as a justification for Lebesgue's integration theory.) Concretely this means that for any sequence of Lebesgue-integrable functions $f_1, f_2, \ldots, f_n, \ldots$ with $\|f_n\|_p < \infty$, satisfying the condition
$$\lim_{k, n \to \infty} \int_\Omega |f_k(x) - f_n(x)|^p \, dx = 0,$$
there exists a function $f(x)$ belonging to the vector space $L^p(\Omega)$ such that
$$\lim_{k \to \infty} \int_\Omega |f(x) - f_k(x)|^p \, dx = 0.$$
Imposing boundedness conditions not only on the function, but also on its derivatives leads to Sobolev spaces.
Hilbert spaces
Complete inner product spaces are known as Hilbert spaces, in honor of David Hilbert. The Hilbert space $L^2(\Omega)$, with inner product given by
$$\langle f, g \rangle = \int_\Omega f(x) \overline{g(x)} \, dx,$$
where $\overline{g(x)}$ denotes the complex conjugate of $g(x)$, is a key case.
By definition, in a Hilbert space, any Cauchy sequence converges to a limit. Conversely, finding a sequence of functions with desirable properties that approximate a given limit function is equally crucial. Early analysis, in the guise of the Taylor approximation, established an approximation of differentiable functions $f$ by polynomials. By the Stone–Weierstrass theorem, every continuous function on $[a, b]$ can be approximated as closely as desired by a polynomial. A similar approximation technique by trigonometric functions is commonly called Fourier expansion, and is much applied in engineering. More generally, and more conceptually, the theorem yields a simple description of what "basic functions", or, in abstract Hilbert spaces, what basic vectors suffice to generate a Hilbert space $H$, in the sense that the closure of their span (that is, finite linear combinations and limits of those) is the whole space. Such a set of functions is called a basis of $H$; its cardinality is known as the Hilbert space dimension. Not only does the theorem exhibit suitable basis functions as sufficient for approximation purposes, but also, together with the Gram–Schmidt process, it enables one to construct a basis of orthogonal vectors. Such orthogonal bases are the Hilbert space generalization of the coordinate axes in finite-dimensional Euclidean space.
The solutions to various differential equations can be interpreted in terms of Hilbert spaces. For example, a great many fields in physics and engineering lead to such equations, and frequently solutions with particular physical properties are used as basis functions, often orthogonal. As an example from physics, the time-dependent Schrödinger equation in quantum mechanics describes the change of physical properties in time by means of a partial differential equation, whose solutions are called wavefunctions. Definite values for physical properties such as energy, or momentum, correspond to eigenvalues of a certain (linear) differential operator and the associated wavefunctions are called eigenstates. The spectral theorem decomposes a linear compact operator acting on functions in terms of these eigenfunctions and their eigenvalues.
Algebras over fields
General vector spaces do not possess a multiplication between vectors. A vector space equipped with an additional bilinear operator defining the multiplication of two vectors is an algebra over a field (or F-algebra if the field F is specified).
For example, the set of all polynomials forms an algebra known as the polynomial ring: using that the sum of two polynomials is a polynomial, they form a vector space; they form an algebra since the product of two polynomials is again a polynomial. Rings of polynomials (in several variables) and their quotients form the basis of algebraic geometry, because they are rings of functions of algebraic geometric objects.
Another crucial example are Lie algebras, which are neither commutative nor associative, but the failure to be so is limited by the constraints ($[x, y]$ denotes the product of $x$ and $y$):
$[x, y] = -[y, x]$ (anticommutativity), and
$[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0$ (Jacobi identity).
Examples include the vector space of $n$-by-$n$ matrices, with $[x, y] = xy - yx$ the commutator of two matrices, and $\mathbb{R}^3$, endowed with the cross product.
The tensor algebra $\mathrm{T}(V)$ is a formal way of adding products to any vector space $V$ to obtain an algebra. As a vector space, it is spanned by symbols, called simple tensors
$$\mathbf{v}_1 \otimes \mathbf{v}_2 \otimes \cdots \otimes \mathbf{v}_n,$$
where the degree $n$ varies.
The multiplication is given by concatenating such symbols, imposing the distributive law under addition, and requiring that scalar multiplication commute with the tensor product ⊗, much the same way as with the tensor product of two vector spaces introduced in the above section on tensor products. In general, there are no relations between $\mathbf{v}_1 \otimes \mathbf{v}_2$ and $\mathbf{v}_2 \otimes \mathbf{v}_1$. Forcing two such elements to be equal leads to the symmetric algebra, whereas forcing $\mathbf{v}_1 \otimes \mathbf{v}_2 = -\mathbf{v}_2 \otimes \mathbf{v}_1$ yields the exterior algebra.
Related structures
Vector bundles
A vector bundle is a family of vector spaces parametrized continuously by a topological space X. More precisely, a vector bundle over X is a topological space E equipped with a continuous map
$$\pi : E \to X$$
such that for every x in X, the fiber $\pi^{-1}(x)$ is a vector space. The case $\dim V = 1$ is called a line bundle. For any vector space V, the projection $X \times V \to X$ makes the product $X \times V$ into a "trivial" vector bundle. Vector bundles over X are required to be locally a product of X and some (fixed) vector space V: for every x in X, there is a neighborhood U of x such that the restriction of $\pi$ to $\pi^{-1}(U)$ is isomorphic to the trivial bundle $U \times V \to U$. Despite their locally trivial character, vector bundles may (depending on the shape of the underlying space X) be "twisted" in the large (that is, the bundle need not be (globally isomorphic to) the trivial bundle $X \times V$). For example, the Möbius strip can be seen as a line bundle over the circle S1 (by identifying open intervals with the real line). It is, however, different from the cylinder $S^1 \times \mathbb{R}$, because the latter is orientable whereas the former is not.
Properties of certain vector bundles provide information about the underlying topological space. For example, the tangent bundle consists of the collection of tangent spaces parametrized by the points of a differentiable manifold. The tangent bundle of the circle S1 is globally isomorphic to $S^1 \times \mathbb{R}$, since there is a global nonzero vector field on S1. In contrast, by the hairy ball theorem, there is no (tangent) vector field on the 2-sphere S2 which is everywhere nonzero. K-theory studies the isomorphism classes of all vector bundles over some topological space. In addition to deepening topological and geometrical insight, it has purely algebraic consequences, such as the classification of finite-dimensional real division algebras: R, C, the quaternions H and the octonions O.
The cotangent bundle of a differentiable manifold consists, at every point of the manifold, of the dual of the tangent space, the cotangent space. Sections of that bundle are known as differential one-forms.
Modules
Modules are to rings what vector spaces are to fields: the same axioms, applied to a ring R instead of a field F, yield modules. The theory of modules, compared to that of vector spaces, is complicated by the presence of ring elements that do not have multiplicative inverses. For example, modules need not have bases, as the Z-module (that is, abelian group) Z/2Z shows; those modules that do (including all vector spaces) are known as free modules. Nevertheless, a vector space can be compactly defined as a module over a ring which is a field, with the elements being called vectors. Some authors use the term vector space to mean modules over a division ring. The algebro-geometric interpretation of commutative rings via their spectrum allows the development of concepts such as locally free modules, the algebraic counterpart to vector bundles.
Affine and projective spaces
Roughly, affine spaces are vector spaces whose origins are not specified. More precisely, an affine space is a set with a free transitive vector space action. In particular, a vector space is an affine space over itself, by the map
$$V \times V \to V, \quad (\mathbf{v}, \mathbf{a}) \mapsto \mathbf{a} + \mathbf{v}.$$
If W is a vector space, then an affine subspace is a subset of W obtained by translating a linear subspace V by a fixed vector $\mathbf{x} \in W$; this space is denoted by $\mathbf{x} + V$ (it is a coset of V in W) and consists of all vectors of the form $\mathbf{x} + \mathbf{v}$ for $\mathbf{v} \in V$. An important example is the space of solutions of a system of inhomogeneous linear equations
$$A\mathbf{v} = \mathbf{b},$$
generalizing the homogeneous case discussed in the above section on linear equations, which can be found by setting $\mathbf{b} = \mathbf{0}$ in this equation. The space of solutions is the affine subspace $\mathbf{x} + V$, where $\mathbf{x}$ is a particular solution of the equation, and V is the space of solutions of the homogeneous equation (the nullspace of A).
The set of one-dimensional subspaces of a fixed finite-dimensional vector space V is known as projective space; it may be used to formalize the idea of parallel lines intersecting at infinity. Grassmannians and flag manifolds generalize this by parametrizing linear subspaces of fixed dimension k and flags of subspaces, respectively.
| Mathematics | Algebra | null |