| id | url | title | text | topic | section | sublist |
|---|---|---|---|---|---|---|
569881 | https://en.wikipedia.org/wiki/Semi-arid%20climate | Semi-arid climate | A semi-arid climate, semi-desert climate, or steppe climate is a dry climate sub-type. It is found in regions that receive precipitation below potential evapotranspiration, but not as little as a desert climate. There are different kinds of semi-arid climates, depending on variables such as temperature, and they give rise to different biomes.
Defining attributes of semi-arid climates
A more precise definition is given by the Köppen climate classification, which treats steppe climates (BSh and BSk) as intermediates between desert climates (BW) and humid climates (A, C, D) in ecological characteristics and agricultural potential. Semi-arid climates tend to support short, thorny or scrubby vegetation and are usually dominated by either grasses or shrubs as they usually cannot support forests.
To determine if a location has a semi-arid climate, the precipitation threshold must first be determined. The method used to find the precipitation threshold (in millimeters):
multiply by 20 the average annual temperature in degrees Celsius and then
add 280 if at least 70% of the total precipitation falls in the summer half of the year (April–September in the northern hemisphere, October–March in the southern hemisphere)
add 140 if 30–70% of the total precipitation falls in the summer half of the year
add nothing if less than 30% of the total precipitation falls in the summer half of the year
If the area's annual precipitation in millimeters is less than the threshold but more than half (50%) of the threshold, it is classified as a BS (steppe, semi-desert, or semi-arid) climate.
Furthermore, to distinguish hot semi-arid climates from cold semi-arid climates, a mean annual temperature of 18 °C (64.4 °F) is used as an isotherm. A location with a BS-type climate is classified as hot semi-arid (BSh) if its mean annual temperature is above this isotherm, and cold semi-arid (BSk) if not.
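The threshold rules above amount to a short calculation. Below is a minimal sketch in Python (the function name and the exact handling of boundary cases are illustrative assumptions, not part of the Köppen definition) that classifies a station from its mean annual temperature, annual precipitation, and the fraction of precipitation falling in the summer half-year:

```python
def koppen_dry_classification(mean_annual_temp_c, annual_precip_mm, summer_precip_fraction):
    """Classify a station as BSh, BSk, BW (desert), or non-arid using the
    precipitation-threshold rules described in this section.

    summer_precip_fraction is the share of annual precipitation that falls in the
    summer half of the year (April-September in the northern hemisphere,
    October-March in the southern hemisphere).
    """
    # Precipitation threshold in millimeters.
    threshold = 20.0 * mean_annual_temp_c
    if summer_precip_fraction >= 0.70:
        threshold += 280.0
    elif summer_precip_fraction >= 0.30:
        threshold += 140.0
    # otherwise add nothing

    if annual_precip_mm < 0.5 * threshold:
        return "BW (arid desert climate)"
    if annual_precip_mm < threshold:
        # Semi-arid: split into hot (BSh) and cold (BSk) at the 18 C isotherm.
        return "BSh (hot semi-arid)" if mean_annual_temp_c >= 18.0 else "BSk (cold semi-arid)"
    return "not a dry (B) climate"


# Example: 19 C mean annual temperature, 400 mm of rain, 80% of it falling in summer.
print(koppen_dry_classification(19.0, 400.0, 0.8))  # -> "BSh (hot semi-arid)"
```

For the example station, the threshold is 20 × 19 + 280 = 660 mm; since 400 mm lies between half the threshold (330 mm) and the threshold itself, the station is semi-arid, and its 19 °C mean annual temperature places it in the hot (BSh) subtype.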
Hot semi-arid climates
Hot semi-arid climates (type "BSh") tend to be located from the high teens to mid-30s latitudes of the tropics and subtropics, typically in proximity to regions with a tropical savanna climate or a humid subtropical climate. These climates tend to have hot, or sometimes extremely hot, summers and warm to cool winters, with some to minimal precipitation. Hot semi-arid climates are most commonly found around the fringes of subtropical deserts.
Hot semi-arid climates are most commonly found in Africa, Australia, and South Asia. In Australia, a large portion of the Outback surrounding the central desert regions lies within the hot semi-arid climate region. In South Asia, both India and parts of Pakistan experience the seasonal effects of monsoons and feature short but well-defined wet seasons, but are not sufficiently wet overall to qualify as either a tropical savanna or a humid subtropical climate.
Hot semi-arid climates can also be found in parts of North America, such as most of northern Mexico, the ABC Islands, the rain shadows of Hispaniola's mountain ranges in the Dominican Republic and Haiti, and parts of the Southwestern United States including California's Central Valley, and in sections of South America such as the sertão, the Gran Chaco, and the poleward side of the arid deserts, where they typically feature a Mediterranean precipitation pattern, with generally rainless summers and wetter winters. They are also found in a few areas of Europe surrounding the Mediterranean Basin, predominantly in southeastern Spain, as well as in parts of southern Greece (with marginal areas around Thessaloniki and Chalkidiki in the north), most of Formentera, marginal areas of Ibiza, and marginal areas of Italy in Sicily, Sardinia, and Lampedusa.
Cold semi-arid climates
Cold semi-arid climates (type "BSk") tend to be located in elevated portions of temperate zones generally from the mid-30s to low 50s latitudes, typically bordering a humid continental climate or a Mediterranean climate. They are also typically found in continental interiors some distance from large bodies of water. Cold semi-arid climates usually feature warm to hot dry summers, though their summers are typically not quite as hot as those of hot semi-arid climates. Unlike hot semi-arid climates, areas with cold semi-arid climates tend to have cold and possibly freezing winters. These areas usually see some snowfall during the winter, though snowfall is much lower than at locations at similar latitudes with more humid climates.
Areas featuring cold semi-arid climates tend to have higher elevations than areas with hot semi-arid climates, and tend to feature major temperature swings between day and night, sometimes by as much as 20 °C (36 °F) or more. These large diurnal temperature variations are seldom seen in hot semi-arid climates. Cold semi-arid climates at higher latitudes tend to have dry winters and wetter summers, while cold semi-arid climates at lower latitudes tend to have precipitation patterns more akin to Mediterranean climates, with dry summers, relatively wet winters, and even wetter springs and autumns.
Cold semi-arid climates are most commonly found in central Asia and the western US, as well as the Middle East and other parts of Asia. However, they can also be found in Northern Africa, South Africa, sections of South America, sections of interior southern Australia (e.g. Kalgoorlie and Mildura) and inland Spain.
Charts of selected cities
Hot semi-arid
Cold semi-arid
| Physical sciences | Climates | Earth science |
570029 | https://en.wikipedia.org/wiki/Siberian%20crane | Siberian crane | The Siberian crane (Leucogeranus leucogeranus), also known as the Siberian white crane or the snow crane, is a bird of the family Gruidae, the cranes. It is distinctive among the cranes: adults are nearly all snowy white, except for their black primary feathers, which are visible in flight. There are two breeding populations, in the Arctic tundra of western and eastern Russia. The eastern population migrates during winter to China, while the western population winters in Iran and formerly in Bharatpur, India.
Among the cranes, they make the longest distance migrations. Their populations, particularly those in the western range, have declined drastically in the 20th century due to hunting along their migration routes and habitat degradation. The world population was estimated in 2010 at about 3,200 birds, mostly belonging to the eastern population with about 93% of them wintering in the Poyang Lake basin in China, a habitat that may be altered by the Three Gorges Dam.
Taxonomy and systematics
The Siberian crane was formally described by Peter Simon Pallas in 1773 and given the binomial name Grus leucogeranus. The specific epithet is derived from the classical Greek words leukos for "white" and geranos for a "crane". Ustad Mansur, a 17th-century court artist and singer of Jahangir, had illustrated a Siberian crane about 100 years earlier. The genus Megalornis was used for the cranes by George Robert Gray and this species was included in it, while Richard Bowdler Sharpe suggested a separation from Grus and used the genus Sarcogeranus. The Siberian crane lacks the complex tracheal coils found in most other cranes but shares this feature with the wattled crane. The unison call differed from that of most cranes and some authors suggested that the Siberian crane belonged in the genus Bugeranus along with the wattled crane. Comparisons of the DNA sequences of cytochrome-b however suggest that the Siberian crane is basal among the Gruinae and the wattled crane is retained as the sole species in the genus Bugeranus and placed as a sister to the Anthropoides cranes.
A molecular phylogenetic study published in 2010 found that the genus Grus, as then defined, was polyphyletic. In the resulting rearrangement to create monophyletic genera, the Siberian crane was moved to the resurrected genus Leucogeranus. The genus Leucogeranus had been introduced by the French biologist Charles Lucien Bonaparte in 1855.
Description
Adults of both sexes have a pure white plumage except for the black primaries, alula and primary coverts. The fore-crown, face and sides of the head are bare and brick red, the bill is dark and the legs are pinkish. The iris is yellowish. Juveniles are feathered on the face and the plumage is dingy brown. There are no elongated tertial feathers as in some other crane species. During the breeding season, both the male and female cranes are often seen with mud streaking their feathers; they may intentionally smear mud on their feathers, which has been hypothesized to aid camouflage on the nest. The call is very different from the trumpeting of most cranes and is a goose-like, high-pitched whistling toyoya. This is a fairly large species of crane, and males are on average larger than females. It is, however, usually slightly smaller in weight and height than some other cranes, particularly the sarus crane, wattled crane and red-crowned crane.
Distribution and habitat
The breeding area of the Siberian crane formerly extended between the Urals and the Ob river, south to the Ishim and Tobol rivers, and east to the Kolyma region. The populations declined with changes in land use, the draining of wetlands for agricultural expansion, and hunting on their migration routes. The breeding areas in modern times are restricted to two widely disjunct regions: a western area in the river basins of the Ob, Konda and Sossva, and to the east a much larger population in Yakutia between the Yana and the Alazeya rivers. Like most cranes, the Siberian crane inhabits shallow marshlands and wetlands and will often forage in deeper water than other cranes. They show very high site fidelity for both their wintering and breeding areas, making use of the same sites year after year. The western population winters in Iran, and some individuals formerly wintered in India south to Nagpur and east to Bihar. The eastern population winters mainly in the Poyang Lake area in China.
Behaviour and ecology
Siberian cranes are widely dispersed in their breeding areas and are highly territorial. They maintain feeding territories in winter but may form small, loose flocks and gather closer at their winter roosts. They are strictly diurnal, feeding almost throughout the day. When feeding on submerged vegetation, they often immerse their heads entirely underwater. When calling, the birds stretch their necks forward. The contexts of several calls have been identified, and several of these vary with sex. Individual variation is very slight, and most calls have a dominant frequency of about 1.4 kHz. The unison calls, duets between paired males and females, are however more distinctive, with marked differences across pairs. The female produces a higher-pitched call, which is the "loo" in the duetted "doodle-loo" call. Pairs will walk around other pairs to threaten them and drive them away from their territory. In captivity, one individual was recorded to have lived for nearly 62 years, while another lived for 83 years.
Feeding
These cranes are omnivorous, with a preference for plant matter. In the summer grounds they feed on a range of plants, including the roots of hellebore (Veratrum misae) and seeds of Empetrum nigrum, as well as small rodents such as lemmings and voles, earthworms, and fish. They were earlier thought to be predominantly fish-eating on the basis of the serrated edge of their bill, but later studies suggest that they take animal prey mainly when the vegetation is covered by snow. They also swallow pebbles and grit to aid in crushing food in their crop. In their wintering grounds in China, they have been noted to feed to a large extent on the submerged leaves of Vallisneria spiralis. Specimens wintering in India have been found to have mainly aquatic plants in their stomachs. They are, however, noted to pick up beetles and birds' eggs in captivity.
Breeding
Siberian cranes return to the Arctic tundra around the end of April and beginning of May. The nest is usually on the edge of a lake in boggy ground and is usually surrounded by water. Most eggs are laid in the first week of June, when the tundra is snow-free. The usual clutch is two eggs, which are incubated by the female after the second egg is laid, with the male standing guard nearby. The eggs hatch in about 27 to 29 days, and the young birds fledge in about 80 days. Usually only a single chick survives, due to aggression between the young birds. The population increase per year is less than 10%, the lowest recruitment rate among cranes. Their breeding success may further be hampered by disturbance from reindeer and sometimes from the dogs that accompany reindeer herders. Captive breeding was achieved at the International Crane Foundation in Baraboo after numerous failed attempts: because males often killed their mates, breeding relied on artificial insemination, on the hatching of eggs by other crane species such as the sandhill crane, and on floodlights to simulate the longer day lengths of the Arctic summer.
Migration
This species breeds in two disjunct regions of the Arctic tundra of Russia: the western population along the Ob in western Siberia, and the eastern population in Yakutia. It is a long-distance migrant and, among the cranes, makes one of the longest migrations. The eastern population winters on the Yangtze River and Lake Poyang in China, and the western population in Fereydoon Kenar in Iran. The central population, which once wintered in Keoladeo National Park, Bharatpur, is extinct.
Status and conservation
The conservation status of the Siberian crane is very serious. In 2008, the decreasing world population was estimated to be around 3,500–4,000 individuals, nearly all of them belonging to the eastern breeding population. Of the 15 crane species, this is the only one regarded as critically endangered, the highest threat category of the IUCN (the whooping crane of North America has a smaller but rising population that is better protected, giving that species a status of endangered). The western population of the Siberian crane had dwindled to four birds in 2002 and was subsequently thought to be extirpated, but a single individual, named "Omid", has wintered in Iran since 2006–2007. The wintering site at Poyang in China holds an estimated 98% of the population and is threatened by hydrological changes caused by the Three Gorges Dam and other water development projects.
Historical records from India suggest a wider winter distribution in the past including records from Gujarat, near New Delhi and even as far east as Bihar. In the 19th century, larger numbers of birds were noted to visit India. They were sought after by hunters and specimen collectors. In 1974, as many as 75 birds wintered in Bharatpur, but this population declined to a single pair in 1992 and the last bird was seen in 2002. An individual that escaped from a private menagerie was shot in the Outer Hebrides in 1891. The western population may even have wintered as far west as Egypt along the Nile.
Satellite telemetry was used to track the migration of a flock that wintered in Iran. They were noted to rest on the eastern end of the Volga Delta. Satellite telemetry was also used to track the migration of the eastern population in the mid-1990s, leading to the discovery of new resting areas along the species' flyway in eastern Russia and China. The Siberian crane is one of the species to which the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA) applies and is subject of the Memorandum of Understanding concerning Conservation Measures for the Siberian Crane concluded under the Bonn Convention.
Significance in human culture
For the Yakuts and Yukaghirs, the white crane is a sacred bird associated with the sun, spring, and the kind celestial spirits ajyy. In the Yakut epics (Olonkho), shamans and shamanesses transform into white cranes.
| Biology and health sciences | Gruiformes | Animals |
570922 | https://en.wikipedia.org/wiki/Action%20at%20a%20distance | Action at a distance | Action at a distance is the concept in physics that an object's motion can be affected by another object without the two being in physical contact; that is, it is the concept of the non-local interaction of objects that are separated in space. Coulomb's law and Newton's law of universal gravitation are based on action at a distance.
Historically, action at a distance was the earliest scientific model for gravity and electricity, and it continues to be useful in many practical cases. In the 19th and 20th centuries, field models arose to explain these phenomena with more precision. The discovery of electrons and of special relativity led to new action-at-a-distance models providing alternatives to field theories. Under the modern understanding, none of the four fundamental interactions (gravity, electromagnetism, the strong interaction and the weak interaction) is described by action at a distance.
Categories of action
In the study of mechanics, action at a distance is one of three fundamental actions on matter that cause motion. The other two are direct impact (elastic or inelastic collisions) and actions in a continuous medium as in fluid mechanics or solid mechanics.
Historically, physical explanations for particular phenomena have moved between these three categories over time as new models were developed.
Action-at-a-distance and actions in a continuous medium may be easily distinguished when the medium dynamics are visible, like waves in water or in an elastic solid. In the case of electricity or gravity, no medium is required. In the nineteenth century, criteria like the effect of actions on intervening matter, the observation of a time delay, the apparent storage of energy, or even the possibility of a plausible mechanical model for action transmission were all accepted as evidence against action at a distance. Aether theories were alternative proposals to replace apparent action-at-a-distance in gravity and electromagnetism, in terms of continuous action inside an (invisible) medium called "aether".
Direct impact of macroscopic objects seems visually distinguishable from action at a distance. If, however, the objects are constructed of atoms whose volume is not sharply defined, and those atoms interact by electric and magnetic forces, the distinction is less clear.
Roles
The concept of action at a distance acts in multiple roles in physics and it can co-exist with other models according to the needs of each physical problem.
One role is as a summary of physical phenomena, independent of any understanding of the cause of such an action. For example, astronomical tables of planetary positions can be compactly summarized using Newton's law of universal gravitation, which assumes the planets interact without contact or an intervening medium. As a summary of data, the concept does not need to be evaluated as a plausible physical model.
Action at a distance also acts as a model explaining physical phenomena even in the presence of other models. Again in the case of gravity, hypothesizing an instantaneous force between masses allows the return time of comets to be predicted as well as predicting the existence of previously unknown planets, like Neptune. These triumphs of physics predated the alternative more accurate model for gravity based on general relativity by many decades.
Introductory physics textbooks discuss central forces, like gravity, with models based on action-at-a-distance, without discussing the cause of such forces or issues with it until the topics of relativity and fields are discussed. For example, see The Feynman Lectures on Physics on gravity.
History
Early inquiries into motion
Action-at-a-distance as a physical concept requires identifying objects, distances, and their motion. In antiquity, ideas about the natural world were not organized in these terms. Objects in motion were modeled as living beings. Around 1600, the scientific method began to take root. René Descartes held a more fundamental view, developing ideas of matter and action independent of theology. Galileo Galilei wrote about experimental measurements of falling and rolling objects. Johannes Kepler's laws of planetary motion summarized Tycho Brahe's astronomical observations. Many experiments with electrical and magnetic materials led to new ideas about forces. These efforts set the stage for Newton's work on forces and gravity.
Newtonian gravity
In 1687 Isaac Newton published his Principia, which combined his laws of motion with a new mathematical analysis able to reproduce Kepler's empirical results. His explanation was in the form of a law of universal gravitation: any two bodies are attracted by a force proportional to the product of their masses and inversely proportional to the square of the distance between them. Thus the motions of planets were predicted by assuming forces working over great distances.
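In modern notation (which Newton himself did not use), the law is commonly written as
$$F = G\,\frac{m_1 m_2}{r^2},$$
where $m_1$ and $m_2$ are the masses of the two bodies, $r$ is the distance between them, and $G$ is the gravitational constant introduced in later formulations.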
This mathematical expression of the force did not imply a cause. Newton considered action-at-a-distance to be an inadequate model for gravity; in his own words, he regarded the idea that one body could act on another at a distance through a vacuum, without any mediation, as an absurdity.
Metaphysical scientists of the early 1700s strongly objected to the unexplained action-at-a-distance in Newton's theory. Gottfried Wilhelm Leibniz complained that the mechanism of gravity was "invisible, intangible, and not mechanical". Moreover, initial comparisons with astronomical data were not favorable. As mathematical techniques improved throughout the 1700s, the theory showed increasing success, predicting the date of the return of Halley's comet and aiding the discovery of planet Neptune in 1846. These successes and the increasingly empirical focus of science towards the 19th century led to acceptance of Newton's theory of gravity despite distaste for action-at-a-distance.
Electrical action at a distance
Electrical and magnetic phenomena also began to be explored systematically in the early 1600s. In William Gilbert's early theory of "electric effluvia," a kind of electric atmosphere, he rules out action-at-a-distance on the grounds that "no action can be performed by matter save by contact".
However, subsequent experiments, especially those by Stephen Gray, showed electrical effects over distance. Gray developed an experiment called the "electric boy", demonstrating electric transfer without direct contact.
Franz Aepinus was the first to show, in 1759, that a theory of action at a distance for electricity provides a simpler replacement for the electric effluvia theory. Despite this success, Aepinus himself considered the nature of the forces to be unexplained: he did "not approve of the doctrine which assumes the possibility of action at a distance", setting the stage for a shift to theories based on aether.
By 1785 Charles-Augustin de Coulomb showed that two electric charges at rest experience a force inversely proportional to the square of the distance between them, a result now called Coulomb's law. The striking similarity to gravity strengthened the case for action at a distance, at least as a mathematical model.
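Written in the same modern notation, Coulomb's result takes the form
$$F = k_e\,\frac{q_1 q_2}{r^2},$$
where $q_1$ and $q_2$ are the charges, $r$ is the distance between them, and $k_e$ is the Coulomb constant; the formal parallel with the inverse-square law of gravitation is what suggested action at a distance as at least a workable mathematical model.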
As mathematical methods improved, especially through the work of Pierre-Simon Laplace, Joseph-Louis Lagrange, and Siméon Denis Poisson, more sophisticated mathematical methods began to influence the thinking of scientists. The concept of potential energy applied to small test particles led to the concept of a scalar field, a mathematical model representing the forces throughout space. While this mathematical model is not a mechanical medium, the mental picture of such a field resembles a medium.
Fields as an alternative
Michael Faraday was the first to suggest that action at a distance was inadequate as an account of electric and magnetic forces, even in the form of a (mathematical) potential field. Faraday, an empirical experimentalist, cited three reasons in support of some medium transmitting electrical force: 1) electrostatic induction across an insulator depends on the nature of the insulator, 2) cutting a charged insulator causes opposite charges to appear on each half, and 3) electric discharge sparks are curved at an insulator. From these reasons he concluded that the particles of an insulator must be polarized, with each particle contributing to continuous action. He also experimented with magnets, demonstrating lines of force made visible by iron filings. However, in both cases his field-like model depends on particles that interact through an action-at-a-distance: his mechanical field-like model has no more fundamental physical cause than the long-range central-field model.
Faraday's observations, as well as others, led James Clerk Maxwell to a breakthrough formulation in 1865, a set of equations that combined electricity and magnetism, both static and dynamic, and which included electromagnetic radiation – light. Maxwell started with elaborate mechanical models but ultimately produced a purely mathematical treatment using dynamical vector fields. The sense that these fields must be set to vibrate to propagate light set off a search of a medium of propagation; the medium was called the luminiferous aether or the aether.
In 1873 Maxwell addressed action at a distance explicitly. He reviewed Faraday's lines of force, carefully pointing out that Faraday himself did not provide a mechanical model of these lines in terms of a medium. Nevertheless, the many properties of these lines of force imply that the "lines must not be regarded as mere mathematical abstractions". Faraday himself viewed these lines of force as a model, a "valuable aid" to the experimentalist, a means to suggest further experiments.
In distinguishing between different kinds of action, Faraday suggested three criteria: 1) do additional material objects alter the action? 2) does the action take time? and 3) does it depend upon the receiving end? For electricity, Faraday knew that all three criteria were met for electric action, but gravity was thought to meet only the third one. After Maxwell's time a fourth criterion, the transmission of energy, was added, thought to also apply to electricity but not gravity. With the advent of new theories of gravity, the modern account would give gravity all of the criteria except dependence on additional objects.
Fields fade into spacetime
The success of Maxwell's field equations led to numerous efforts in the later decades of the 19th century to represent electrical, magnetic, and gravitational fields, primarily with mechanical models. No model emerged that explained the existing phenomena; in particular, there was no good model for stellar aberration, the shift in the apparent position of stars with the Earth's relative velocity. The best models required the aether to be stationary while the Earth moved, but experimental efforts to measure the effect of Earth's motion through the aether found no effect.
In 1892 Hendrik Lorentz proposed a modified aether based on the emerging microscopic molecular model rather than the strictly macroscopic continuous theory of Maxwell. Lorentz investigated the mutual interaction of moving solitary electrons within a stationary aether. He rederived Maxwell's equations in this way but, critically, in the process he changed them to represent the wave in the coordinates of the moving electrons. He showed that the wave equations had the same form if they were transformed using a particular scaling factor,
$$\gamma = \frac{1}{\sqrt{1 - \dfrac{v^2}{c^2}}},$$
where $v$ is the velocity of the moving electrons and $c$ is the speed of light. Lorentz noted that if this factor were applied as a length contraction to moving matter in a stationary aether, it would eliminate any effect of motion through the aether, in agreement with experiment.
In 1899, Henri Poincaré questioned the existence of an aether, showing that the principle of relativity prohibits the absolute motion assumed by proponents of the aether model. He named the transformation used by Lorentz the Lorentz transformation, but interpreted it as a transformation between two inertial frames with relative velocity $v$. This transformation makes the electromagnetic equations look the same in every uniformly moving inertial frame. Then, in 1905, Albert Einstein demonstrated that the principle of relativity, applied to the simultaneity of time and the constancy of the speed of light, precisely predicts the Lorentz transformation. This theory of special relativity quickly became the modern concept of spacetime.
Thus the aether model, initially so very different from action at a distance, slowly changed to resemble simple empty space.
In 1905, Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves, accelerated masses in a relativistic field theory of gravity should produce gravitational waves. However, until 1915 gravity stood apart as a force still described by action-at-a-distance. In that year, Einstein showed that a field theory of spacetime, general relativity, consistent with relativity can explain gravity. New effects resulting from this theory were dramatic for cosmology but minor for planetary motion and physics on Earth.
Einstein himself noted Newton's "enormous practical success".
Modern action at a distance
In the early decades of the 20th century, Karl Schwarzschild, Hugo Tetrode, and Adriaan Fokker independently developed non-instantaneous models for action at a distance consistent with special relativity. In 1949 John Archibald Wheeler and Richard Feynman built on these models to develop a new field-free theory of electromagnetism.
While Maxwell's field equations are generally successful, the Lorentz model of a moving electron interacting with the field encounters mathematical difficulties: the self-energy of the moving point charge within the field is infinite. The Wheeler–Feynman absorber theory of electromagnetism avoids the self-energy issue. They interpret the Abraham–Lorentz force, the apparent force resisting electron acceleration, as a real force returning from all the other existing charges in the universe.
The Wheeler–Feynman theory has inspired new thinking about the arrow of time and about the nature of quantum non-locality. The theory has implications for cosmology; it has been extended to quantum mechanics. A similar approach has been applied to develop an alternative theory of gravity consistent with general relativity. John G. Cramer has extended the Wheeler–Feynman ideas to create the transactional interpretation of quantum mechanics.
"Spooky action at a distance"
Albert Einstein wrote to Max Born about issues in quantum mechanics in 1947 and used a phrase translated as "spooky action at a distance". In 1964, John Stewart Bell proved that quantum mechanics predicted stronger statistical correlations in the outcomes of certain far-apart measurements than any local theory possibly could. The phrase has been picked up and used as a description for the cause of small non-classical correlations between physically separated measurements of entangled quantum states. The correlations are predicted by quantum mechanics (the Bell theorem) and verified by experiments (the Bell test). Rather than a postulate like Newton's gravitational force, this use of "action-at-a-distance" concerns observed correlations which cannot be explained with localized particle-based models. Describing these correlations as "action-at-a-distance" requires assuming that particles became entangled and then traveled to distant locations, an assumption that is not required by quantum mechanics.
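As a compact way of stating the quantitative gap Bell identified, the CHSH form of the argument bounds a particular combination $S$ of correlations between the two far-apart measurements:
$$|S| \le 2 \ \text{for any local hidden-variable theory}, \qquad |S| \le 2\sqrt{2} \ \text{for quantum mechanics},$$
and Bell-test experiments observe values above 2, in agreement with the quantum prediction.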
Force in quantum field theory
Quantum field theory does not need action at a distance. At the most fundamental level, only four forces are needed. Each force is described as resulting from the exchange of specific bosons. Two are short range: the strong interaction mediated by mesons and the weak interaction mediated by the weak boson; two are long range: electromagnetism mediated by the photon and gravity hypothesized to be mediated by the graviton. However, the entire concept of force is of secondary concern in advanced modern particle physics. Energy forms the basis of physical models and the word action has shifted away from implying a force to a specific technical meaning, an integral over the difference between potential energy and kinetic energy.
| Physical sciences | Physics basics: General | Physics |
571106 | https://en.wikipedia.org/wiki/Kale | Kale | Kale (), also called leaf cabbage, belongs to a group of cabbage (Brassica oleracea) cultivars primarily grown for their edible leaves; it has also been used as an ornamental plant.
Description
Kale plants have green or purple leaves, and the central leaves do not form a head (as with headed cabbage). The stems can be white or red, and can be tough even when cooked.
Etymology
The name kale originates from Northern Middle English cale (compare Scots kail and German Kohl) for various cabbages. The ultimate origin is Latin caulis 'cabbage'.
Cultivation
Derived from wild mustard, kale is considered to be closer to wild cabbage than most domesticated forms of B. oleracea.
Kale is usually a biennial plant grown from seed, with a wide range of germination temperatures. It is hardy, thrives in wintertime, and can survive temperatures well below freezing. Kale can become sweeter after a heavy frost.
History
Kale originated in the eastern Mediterranean and Anatolia, where it was cultivated for food beginning by 2000 BCE at the latest. Curly-leaved varieties of cabbage already existed along with flat-leaved varieties in Greece in the 4th century BC. These forms, which were referred to by the Romans as Sabellian kale, are considered to be the ancestors of modern kales.
The earliest record of cabbages in western Europe is of hard-heading cabbage in the 13th century. Records in 14th-century England distinguish between hard-heading cabbage and loose-leaf kale.
Russian traders introduced Russian kale into Canada and then into the United States in the 19th century. USDA botanist David Fairchild is credited with introducing kale (and many other crops) to Americans, having brought it back from Croatia, although Fairchild himself disliked cabbages, including kale. At the time, kale was widely grown in Croatia mostly because it was easy to grow and inexpensive, and could desalinate soil.
Cultivars
Kale varieties may be differentiated by stem length (low, intermediate, or high) and by leaf type. The leaf colours range from light green to green, dark green, violet-green, and violet-brown.
Classification by leaf type:
Curly-leaf (Scots kale, blue curled kale)
Bumpy-leaf (black cabbage, better known by its Italian translation 'cavolo nero', and also known as Tuscan Cabbage, Tuscan Kale, lacinato and dinosaur kale)
Sparkly-leaf (shiny and glossy)
Plain-leaf (flat-leaf types like red Russian and white Russian kale)
Leaf and spear, or feathery-type leaf (a cross between curly- and plain-leaf)
Ornamental (less palatable and tougher leaves)
Because kale can grow well into winter, one variety of rape kale is called "hungry gap" after the period in winter in traditional agriculture when little else could be harvested. An extra-tall variety is known as Jersey kale or cow cabbage. Kai-lan or Chinese kale is a cultivar often used in Chinese cuisine. In Portugal, the bumpy-leaved kale is mostly called "couve galega" (Galician kale or Portuguese Cabbage).
Ornamental kale
Many varieties of kale and cabbage are grown mainly for ornamental leaves that are brilliant white, red, pink, lavender, blue, or violet in the interior of the rosette. The different types of ornamental kale are peacock kale, coral prince, kamone coral queen, color up kale, and chidori kale. Ornamental kale is as edible as any other variety, but potentially not as palatable. Kale leaves are increasingly used as an ingredient for vegetable bouquets and wedding bouquets.
Uses
Nutrition
Raw kale is composed of 84% water, 9% carbohydrates, 4% protein, and 1% fat (table). In a reference serving, raw kale provides a large amount of vitamin K, at 3.7 times the Daily Value (DV). It is a rich source (20% or more of the DV) of vitamin A, vitamin C, vitamin B6, folate, and manganese (see table "Kale, raw"). Kale is a good source (10–19% DV) of thiamin, riboflavin, pantothenic acid, vitamin E, and several dietary minerals, including iron, calcium, magnesium, potassium, and phosphorus. Boiling raw kale diminishes most of these nutrients, while values for vitamins A, C, and K and manganese remain substantial.
Phytochemicals
Kale is a source of the carotenoids lutein and zeaxanthin. As with broccoli and other cruciferous vegetables, kale contains glucosinolate compounds, such as glucoraphanin, which contributes to the formation of sulforaphane, a compound under preliminary research for its potential to affect human health beneficially.
Boiling kale decreases the level of glucosinolate compounds, whereas steaming, microwaving, or stir-frying does not cause significant loss. Kale is high in oxalic acid, the levels of which can be reduced by cooking.
Kale contains high levels of polyphenols, such as ferulic acid, with levels varying due to environmental and genetic factors.
Culinary
Snack product
Kale chips have been produced as a potato chip substitute.
Regional uses
Europe
In the Netherlands, a traditional winter dish called "boerenkoolstamppot" is a mix of curly kale and mashed potatoes, sometimes with fried bacon, and served with rookworst ("smoked sausage").
In Northern Germany, there is a winter tradition known as "Kohlfahrt" ("kale trip"), where a group of people will go on a hike through the woods during the day before gathering at an inn or private residence where kale is served, usually with bacon and Kohlwurst ("kale sausage"). Kale is considered a Northern German staple and comfort food.
In Italy, cavolo nero kale is an ingredient of the Tuscan soup ribollita.
A traditional Portuguese soup, caldo verde, combines pureed potatoes, very finely sliced kale, olive oil and salt. Additional ingredients can include broth and sliced, cooked spicy sausage.
In Scotland, kale provided such a base for a traditional diet that the word in some Scots dialects is synonymous with food. To be "off one's kail" is to feel too ill to eat.
In Ireland, kale is mixed with mashed potatoes to make the traditional dish colcannon. It is popular on Halloween, when it may be served with sausages.
In the United Kingdom, the cultivation of kale (and other vegetables) was encouraged during World War II via the Dig for Victory campaign. The vegetable was easy to grow and provided important nutrients missing from a diet because of rationing.
Asia
In Sri Lanka, it is known as kola gova or ela gova. It is cultivated for edible use. A dish called 'kale mallung' is served almost everywhere on the island, along with rice.
United States
For most of the 20th century, kale was primarily used in the U.S. for decorative purposes; it became more popular as an edible vegetable in the 1990s due to its nutritional value.
In culture
The Kailyard school of Scottish writers, which included J. M. Barrie (creator of Peter Pan), consisted of authors who wrote about traditional rural Scottish life (kailyard = 'kale field'). In Cuthbertson's book Autumn in Kyle and the charm of Cunninghame, he states that Kilmaurs in East Ayrshire was famous for its kale, which was an important foodstuff. A story is told in which a neighbouring village offered to pay a generous price for some kale seeds, an offer too good to turn down. The locals agreed, but a gentle roasting on a shovel over a coal fire ensured the seeds never germinated.
Gallery
| Biology and health sciences | Brassicales | null |
571109 | https://en.wikipedia.org/wiki/Dirichlet%20problem | Dirichlet problem | In mathematics, a Dirichlet problem asks for a function which solves a specified partial differential equation (PDE) in the interior of a given region and which takes prescribed values on the boundary of the region.
The Dirichlet problem can be solved for many PDEs, although originally it was posed for Laplace's equation. In that case the problem can be stated as follows:
Given a function $f$ that has values everywhere on the boundary of a region in $\mathbb{R}^n$, is there a unique continuous function $u$, twice continuously differentiable in the interior and continuous on the boundary, such that $u$ is harmonic in the interior and $u = f$ on the boundary?
This requirement is called the Dirichlet boundary condition. The main issue is to prove the existence of a solution; uniqueness can be proven using the maximum principle.
History
The Dirichlet problem goes back to George Green, who studied the problem on general domains with general boundary conditions in his Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, published in 1828. He reduced the problem into a problem of constructing what we now call Green's functions, and argued that Green's function exists for any domain. His methods were not rigorous by today's standards, but the ideas were highly influential in the subsequent developments. The next steps in the study of the Dirichlet's problem were taken by Karl Friedrich Gauss, William Thomson (Lord Kelvin) and Peter Gustav Lejeune Dirichlet, after whom the problem was named, and the solution to the problem (at least for the ball) using the Poisson kernel was known to Dirichlet (judging by his 1850 paper submitted to the Prussian academy). Lord Kelvin and Dirichlet suggested a solution to the problem by a variational method based on the minimization of "Dirichlet's energy". According to Hans Freudenthal (in the Dictionary of Scientific Biography, vol. 11), Bernhard Riemann was the first mathematician who solved this variational problem based on a method which he called Dirichlet's principle. The existence of a unique solution is very plausible by the "physical argument": any charge distribution on the boundary should, by the laws of electrostatics, determine an electrical potential as solution. However, Karl Weierstrass found a flaw in Riemann's argument, and a rigorous proof of existence was found only in 1900 by David Hilbert, using his direct method in the calculus of variations. It turns out that the existence of a solution depends delicately on the smoothness of the boundary and the prescribed data.
General solution
For a domain $D$ having a sufficiently smooth boundary $\partial D$, the general solution to the Dirichlet problem is given by
$$u(x) = \int_{\partial D} \nu(s)\,\frac{\partial G(x,s)}{\partial n}\,ds,$$
where $G(x,s)$ is the Green's function for the partial differential equation, and
$$\frac{\partial G(x,s)}{\partial n} = \hat{n}\cdot\nabla_s G(x,s)$$
is the derivative of the Green's function along the inward-pointing unit normal vector $\hat{n}$. The integration is performed on the boundary, with measure $ds$. The function $\nu(s)$ is given by the unique solution to the Fredholm integral equation of the second kind,
$$f(x) = -\frac{\nu(x)}{2} + \int_{\partial D} \nu(s)\,\frac{\partial G(x,s)}{\partial n}\,ds.$$
The Green's function to be used in the above integral is one which vanishes on the boundary:
$$G(x,s) = 0$$
for $s \in \partial D$ and $x \in D$. Such a Green's function is usually a sum of the free-field Green's function and a harmonic solution to the differential equation.
Existence
The Dirichlet problem for harmonic functions always has a solution, and that solution is unique, when the boundary is sufficiently smooth and $f$ is continuous. More precisely, it has a solution when the boundary is of class $C^{1,\alpha}$ for some $\alpha \in (0,1)$, where $C^{1,\alpha}$ denotes the Hölder condition.
Example: the unit disk in two dimensions
In some simple cases the Dirichlet problem can be solved explicitly. For example, the solution to the Dirichlet problem for the unit disk in R2 is given by the Poisson integral formula.
If $f$ is a continuous function on the boundary $\partial D$ of the open unit disk $D$, then the solution to the Dirichlet problem is the function $u$ given by
$$u(re^{i\theta}) = \begin{cases} \frac{1}{2\pi}\int_0^{2\pi} f(e^{i\varphi})\,\frac{1-r^2}{1-2r\cos(\theta-\varphi)+r^2}\,d\varphi & \text{if } r < 1, \\ f(e^{i\theta}) & \text{if } r = 1. \end{cases}$$
The solution $u$ is continuous on the closed unit disk $\bar{D}$ and harmonic on $D$.
The integrand is known as the Poisson kernel; this solution follows from the Green's function in two dimensions:
$$G(z,x) = -\frac{1}{2\pi}\log|z-x| + \gamma(z,x),$$
where $\gamma(z,x)$ is harmonic ($\Delta_x \gamma(z,x) = 0$) and chosen such that $G(z,x) = 0$ for $x \in \partial D$.
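As a concrete check of the formula, the Poisson integral can be evaluated numerically. The following Python sketch (an illustration only; the function name and the simple equally spaced discretization of the boundary integral are assumptions, not part of the article) approximates the solution at an interior point of the unit disk and compares it with a known harmonic extension:

```python
import math

def poisson_solution(f, r, theta, n=2000):
    """Approximate u(r e^{i theta}) for the Dirichlet problem on the unit disk with
    boundary data f(phi), using the Poisson integral
    u = (1/2pi) * integral over [0, 2pi] of f(phi) (1 - r^2) / (1 - 2 r cos(theta - phi) + r^2) dphi."""
    if r >= 1.0:
        return f(theta)  # on the boundary the prescribed values are taken directly
    total = 0.0
    for k in range(n):
        phi = 2.0 * math.pi * k / n
        kernel = (1.0 - r * r) / (1.0 - 2.0 * r * math.cos(theta - phi) + r * r)
        total += f(phi) * kernel
    # The factor 1/(2 pi) and the step dphi = 2 pi / n combine to give 1/n.
    return total / n

# Boundary data f(phi) = cos(phi); its exact harmonic extension is u(r, theta) = r cos(theta).
print(poisson_solution(math.cos, r=0.5, theta=1.0))
print(0.5 * math.cos(1.0))  # the two printed values should agree closely
```

Because the boundary data here is smooth and periodic, the equally spaced quadrature converges very quickly, and the two printed values should agree to many decimal places.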
Methods of solution
For bounded domains, the Dirichlet problem can be solved using the Perron method, which relies on the maximum principle for subharmonic functions. This approach is described in many textbooks. It is not well suited to describing the smoothness of solutions when the boundary is smooth. Another classical Hilbert space approach, through Sobolev spaces, does yield such information. The solution of the Dirichlet problem using Sobolev spaces for planar domains can be used to prove the smooth version of the Riemann mapping theorem. A different approach for establishing the smooth Riemann mapping theorem, based on the reproducing kernels of Szegő and Bergman, has also been outlined and in turn used to solve the Dirichlet problem. The classical methods of potential theory allow the Dirichlet problem to be solved directly in terms of integral operators, for which the standard theory of compact and Fredholm operators is applicable. The same methods work equally for the Neumann problem.
Generalizations
Dirichlet problems are typical of elliptic partial differential equations and potential theory, and of the Laplace equation in particular. Other examples include the biharmonic equation and related equations in elasticity theory.
They are one of several classes of PDE problems defined by the information given at the boundary, which also include Neumann problems and Cauchy problems.
Example: equation of a finite string attached to one moving wall
Consider the Dirichlet problem for the wave equation describing a string attached between walls, with one end attached permanently and the other moving with the constant velocity $v$, i.e. the d'Alembert equation on the triangular region given by the Cartesian product of the space and the time coordinates:
$$\frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2} = 0, \qquad u(0,t) = 0, \qquad u(vt,t) = 0.$$
As one can easily check by substitution, the solution fulfilling the first boundary condition is
$$u(x,t) = f(t-x) - f(t+x).$$
Additionally we want
$$f(t-vt) - f(t+vt) = 0.$$
Substituting
$$\tau = (1+v)\,t,$$
we get the condition of self-similarity
$$f(\lambda\tau) = f(\tau),$$
where
$$\lambda = \frac{1-v}{1+v}.$$
It is fulfilled, for example, by the composite function $\sin[\log\tau]$ whenever $\log\lambda$ is an integer multiple of $2\pi$, since then
$$\sin[\log(\lambda\tau)] = \sin[\log\tau + \log\lambda] = \sin[\log\tau];$$
thus in general
$$f(\tau) = \psi[\log\tau],$$
where $\psi$ is a periodic function with period $\log\lambda$:
$$\psi[\sigma + \log\lambda] = \psi(\sigma),$$
and we get the general solution
$$u(x,t) = \psi[\log(t-x)] - \psi[\log(t+x)].$$
| Mathematics | Differential equations | null |
571215 | https://en.wikipedia.org/wiki/Serenoa | Serenoa | Serenoa repens, commonly known as saw palmetto, is a small palm that rarely exceeds a few metres in height.
Taxonomy
It is the sole species in the genus Serenoa. The genus name honors American botanist Sereno Watson.
Distribution and habitat
It is endemic to the subtropical and tropical Southeastern United States as well as Mexico, most commonly along the south Atlantic and Gulf Coastal plains and sand hills. It grows in clumps or dense thickets in sandy coastal areas, and as undergrowth in pine woods or hardwood hammocks.
Description
Erect stems or trunks are rarely produced, but are found in some populations. It is a hardy plant; extremely slow-growing, and long-lived, with some plants (especially in Florida) possibly being as old as 500–700 years.
Saw palmetto is a fan palm, with leaves that have a bare petiole terminating in a rounded fan of about 20 leaflets. The petiole is armed with fine, sharp teeth or spines that give the species its common name. The teeth or spines are easily capable of breaking the skin, and protection should be worn when working around a saw palmetto. The leaves are light green inland, and silvery-white in coastal regions. The leaves are 1–2 m in length, the leaflets 50–100 cm long. They are similar to the leaves of the palmettos of the genus Sabal. The flowers are yellowish-white, about 5 mm across, produced in dense compound panicles up to 60 cm long.
Ecology
The fruit is a large reddish-black drupe and is an important food source for wildlife and historically for humans. The plant is used as a food plant by the larvae of some Lepidoptera species such as Batrachedra decoctor, which feeds exclusively on the plant.
Medical research
Saw palmetto extract has been studied as a possible treatment for people with prostate cancer and for men with lower urinary tract symptoms associated with benign prostatic hyperplasia (BPH). As of 2023, there is insufficient scientific evidence that saw palmetto extract is effective for treating cancer or BPH and its symptoms.
One 2016 review of clinical studies with a standardized extract of saw palmetto (called Permixon) found that the extract was safe and may be effective for relieving BPH-induced urinary symptoms compared against a placebo.
Ethnobotany
Indigenous names are reported from several languages, including Choctaw (where one name means "palmetto's uncle"), Timucua, Koasati, Alabama, Creek, and Mikasuki (in the last three the name means "big palm"), and possibly Taíno. Saw palmetto fibers have been found among materials from indigenous people as far north as Wisconsin and New York, strongly suggesting this material was widely traded prior to European contact. The leaves are used for thatching by several indigenous groups, so commonly that a location in Alachua County, Florida, is named Kanapaha ("palm house"). The fruit may have been used to treat an unclear form of fish poisoning by the Seminoles and Lucayans.
| Biology and health sciences | Arecales (inc. Palms) | Plants |
571274 | https://en.wikipedia.org/wiki/Drug%20discovery | Drug discovery | In the fields of medicine, biotechnology, and pharmacology, drug discovery is the process by which new candidate medications are discovered.
Historically, drugs were discovered by identifying the active ingredient from traditional remedies or by serendipitous discovery, as with penicillin. More recently, chemical libraries of synthetic small molecules, natural products, or extracts were screened in intact cells or whole organisms to identify substances that had a desirable therapeutic effect, in a process known as classical pharmacology. After sequencing of the human genome allowed rapid cloning and synthesis of large quantities of purified proteins, it has become common practice to use high-throughput screening of large compound libraries against isolated biological targets which are hypothesized to be disease-modifying, in a process known as reverse pharmacology. Hits from these screens are then tested in cells and then in animals for efficacy.
Modern drug discovery involves the identification of screening hits, medicinal chemistry, and optimization of those hits to increase the affinity, selectivity (to reduce the potential of side effects), efficacy/potency, metabolic stability (to increase the half-life), and oral bioavailability. Once a compound that fulfills all of these requirements has been identified, the process of drug development can continue. If successful, clinical trials are developed.
Modern drug discovery is thus usually a capital-intensive process that involves large investments by pharmaceutical industry corporations as well as national governments (who provide grants and loan guarantees). Despite advances in technology and understanding of biological systems, drug discovery is still a lengthy, "expensive, difficult, and inefficient process" with low rate of new therapeutic discovery. In 2010, the research and development cost of each new molecular entity was about US$1.8 billion. In the 21st century, basic discovery research is funded primarily by governments and by philanthropic organizations, while late-stage development is funded primarily by pharmaceutical companies or venture capitalists. To be allowed to come to market, drugs must undergo several successful phases of clinical trials, and pass through a new drug approval process, called the New Drug Application in the United States.
Discovering drugs that may be a commercial success, or a public health success, involves a complex interaction between investors, industry, academia, patent laws, regulatory exclusivity, marketing, and the need to balance secrecy with communication. Meanwhile, for disorders whose rarity means that no large commercial success or public health effect can be expected, the orphan drug funding process ensures that people who experience those disorders can have some hope of pharmacotherapeutic advances.
History
The idea that the effect of a drug in the human body is mediated by specific interactions of the drug molecule with biological macromolecules (proteins or nucleic acids in most cases) led scientists to the conclusion that individual chemicals are required for the biological activity of the drug. This marked the beginning of the modern era in pharmacology, as pure chemicals, instead of crude extracts of medicinal plants, became the standard drugs. Examples of drug compounds isolated from crude preparations are morphine, the active agent in opium, and digoxin, a heart stimulant originating from Digitalis lanata. Organic chemistry also led to the synthesis of many of the natural products isolated from biological sources.
Historically, substances, whether crude extracts or purified chemicals, were screened for biological activity without knowledge of the biological target. Only after an active substance was identified was an effort made to identify the target. This approach is known as classical pharmacology, forward pharmacology, or phenotypic drug discovery.
Later, small molecules were synthesized to specifically target a known physiological/pathological pathway, avoiding the mass screening of banks of stored compounds. This led to great success, such as the work of Gertrude Elion and George H. Hitchings on purine metabolism, the work of James Black on beta blockers and cimetidine, and the discovery of statins by Akira Endo. Another champion of the approach of developing chemical analogues of known active substances was Sir David Jack at Allen and Hanbury's, later Glaxo, who pioneered the first inhaled selective beta2-adrenergic agonist for asthma, the first inhaled steroid for asthma, ranitidine as a successor to cimetidine, and supported the development of the triptans.
Gertrude Elion, working mostly with a group of fewer than 50 people on purine analogues, contributed to the discovery of the first anti-viral; the first immunosuppressant (azathioprine) that allowed human organ transplantation; the first drug to induce remission of childhood leukemia; pivotal anti-cancer treatments; an anti-malarial; an anti-bacterial; and a treatment for gout.
Cloning of human proteins made possible the screening of large libraries of compounds against specific targets thought to be linked to specific diseases. This approach is known as reverse pharmacology and is the most frequently used approach today.
In the 2020s, quantum computing began to be applied to reduce the time needed for drug discovery.
Targets
A "target" is produced within the pharmaceutical industry. Generally, the "target" is the naturally existing cellular or molecular structure involved in the pathology of interest where the drug-in-development is meant to act. However, the distinction between a "new" and "established" target can be made without a full understanding of just what a "target" is. This distinction is typically made by pharmaceutical companies engaged in the discovery and development of therapeutics. In an estimate from 2011, 435 human genome products were identified as therapeutic drug targets of FDA-approved drugs.
"Established targets" are those for which there is a good scientific understanding, supported by a lengthy publication history, of both how the target functions in normal physiology and how it is involved in human pathology. This does not imply that the mechanism of action of drugs that are thought to act through a particular established target is fully understood. Rather, "established" relates directly to the amount of background information available on a target, in particular functional information. In general, "new targets" are all those targets that are not "established targets" but which have been or are the subject of drug discovery efforts. The majority of targets selected for drug discovery efforts are proteins, such as G-protein-coupled receptors (GPCRs) and protein kinases.
Screening and design
The process of finding a new drug against a chosen target for a particular disease usually involves high-throughput screening (HTS), wherein large libraries of chemicals are tested for their ability to modify the target. For example, if the target is a novel GPCR, compounds will be screened for their ability to inhibit or stimulate that receptor (see antagonist and agonist): if the target is a protein kinase, the chemicals will be tested for their ability to inhibit that kinase.
Another function of HTS is to show how selective the compounds are for the chosen target, as one wants to find a molecule which will interfere with only the chosen target, but not other, related targets. To this end, other screening runs will be made to see whether the "hits" against the chosen target will interfere with other related targets – this is the process of cross-screening. Cross-screening is useful because the more unrelated targets a compound hits, the more likely that off-target toxicity will occur with that compound once it reaches the clinic.
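As a rough, hypothetical illustration of how cross-screening results might be summarized (the compound names, off-target names, and IC50 values below are invented for the example), selectivity is often expressed as the fold-difference between a compound's potency on the chosen target and on its closest off-target:

# Toy selectivity calculation from hypothetical cross-screening IC50 data (in nM).
# A larger ratio (off-target IC50 / on-target IC50) suggests better selectivity.
hits = {
    "compound_A": {"on_target": 12.0, "kinase_X": 900.0, "kinase_Y": 4500.0},
    "compound_B": {"on_target": 8.0, "kinase_X": 15.0, "kinase_Y": 30.0},
}

for name, ic50 in hits.items():
    on = ic50["on_target"]
    worst_off = min(v for k, v in ic50.items() if k != "on_target")
    selectivity = worst_off / on  # fold-selectivity versus the closest off-target
    print(f"{name}: {selectivity:.1f}-fold selective")

In this toy summary, compound_A would be carried forward and compound_B flagged for likely off-target liabilities.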
It is unlikely that a perfect drug candidate will emerge from these early screening runs. One of the first steps is to screen for compounds that are unlikely to be developed into drugs; for example compounds that are hits in almost every assay, classified by medicinal chemists as "pan-assay interference compounds", are removed at this stage, if they were not already removed from the chemical library. It is often observed that several compounds are found to have some degree of activity, and if these compounds share common chemical features, one or more pharmacophores can then be developed. At this point, medicinal chemists will attempt to use structure–activity relationships (SAR) to improve certain features of the lead compound:
increase activity against the chosen target
reduce activity against unrelated targets
improve the druglikeness or ADME properties of the molecule.
This process will require several iterative screening runs, during which, it is hoped, the properties of the new molecular entities will improve, and allow the favoured compounds to go forward to in vitro and in vivo testing for activity in the disease model of choice.
The physicochemical properties associated with drug absorption include ionization (pKa) and solubility; permeability can be determined by assays such as PAMPA and Caco-2. PAMPA is attractive as an early screen because it consumes little compound and is cheaper than tests such as Caco-2, gastrointestinal tract (GIT), and blood–brain barrier (BBB) models, with which it correlates well.
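For instance, the contribution of ionization can be estimated with the Henderson–Hasselbalch relationship; the short sketch below applies it to a hypothetical monoprotic weak acid (the pKa and pH values are illustrative only):

def fraction_ionized_acid(pka: float, ph: float) -> float:
    """Fraction of a monoprotic weak acid in its ionized (deprotonated) form,
    from the Henderson–Hasselbalch equation: pH = pKa + log10([A-]/[HA])."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

# Hypothetical acidic compound (pKa 4.5) in plasma (pH 7.4) versus the stomach (pH 2.0).
for ph in (7.4, 2.0):
    print(f"pH {ph}: {fraction_ionized_acid(4.5, ph):.1%} ionized")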
A range of parameters can be used to assess the quality of a compound, or a series of compounds, as proposed in Lipinski's Rule of Five. Such parameters include calculated properties, such as cLogP (an estimate of lipophilicity), molecular weight, and polar surface area, as well as measured properties, such as potency and in-vitro enzymatic clearance. Some descriptors, such as ligand efficiency (LE) and lipophilic efficiency (LiPE), combine such parameters to assess druglikeness.
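As a minimal sketch (the property values below are hypothetical and would normally come from cheminformatics software), a Rule of Five check and the LE and LiPE descriptors mentioned above could be computed as follows, using the common approximations LE ~ 1.37 x pIC50 / heavy-atom count and LiPE = pIC50 - cLogP:

def rule_of_five_violations(mol_weight, clogp, h_donors, h_acceptors):
    """Count Lipinski Rule of Five violations for a compound."""
    return sum([
        mol_weight > 500,
        clogp > 5,
        h_donors > 5,
        h_acceptors > 10,
    ])

def ligand_efficiency(pic50, heavy_atoms):
    # LE ~ 1.37 * pIC50 / heavy atoms (kcal/mol per heavy atom), a common approximation.
    return 1.37 * pic50 / heavy_atoms

def lipophilic_efficiency(pic50, clogp):
    return pic50 - clogp  # LiPE (also written LLE)

# Hypothetical lead compound.
print(rule_of_five_violations(mol_weight=420, clogp=3.2, h_donors=2, h_acceptors=6))
print(f"LE   = {ligand_efficiency(pic50=7.5, heavy_atoms=30):.2f}")
print(f"LiPE = {lipophilic_efficiency(pic50=7.5, clogp=3.2):.1f}")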
While HTS is a commonly used method for novel drug discovery, it is not the only method. It is often possible to start from a molecule which already has some of the desired properties. Such a molecule might be extracted from a natural product or even be a drug on the market which could be improved upon (so-called "me too" drugs). Other methods, such as virtual high throughput screening, where screening is done using computer-generated models and attempting to "dock" virtual libraries to a target, are also often used.
Another method for drug discovery is de novo drug design, in which a prediction is made of the sorts of chemicals that might (e.g.) fit into an active site of the target enzyme. For example, virtual screening and computer-aided drug design are often used to identify new chemical moieties that may interact with a target protein. Molecular modelling and molecular dynamics simulations can be used as a guide to improve the potency and properties of new drug leads.
There has also been a paradigm shift in the drug discovery community away from HTS, which is expensive and may cover only limited chemical space, toward the screening of smaller libraries (at most a few thousand compounds). These include fragment-based lead discovery (FBDD) and protein-directed dynamic combinatorial chemistry. The ligands in these approaches are usually much smaller, and they bind to the target protein with weaker binding affinity than hits identified from HTS. Further modification through organic synthesis into lead compounds is often required. Such modifications are often guided by protein X-ray crystallography of the protein–fragment complex. The advantages of these approaches are that they allow more efficient screening and that the compound library, although small, typically covers a larger chemical space than an HTS library.
Phenotypic screens have also provided new chemical starting points in drug discovery. A variety of models have been used including yeast, zebrafish, worms, immortalized cell lines, primary cell lines, patient-derived cell lines and whole animal models. These screens are designed to find compounds which reverse a disease phenotype such as death, protein aggregation, mutant protein expression, or cell proliferation as examples in a more holistic cell model or organism. Smaller screening sets are often used for these screens, especially when the models are expensive or time-consuming to run. In many cases, the exact mechanism of action of hits from these screens is unknown and may require extensive target deconvolution experiments to ascertain. The growth of the field of chemoproteomics has provided numerous strategies to identify drug targets in these cases.
Once a lead compound series has been established with sufficient target potency and selectivity and favourable drug-like properties, one or two compounds will then be proposed for drug development. The best of these is generally called the lead compound, while the other will be designated as the "backup". These decisions are generally supported by computational modelling innovations.
Nature as source
Traditionally, many drugs and other chemicals with biological activity have been discovered by studying chemicals that organisms create to affect the activity of other organisms for survival.
Despite the rise of combinatorial chemistry as an integral part of the lead discovery process, natural products still play a major role as starting material for drug discovery. A 2007 report found that of the 974 small-molecule new chemical entities developed between 1981 and 2006, 63% were naturally derived or semisynthetic derivatives of natural products. For certain therapy areas, such as antimicrobials, antineoplastics, antihypertensives, and anti-inflammatory drugs, the numbers were higher.
Natural products may be useful as a source of novel chemical structures for modern techniques of development of antibacterial therapies.
Plant-derived
Many secondary metabolites produced by plants have potential therapeutic medicinal properties. These secondary metabolites can bind to and modify the function of proteins (receptors, enzymes, etc.). Consequently, plant-derived natural products have often been used as the starting point for drug discovery.
History
Until the Renaissance, the vast majority of drugs in Western medicine were plant-derived extracts. This has resulted in a pool of information about the potential of plant species as important sources of starting materials for drug discovery. Botanical knowledge about different metabolites and hormones that are produced in different anatomical parts of the plant (e.g. roots, leaves, and flowers) are crucial for correctly identifying bioactive and pharmacological plant properties. Identifying new drugs and getting them approved for market has proved to be a stringent process due to regulations set by national drug regulatory agencies.
Jasmonates
Jasmonates are important in responses to injury and in intracellular signalling. They induce apoptosis and protein cascades via proteinase inhibitors, have defense functions, and regulate plant responses to different biotic and abiotic stresses. Jasmonates also have the ability to act directly on mitochondrial membranes by inducing membrane depolarization via the release of metabolites.
Jasmonate derivatives (JADs) are also important in wound response and tissue regeneration in plant cells. They have also been identified as having anti-aging effects on the human epidermal layer. They are suspected to interact with proteoglycans (PGs) and glycosaminoglycan (GAG) polysaccharides, essential extracellular matrix (ECM) components, to help remodel the ECM. The discovery of JAD effects on skin repair has introduced newfound interest in the use of these plant hormones in therapeutic medicinal applications.
Salicylates
Salicylic acid (SA), a phytohormone, was initially derived from willow bark and has since been identified in many species. It is an important player in plant immunity, although its role is still not fully understood by scientists. Salicylates are involved in disease and immunity responses in plant and animal tissues, and salicylic acid binding proteins (SABPs) have been shown to affect multiple animal tissues. The first medicinal properties discovered for the isolated compound were in pain and fever management. Salicylates also play an active role in the suppression of cell proliferation and can induce death in lymphoblastic leukemia and other human cancer cells. One of the most common drugs derived from salicylates is aspirin, also known as acetylsalicylic acid, which has anti-inflammatory and anti-pyretic properties.
Animal-derived
Some drugs used in modern medicine have been discovered in animals or are based on compounds found in animals. For example, the anticoagulant drugs, hirudin and its synthetic congener, bivalirudin, are based on saliva chemistry of the leech, Hirudo medicinalis. Used to treat type 2 diabetes, exenatide was developed from saliva compounds of the Gila monster, a venomous lizard.
Microbial metabolites
Microbes compete for living space and nutrients. To survive in these conditions, many microbes have developed abilities to prevent competing species from proliferating. Microbes are the main source of antimicrobial drugs. Streptomyces isolates have been such a valuable source of antibiotics that they have been called medicinal molds. The classic example of an antibiotic discovered as a defense mechanism against another microbe is penicillin, found in 1928 in bacterial cultures contaminated by Penicillium fungi.
Marine invertebrates
Marine environments are potential sources for new bioactive agents. Arabinose nucleosides, discovered in marine invertebrates in the 1950s, demonstrated for the first time that sugar moieties other than ribose and deoxyribose can yield bioactive nucleoside structures. The first marine-derived drug was not approved until 2004: the cone snail toxin ziconotide, also known as Prialt, which treats severe neuropathic pain. Several other marine-derived agents are now in clinical trials for indications such as cancer, anti-inflammatory use, and pain. One class of these agents is the bryostatin-like compounds, under investigation as anti-cancer therapy.
Chemical diversity
As mentioned above, combinatorial chemistry was a key technology enabling the efficient generation of large screening libraries for the needs of high-throughput screening. However, after two decades of combinatorial chemistry, it has been pointed out that despite the increased efficiency in chemical synthesis, no corresponding increase in lead or drug candidates has been achieved. This has led to analysis of the chemical characteristics of combinatorial chemistry products compared to existing drugs or natural products. The chemoinformatics concept of chemical diversity, depicted as the distribution of compounds in chemical space based on their physicochemical characteristics, is often used to describe the difference between combinatorial chemistry libraries and natural products. Synthetic, combinatorial library compounds seem to cover only a limited and quite uniform chemical space, whereas existing drugs, and particularly natural products, exhibit much greater chemical diversity, distributing more evenly across chemical space. The most prominent differences between natural products and compounds in combinatorial chemistry libraries are the number of chiral centers (much higher in natural compounds), structural rigidity (higher in natural compounds), and the number of aromatic moieties (higher in combinatorial chemistry libraries). Other chemical differences between these two groups include the nature of heteroatoms (O and N enriched in natural products, S and halogen atoms more often present in synthetic compounds), as well as the level of non-aromatic unsaturation (higher in natural products). As both structural rigidity and chirality are well-established factors in medicinal chemistry known to enhance the specificity and efficacy of a compound as a drug, it has been suggested that natural products compare favourably to today's combinatorial chemistry libraries as potential lead molecules.
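One simple way to quantify the chemical diversity of a library, as discussed above, is to compare pairwise structural fingerprints; the sketch below computes the Tanimoto coefficient on hypothetical bit-set fingerprints (a real workflow would generate fingerprints with a cheminformatics toolkit rather than hand-written sets):

from itertools import combinations

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity between two fingerprints given as sets of 'on' bits."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Hypothetical fingerprints: each compound is a set of feature-bit indices.
library = {
    "cmpd_1": {1, 4, 7, 9},
    "cmpd_2": {1, 4, 8, 11},
    "cmpd_3": {2, 5, 13, 17},
}

# Mean pairwise similarity: lower values indicate a more diverse library.
sims = [tanimoto(library[a], library[b]) for a, b in combinations(library, 2)]
print(f"mean pairwise Tanimoto = {sum(sims) / len(sims):.2f}")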
Screening
Two main approaches exist for the finding of new bioactive chemical entities from natural sources.
The first is sometimes referred to as random collection and screening of material, but the collection is far from random. Biological (often botanical) knowledge is often used to identify families that show promise. This approach is effective because only a small part of the earth's biodiversity has ever been tested for pharmaceutical activity. Also, organisms living in a species-rich environment need to evolve defensive and competitive mechanisms to survive. Those mechanisms might be exploited in the development of beneficial drugs.
A collection of plant, animal and microbial samples from rich ecosystems can potentially give rise to novel biological activities worth exploiting in the drug development process. One example of successful use of this strategy is the screening for antitumor agents by the National Cancer Institute, which started in the 1960s. Paclitaxel was identified from the Pacific yew tree Taxus brevifolia. Paclitaxel showed anti-tumour activity by a previously undescribed mechanism (stabilization of microtubules) and is now approved for clinical use for the treatment of lung, breast, and ovarian cancer, as well as for Kaposi's sarcoma. Early in the 21st century, cabazitaxel (made by Sanofi, a French firm), another relative of taxol, was shown to be effective against prostate cancer; it too acts on the microtubules that pull the chromosomes apart in dividing cells (such as cancer cells). Other examples include: Camptotheca (camptothecin, topotecan, irinotecan, rubitecan, belotecan); Podophyllum (etoposide, teniposide); the anthracyclines (aclarubicin, daunorubicin, doxorubicin, epirubicin, idarubicin, amrubicin, pirarubicin, valrubicin, zorubicin); and the anthracenediones (mitoxantrone, pixantrone).
The second main approach involves ethnobotany, the study of the general use of plants in society, and ethnopharmacology, an area inside ethnobotany, which is focused specifically on medicinal uses.
Artemisinin, an antimalarial agent from the sweet wormwood Artemisia annua, a plant used in Chinese medicine since 200 BC, is one drug used as part of combination therapy for multiresistant Plasmodium falciparum.
Additionally, as machine learning has become more advanced, virtual screening is now an option for drug developers. AI algorithms are being used to perform virtual screening of chemical compounds, which involves predicting the activity of a compound against a specific target. By using machine learning algorithms to analyse large amounts of chemical data, researchers can identify potential new drug candidates that are more likely to be effective against a specific disease. Algorithms such as nearest-neighbour classifiers, random forests (RF), extreme learning machines, support vector machines (SVMs), and deep neural networks (DNNs) are used for virtual screening based on synthesis feasibility and can also predict in vivo activity and toxicity.
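A minimal sketch of this kind of model-based virtual screening, assuming hypothetical precomputed fingerprint features and activity labels (a real pipeline would use curated assay data and domain-specific descriptors), might look like the following:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: 200 compounds x 64 fingerprint bits, binary activity labels.
X_train = rng.integers(0, 2, size=(200, 64))
y_train = rng.integers(0, 2, size=200)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score an (equally hypothetical) virtual library and rank by predicted activity.
X_library = rng.integers(0, 2, size=(1000, 64))
scores = model.predict_proba(X_library)[:, 1]
top_hits = np.argsort(scores)[::-1][:10]
print("indices of top-ranked virtual hits:", top_hits)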
Structural elucidation
The elucidation of the chemical structure is critical to avoid the re-discovery of a chemical agent that is already known for its structure and chemical activity. Mass spectrometry is a method in which individual compounds are identified based on their mass/charge ratio, after ionization. Chemical compounds exist in nature as mixtures, so the combination of liquid chromatography and mass spectrometry (LC-MS) is often used to separate the individual chemicals. Databases of mass spectra for known compounds are available and can be used to assign a structure to an unknown mass spectrum. Nuclear magnetic resonance spectroscopy is the primary technique for determining chemical structures of natural products. NMR yields information about individual hydrogen and carbon atoms in the structure, allowing detailed reconstruction of the molecule's architecture.
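The dereplication step described above (checking whether an observed mass already matches a known compound) can be sketched as a lookup against a database of monoisotopic masses within an instrument tolerance in parts per million; the compound names and masses below are hypothetical:

# Hypothetical database of known natural products and their monoisotopic masses (Da).
known_compounds = {
    "compound_X": 285.136,
    "compound_Y": 853.906,
    "compound_Z": 413.266,
}

def match_mass(observed_mass: float, tolerance_ppm: float = 10.0):
    """Return names of known compounds whose mass is within the ppm tolerance."""
    matches = []
    for name, mass in known_compounds.items():
        ppm_error = abs(observed_mass - mass) / mass * 1e6
        if ppm_error <= tolerance_ppm:
            matches.append(name)
    return matches

print(match_mass(285.137))   # likely a re-discovery -> deprioritize
print(match_mass(512.201))   # no match -> candidate for NMR structure elucidation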
New Drug Application
When a drug is developed with evidence throughout its history of research to show it is safe and effective for the intended use in the United States, the company can file an application – the New Drug Application (NDA) – to have the drug commercialized and available for clinical application. NDA status enables the FDA to examine all submitted data on the drug to reach a decision on whether to approve or not approve the drug candidate based on its safety, specificity of effect, and efficacy of doses.
| Biology and health sciences | General concepts_2 | Health |
571490 | https://en.wikipedia.org/wiki/Cordyceps | Cordyceps | Cordyceps is a genus of ascomycete fungi (sac fungi) that includes over 260 species worldwide, many of which are parasitic. Diverse variants of cordyceps have had more than 1,500 years of use in Chinese medicine. Most Cordyceps species are endoparasitoids, parasitic mainly on insects and other arthropods (they are thus entomopathogenic fungi); a few are parasitic on other fungi.
The generic name Cordyceps is derived from the ancient Greek κορδύλη kordýlē, meaning "club", and the Latin -ceps, meaning "-headed". The genus has a worldwide distribution, with most of the known species being from Asia.
Taxonomy
There are two recognized subgenera:
Cordyceps subgen. Cordyceps Fr. 1818
Cordyceps subgen. Cordylia Tul. & C. Tul. 1865
Cordyceps sensu stricto are the teleomorphs of several genera of anamorphic, entomopathogenic fungi such as Beauveria (Cordyceps bassiana), Septofusidium, and Lecanicillium.
Splits
Cordyceps subgen. Epichloe was at one time a subgenus, but is now regarded as a separate genus, Epichloë.
Cordyceps subgen. Ophiocordyceps was at one time a subgenus defined by morphology. Nuclear DNA sampling done in 2007 shows that members, including "C. sinensis" and "C. unilateralis", as well as some others not placed in the subgenus, were distantly related to most of the remainder of species then placed in Cordyceps (e.g. the type species C. militaris). As a result, it became its own genus, absorbing new members.
The 2007 study also peeled off Metacordyceps (anamorph Metarhizium, Pochonia) and Elaphocordyceps. A number of species remain unclearly assigned and provisionally retained in Cordyceps sensu lato.
Selected species
There are over 260 species recognised in the genus Cordyceps including the following species:
Cordyceps caespitosa
Cordyceps militaris
Cordyceps sinclairii
Biology
When Cordyceps attacks a host, the mycelium invades and eventually replaces the host tissue, while the elongated fruit body (ascocarp) may be cylindrical, branched, or of complex shape. The ascocarp bears many small, flask-shaped perithecia containing asci. These, in turn, contain thread-like ascospores, which usually break into fragments and are presumably infective.
Research
Polysaccharide components and the nucleoside cordycepin isolated from C. militaris are under basic research, but more advanced clinical research has been limited and too low in quality to identify any therapeutic potential of cordyceps components.
Uses
Along with Ophiocordyceps, Cordyceps has long been used in traditional Chinese medicine in the belief it can be used to treat diseases. There is no strong scientific evidence for such uses.
Cultural representations
The video game series The Last of Us (2013–present) and its television adaptation present Cordyceps as a deadly threat to the human race, its parasitism powerful enough to result in global calamity. The result is a zombie apocalypse and the collapse of human civilization. Scientific American notes that some species in the genus "are indeed body snatchers–they have been making real zombies for millions of years", though of ants or tarantulas, not of humans. The Last of Us proceeds from the premise that a new species of Cordyceps manages to jump between species of host, just as diseases like influenza have done. Its human hosts initially become violent "infected" beings, before turning into blind zombie "clickers", complete with fungal "fruiting bodies sprouting from their faces". In an additional detail that reflects Cordyceps biology, "clickers" then seek out a dark place in which to die and release the fungal spores, enabling the parasite to complete its life cycle. Scientific American comments that by combining a plausible mechanism with effective artistic design, the series gains "both scientific rigor and beauty".
In similar vein, Cordyceps causes a pandemic that wipes out most of humanity in Mike Carey's 2014 postapocalyptic novel The Girl with All the Gifts and its 2016 film adaptation. In this case, an infected person becomes a "hungry", a zombie thirsting for blood. In the fiction, Dr. Caldwell explains that the human-infecting fungus is a mutated form of Ophiocordyceps unilateralis (a group of species now split off from Cordyceps) which alters the behaviour of infected insects. The children of infected mothers, however, become "hybrids" with antibodies protecting them against the fungus.
Gallery
| Biology and health sciences | Edible fungi | Plants |
571549 | https://en.wikipedia.org/wiki/Muscle%20cell | Muscle cell | A muscle cell, also known as a myocyte, is a mature contractile cell in the muscle of an animal. In humans and other vertebrates there are three types: skeletal, smooth, and cardiac (cardiomyocytes). A skeletal muscle cell is long and threadlike with many nuclei and is called a muscle fiber. Muscle cells develop from embryonic precursor cells called myoblasts.
Skeletal muscle cells form by fusion of myoblasts to produce multinucleated cells (syncytia) in a process known as myogenesis. Skeletal muscle cells and cardiac muscle cells both contain myofibrils and sarcomeres and form a striated muscle tissue.
Cardiac muscle cells form the cardiac muscle in the walls of the heart chambers, and have a single central nucleus. Cardiac muscle cells are joined to neighboring cells by intercalated discs, and when joined in a visible unit they are described as a cardiac muscle fiber.
Smooth muscle cells control involuntary movements such as the peristalsis contractions in the esophagus and stomach. Smooth muscle has no myofibrils or sarcomeres and is therefore non-striated. Smooth muscle cells have a single nucleus.
Structure
The unusual microscopic anatomy of a muscle cell gave rise to its terminology. The cytoplasm in a muscle cell is termed the sarcoplasm; the smooth endoplasmic reticulum of a muscle cell is termed the sarcoplasmic reticulum; and the cell membrane in a muscle cell is termed the sarcolemma. The sarcolemma receives and conducts stimuli.
Skeletal muscle cells
Skeletal muscle cells are the individual contractile cells within a muscle and are more usually known as muscle fibers because of their longer threadlike appearance. Broadly there are two types of muscle fiber performing in muscle contraction, either as slow twitch (type I) or fast twitch (type II).
A single muscle such as the biceps brachii in a young adult human male contains around 253,000 muscle fibers. Skeletal muscle fibers are the only muscle cells that are multinucleated with the nuclei usually referred to as myonuclei. This occurs during myogenesis with the fusion of myoblasts each contributing a nucleus to the newly formed muscle cell or myotube. Fusion depends on muscle-specific proteins known as fusogens called myomaker and myomerger.
A striated muscle fiber contains myofibrils consisting of long protein chains of myofilaments. There are three types of myofilaments: thin, thick, and elastic that work together to produce a muscle contraction. The thin myofilaments are filaments of mostly actin and the thick filaments are of mostly myosin and they slide over each other to shorten the fiber length in a muscle contraction. The third type of myofilament is an elastic filament composed of titin, a very large protein.
In striations of muscle bands, myosin forms the dark filaments that make up the A band. Thin filaments of actin are the light filaments that make up the I band. The smallest contractile unit in the fiber is called the sarcomere which is a repeating unit within two Z bands. The sarcoplasm also contains glycogen which provides energy to the cell during heightened exercise, and myoglobin, the red pigment that stores oxygen until needed for muscular activity.
The sarcoplasmic reticulum, a specialized type of smooth endoplasmic reticulum, forms a network around each myofibril of the muscle fiber. This network is composed of groupings of two dilated end-sacs called terminal cisternae, and a single T-tubule (transverse tubule), which bores through the cell and emerges on the other side; together these three components form the triads that exist within the network of the sarcoplasmic reticulum, in which each T-tubule is flanked by two terminal cisternae, one on each side of it. The sarcoplasmic reticulum serves as a reservoir for calcium ions, so when an action potential spreads over the T-tubule, it signals the sarcoplasmic reticulum to release calcium ions from the gated membrane channels to stimulate muscle contraction.
In skeletal muscle, at the end of each muscle fiber, the outer layer of the sarcolemma combines with tendon fibers at the myotendinous junction. Within the muscle fiber, pressed against the sarcolemma, are multiple flattened nuclei; embryologically, this multinucleate condition results from multiple myoblasts fusing to produce each muscle fiber, where each myoblast contributes one nucleus.
Cardiac muscle cells
The cell membrane of a cardiac muscle cell has several specialized regions, which may include the intercalated disc, and transverse tubules. The cell membrane is covered by a lamina coat which is approximately 50 nm wide. The laminar coat is separable into two layers; the lamina densa and lamina lucida. In between these two layers can be several different types of ions, including calcium.
Cardiac muscle, like skeletal muscle, is also striated, and the cells contain myofibrils, myofilaments, and sarcomeres, as does the skeletal muscle cell.
The cell membrane is anchored to the cell's cytoskeleton by anchor fibers that are approximately 10 nm wide. These are generally located at the Z lines, where they form grooves from which the transverse tubules emanate. In cardiac myocytes, this forms a scalloped surface.
The cytoskeleton is the framework on which the rest of the cell is built and has two primary purposes: the first is to stabilize the topography of the intracellular components, and the second is to help control the size and shape of the cell. While the first function is important for biochemical processes, the latter is crucial in defining the surface-to-volume ratio of the cell. This heavily influences the potential electrical properties of excitable cells. Additionally, deviation from the standard shape and size of the cell can have a negative prognostic impact.
Smooth muscle cells
Smooth muscle cells are so called because they have neither myofibrils nor sarcomeres and therefore no striations. They are found in the walls of hollow organs, including the stomach, intestines, bladder and uterus, in the walls of blood vessels, and in the tracts of the respiratory, urinary, and reproductive systems. In the eyes, smooth muscle of the iris constricts and dilates the pupil, while the ciliary muscle alters the shape of the lens. In the skin, smooth muscle cells such as those of the arrector pili cause hair to stand erect in response to cold temperature or fear.
Smooth muscle cells are spindle-shaped with wide middles, and tapering ends. They have a single nucleus and range from 30 to 200 micrometers in length. This is thousands of times shorter than skeletal muscle fibers. The diameter of their cells is also much smaller which removes the need for T-tubules found in striated muscle cells. Although smooth muscle cells lack sarcomeres and myofibrils they do contain large amounts of the contractile proteins actin and myosin. Actin filaments are anchored by dense bodies (similar to the Z discs in sarcomeres) to the sarcolemma.
Development
A myoblast is an embryonic precursor cell that differentiates to give rise to the different muscle cell types. Differentiation is regulated by myogenic regulatory factors, including MyoD, Myf5, myogenin, and MRF4. GATA4 and GATA6 also play a role in myocyte differentiation.
Skeletal muscle fibers are made when myoblasts fuse together; muscle fibers therefore are cells with multiple nuclei, known as myonuclei, with each cell nucleus originating from a single myoblast. The fusion of myoblasts is specific to skeletal muscle, and not cardiac muscle or smooth muscle.
Myoblasts in skeletal muscle that do not form muscle fibers dedifferentiate back into myosatellite cells. These satellite cells remain adjacent to a skeletal muscle fiber, situated between the sarcolemma and the basement membrane of the endomysium (the connective tissue investment that divides the muscle fascicles into individual fibers). To re-activate myogenesis, the satellite cells must be stimulated to differentiate into new fibers.
Myoblasts and their derivatives, including satellite cells, can now be generated in vitro through directed differentiation of pluripotent stem cells.
Kindlin-2 plays a role in developmental elongation during myogenesis.
Function
Muscle contraction in striated muscle
Skeletal muscle contraction
When contracting, thin and thick filaments slide past each other, using adenosine triphosphate (ATP). This pulls the Z discs closer together in a process called the sliding filament mechanism. The contraction of all the sarcomeres results in the contraction of the whole muscle fiber. This contraction of the myocyte is triggered by an action potential over the cell membrane of the myocyte. The action potential travels to the interior of the myocyte along transverse tubules, which are continuous with the cell membrane.
Sarcoplasmic reticula are membranous bags that transverse tubules touch but remain separate from. These wrap themselves around each sarcomere and are filled with Ca2+.
Excitation of a myocyte causes depolarization at its synapses, the neuromuscular junctions, which triggers an action potential. With a single neuromuscular junction, each muscle fiber receives input from just one somatic efferent neuron. An action potential in a somatic efferent neuron causes the release of the neurotransmitter acetylcholine.
When the acetylcholine is released it diffuses across the synapse and binds to a receptor on the sarcolemma, a term unique to muscle cells that refers to the cell membrane. This initiates an impulse that travels across the sarcolemma.
When the action potential reaches the sarcoplasmic reticulum, it triggers the release of Ca2+ from the Ca2+ channels. The Ca2+ flows from the sarcoplasmic reticulum into the sarcomere, reaching both of its filaments. This causes the filaments to start sliding and the sarcomeres to become shorter. This requires a large amount of ATP, as it is used in both the attachment and release of every myosin head. Very quickly, Ca2+ is actively transported back into the sarcoplasmic reticulum, which blocks the interaction between the thin and thick filaments. This in turn causes the muscle cell to relax.
There are four main types of muscle contraction: isometric, isotonic, eccentric and concentric. Isometric contractions are skeletal muscle contractions that do not cause movement of the muscle, while isotonic contractions are skeletal muscle contractions that do cause movement. Eccentric contraction is when a muscle lengthens while under load, and concentric contraction is when a muscle shortens and generates force.
Cardiac muscle contraction
Specialized cardiomyocytes in the sinoatrial node generate electrical impulses that control the heart rate. These electrical impulses coordinate contraction throughout the remaining heart muscle via the electrical conduction system of the heart. Sinoatrial node activity is modulated, in turn, by nerve fibers of both the sympathetic and parasympathetic nervous systems. These systems act to increase and decrease, respectively, the rate of production of electrical impulses by the sinoatrial node.
Evolution
The evolutionary origin of muscle cells in animals is highly debated: One view is that muscle cells evolved once, and thus all muscle cells have a single common ancestor. Another view is that muscle cells evolved more than once, and any morphological or structural similarities are due to convergent evolution and to shared genes that predate the evolution of muscle – even the mesoderm (the germ layer that gives rise to muscle cells in vertebrates).
Schmid & Seipel (2005) argue that the origin of muscle cells is a monophyletic trait that occurred concurrently with the development of the digestive and nervous systems of all animals, and that this origin can be traced to a single metazoan ancestor in which muscle cells are present. They argue that molecular and morphological similarities between the muscles cells in Cnidaria and Ctenophora are similar enough to those of bilaterians that there would be one ancestor in metazoans from which muscle cells derive. In this case, Schmid & Seipel argue that the last common ancestor of Bilateria, Ctenophora and Cnidaria, was a triploblast (an organism having three germ layers), and that diploblasty, meaning an organism with two germ layers, evolved secondarily, because of their observation of the lack of mesoderm or muscle found in most cnidarians and ctenophores. By comparing the morphology of cnidarians and ctenophores to bilaterians, Schmid & Seipel were able to conclude that there were myoblast-like structures in the tentacles and gut of some species of cnidarians and the tentacles of ctenophores. Since this is a structure unique to muscle cells, these scientists determined based on the data collected by their peers that this is a marker for striated muscles similar to that observed in bilaterians. The authors also remark that the muscle cells found in cnidarians and ctenophores are often contested due to the origin of these muscle cells being the ectoderm rather than the mesoderm or mesendoderm.
The origin of true muscle cells is argued by other authors to be the endoderm portion of the mesoderm and the endoderm. However, Schmid & Seipel (2005) counter skepticism – about whether the muscle cells found in ctenophores and cnidarians are "true" muscle cells – by considering that cnidarians develop through a medusa stage and polyp stage. They note that in the hydrozoans' medusa stage, there is a layer of cells that separate from the distal side of the ectoderm, which forms the striated muscle cells in a way similar to that of the mesoderm; they call this third separated layer of cells the ectocodon. Schmid & Seipel argue that, even in bilaterians, not all muscle cells are derived from the mesendoderm: Their key examples are that in both the eye muscles of vertebrates and the muscles of spiralians, these cells derive from the ectodermal mesoderm, rather than the endodermal mesoderm. Furthermore, they argue that since myogenesis does occur in cnidarians with the help of the same molecular regulatory elements found in the specification of muscle cells in bilaterians, that there is evidence for a single origin for striated muscle.
In contrast to this argument for a single origin of muscle cells, Steinmetz, Kraus, et al. (2012) argue that molecular markers such as the myosin II protein used to determine this single origin of striated muscle predate the formation of muscle cells. They use the example of the contractile elements present in the Porifera, or sponges, which truly lack this striated muscle containing this protein. Furthermore, Steinmetz, Kraus, et al. present evidence for a polyphyletic origin of striated muscle cell development through their analysis of morphological and molecular markers that are present in bilaterians and absent in cnidarians and ctenophores. Steinmetz, Kraus, et al. showed that traditional morphological and regulatory markers such as actin, the ability to couple myosin side-chain phosphorylation to higher concentrations of calcium, and other MyHC elements are present in all metazoans, not just the organisms that have been shown to have muscle cells. Thus, the usage of any of these structural or regulatory elements in determining whether or not the muscle cells of the cnidarians and ctenophores are similar enough to the muscle cells of the bilaterians to confirm a single lineage is questionable according to Steinmetz, Kraus, et al. Furthermore, they explain that the orthologues of the MyHC genes that have been used to hypothesize the origin of striated muscle occurred through a gene duplication event that predates the first true muscle cells (meaning striated muscle), and they show that the MyHC genes are present in the sponges that have contractile elements but no true muscle cells. Steinmetz, Kraus, et al. also showed that the localization of this duplicated set of genes, which serve both the function of facilitating the formation of striated muscle genes and cell regulation and movement genes, was already separated into striated-muscle and non-muscle MyHC. This separation of the duplicated set of genes is shown through the localization of the striated-muscle MyHC to the contractile vacuole in sponges, while the non-muscle MyHC was more diffusely expressed during developmental cell shape and change. Steinmetz, Kraus, et al. found a similar pattern of localization in cnidarians, except that the cnidarian N. vectensis has this striated muscle marker present in the smooth muscle of the digestive tract. Thus, they argue that the plesiomorphic trait of the separated orthologues of MyHC cannot be used to determine the monophyly of muscle, and additionally argue that the presence of a striated muscle marker in the smooth muscle of this cnidarian shows a fundamentally different mechanism of muscle cell development and structure in cnidarians.
Steinmetz, Kraus, et al. (2012) further argue for multiple origins of striated muscle in the metazoans by explaining that a key set of genes used to form the troponin complex for muscle regulation and formation in bilaterians is missing from the cnidarians and ctenophores, and that of the 47 structural and regulatory proteins observed, Steinmetz, Kraus, et al. were not able to find even one unique striated muscle cell protein that was expressed in both cnidarians and bilaterians. Furthermore, the Z-disc seems to have evolved differently even within bilaterians, and there is a great deal of diversity of proteins developed even within this clade, showing a large degree of radiation for muscle cells. Through this divergence of the Z-disc, Steinmetz, Kraus, et al. argue that there are only four common protein components that were present in all bilaterian muscle ancestors, and that of these four necessary Z-disc components, only an actin protein (which they have already argued is an uninformative marker through its plesiomorphic state) is present in cnidarians. Through further molecular marker testing, Steinmetz et al. observe that non-bilaterians lack many of the regulatory and structural components necessary for bilaterian muscle formation and do not find any unique set of proteins shared by bilaterians, cnidarians, and ctenophores that is not present in earlier, more primitive animals such as the sponges and amoebozoans. Through this analysis, the authors conclude that due to the lack of elements that bilaterian muscles depend on for structure and usage, nonbilaterian muscles must be of a different origin with a different set of regulatory and structural proteins.
In another take on the argument, Andrikou & Arnone (2015) use the newly available data on gene regulatory networks to look at how the hierarchy of genes and morphogens and another mechanism of tissue specification diverge and are similar among early deuterostomes and protostomes. By understanding not only what genes are present in all bilaterians but also the time and place of deployment of these genes, Andrikou & Arnone discuss a deeper understanding of the evolution of myogenesis.
In their paper, Andrikou & Arnone (2015) argue that to truly understand the evolution of muscle cells the function of transcriptional regulators must be understood in the context of other external and internal interactions. Through their analysis, Andrikou & Arnone found that there were conserved orthologues of the gene regulatory network in both invertebrate bilaterians and cnidarians. They argue that having this common, general regulatory circuit allowed for a high degree of divergence from a single well-functioning network. Andrikou & Arnone found that the orthologues of genes found in vertebrates had been changed through different types of structural mutations in the invertebrate deuterostomes and protostomes, and they argue that these structural changes in the genes allowed for a large divergence of muscle function and muscle formation in these species. Andrikou & Arnone were able to recognize not only any difference due to mutation in the genes found in vertebrates and invertebrates but also the integration of species-specific genes that could also cause divergence from the original gene regulatory network function. Thus, although a common muscle patterning system has been determined, they argue that this could be due to a more ancestral gene regulatory network being coopted several times across lineages with additional genes and mutations causing very divergent development of muscles. Thus it seems that the myogenic patterning framework may be an ancestral trait. However, Andrikou & Arnone explain that the basic muscle patterning structure must also be considered in combination with the cis regulatory elements present at different times during development. In contrast with the high level of gene family apparatuses structure, Andrikou and Arnone found that the cis-regulatory elements were not well conserved both in time and place in the network which could show a large degree of divergence in the formation of muscle cells. Through this analysis, it seems that the myogenic GRN is an ancestral GRN with actual changes in myogenic function and structure possibly being linked to later coopts of genes at different times and places.
Evolutionarily, specialized forms of skeletal and cardiac muscles predated the divergence of the vertebrate / arthropod evolutionary line. This indicates that these types of muscle developed in a common ancestor sometime before 700 million years ago (mya). Vertebrate smooth muscle was found to have evolved independently from the skeletal and cardiac muscle types.
Invertebrate muscle cell types
The properties used for distinguishing fast, intermediate, and slow muscle fibers can be different for invertebrate flight and jump muscle. To further complicate this classification scheme, the mitochondrial content, and other morphological properties within a muscle fiber, can change in a tsetse fly with exercise and age.
| Biology and health sciences | Muscular system | null |
571662 | https://en.wikipedia.org/wiki/Muskrat | Muskrat | The muskrat or common muskrat (Ondatra zibethicus) is a medium-sized semiaquatic rodent native to North America and an introduced species in parts of Europe, Asia, and South America.
The muskrat is found in wetlands over various climates and habitats. It has crucial effects on the ecology of wetlands, and is a resource of food and fur for humans.
Adult muskrats weigh , with a body length (excluding the tail) of . They are covered with short, thick fur of medium to dark brown color. Their long tails, covered with scales rather than hair, are laterally compressed and generate a small amount of thrust, with their webbed hind feet being the main means of propulsion, and the unique tail mainly important in directional stability. Muskrats spend most of their time in the water and can swim underwater for 12 to 17 minutes. They live in families of a male and female pair and their young. They build nests to protect themselves from the cold and predators, often burrowed into the bank with an underwater entrance. Muskrats feed mostly on cattail and other aquatic vegetation but also eat small animals.
Ondatra zibethicus is the only extant species in the genus Ondatra; its closest relative is the round-tailed muskrat (Neofiber alleni). It is the largest species in the subfamily Arvicolinae, which includes 142 other species of rodents, mostly voles and lemmings. Muskrats are referred to as "rats" in a general sense because they are medium-sized rodents with an adaptable lifestyle and an omnivorous diet. They are not, however, members of the genus Rattus. They are not closely related to beavers, with which they share habitat and general appearance.
Etymology
The muskrat's name probably comes from a word of Algonquian (possibly Powhatan) origin, muscascus (literally "it is red", so called for its colorings), or from the Abenaki native word mòskwas, as seen in the archaic English name for the animal, musquash. Because of the association with the "musky" odor, which the muskrat uses to mark its territory, and its flattened tail, the name became altered to musk-beaver; later it became "muskrat" due to its resemblance to rats.
Similarly, its specific name zibethicus means "musky", being the adjective of zibethus "civet musk; civet". The genus name comes from the Huron word for the animal, ondathra, and entered Neo-Latin as Ondatra via French.
Description
An adult muskrat is about long, half of that length being the tail, and weighs . That is about four times the weight of the brown rat (Rattus norvegicus), though an adult muskrat is only slightly longer. It is almost certainly the most prominent and heaviest member of the diverse family Cricetidae, which includes all voles, lemmings, and most mice native to the Americas, and hamsters in Eurasia. The muskrat is much smaller than a beaver (Castor canadensis), with which they often share a habitat.
Muskrats are covered with short, thick fur, which is medium to dark brown or black, with the belly a bit lighter (countershaded); as the animal ages, it turns partly gray. The fur has two layers, which protect it from cold water. They have long tails covered with scales rather than hair. To aid in swimming, their tails are slightly flattened vertically, a shape that is unique to them. When they walk on land, their tails drag on the ground, which makes their tracks easy to recognize.
Muskrats spend most of their time in water and are well suited to their semiaquatic life. They can swim underwater for 12 to 17 minutes. Their bodies, like those of seals and whales, are less sensitive to the buildup of carbon dioxide than those of most other mammals. They can close off their ears to keep water out. Their hind feet are partially webbed and are their primary means of propulsion. Their tail functions as a rudder, controlling the direction they swim.
Distribution and ecology
Muskrats are found in most of Canada, the United States, and a small part of northern Mexico. They were introduced to Europe at the beginning of the 20th century and have become an invasive species in northwestern Europe. They primarily inhabit wetlands, areas in or near saline and freshwater wetlands, rivers, lakes, or ponds. They are not found in Florida, where the round-tailed muskrat, or Florida water rat (Neofiber alleni), fills their ecological niche.
Their populations naturally cycle; in areas where they become abundant, they can remove much of the vegetation in wetlands. They are thought to play a major role in determining the vegetation of prairie wetlands in particular. They also selectively remove preferred plant species, thereby changing the abundance of plant species in many kinds of wetlands. Species commonly eaten include cattail and yellow water lily. Alligators are thought to be an important natural predator, and the absence of muskrats from Florida may, in part, be the result of alligator predation.
While much wetland habitat has been eliminated due to human activity, new muskrat habitat has been created by the construction of canals or irrigation channels (e.g., acequias), and the muskrat remains widespread. They can live alongside streams that contain the sulfurous water that drains away from coal mines. Fish and frogs perish in such streams, yet muskrats may thrive and occupy the wetlands. Muskrats also benefit from human persecution of some of their predators.
The muskrat is classed as a "prohibited new organism" under New Zealand's Hazardous Substances and New Organisms Act 1996, preventing it from being imported into the country.
The trematode Metorchis conjunctus can also infect muskrats.
Decline in the United States
According to an April 2024 article in Hakai Magazine, muskrat populations have declined by at least one-half in 34 US states. In a handful of states the collapse was near-total, between 90 and 99 percent. Rhode Island's muskrat populations are estimated to be roughly 15 percent of what they were several decades ago.
The decline in muskrat populations began in the 1990s and early 2000s.
Subspecies
Ondatra zibethicus has 16 subspecies: O. z. albus, O. z. aquilonius, O. z. bernardi, O. z. cinnamominus, O. z. macrodon, O. z. mergens, O. z. obscurus, O. z. occipitalis, O. z. osoyoosensis, O. z. pallidus, O. z. ripensis, O. z. rivalicius, O. z. goldmani, O. z. spatulatus, O. z. zalophus and O. z. zibethicus.
Invasiveness status
In Europe, the muskrat has been included in the list of invasive alien species of Union concern (the Union list) since August 2, 2017. This implies that this species cannot be imported, bred, transported, commercialized, or intentionally released into the environment in the whole of the European Union. Muskrats were introduced to Europe in the early 20th century for fur farming. In many European countries, muskrats have become problematic, damaging flood control systems, crops, and river banks with burrowing activities. Their presence is particularly concerning in areas with delicate ecosystems, where they can outcompete or displace native species. Several European countries have implemented control measures and eradication programs to manage muskrat populations and mitigate their impact.
Behavior
Muskrats normally live in families consisting of a male and female and their young. During the spring, they often fight with other muskrats over territory and potential mates. Many are injured or killed in these fights. Muskrat families build nests to protect themselves and their young from cold and predators. Muskrats burrow into the bank with an underwater entrance in streams, ponds, or lakes. These entrances are wide. In marshes, push-ups are constructed from vegetation and mud. These push-ups are up to in height. In snowy areas, they keep the openings to their push-ups closed by plugging them with vegetation, which they replace daily. Some muskrat push-ups are swept away in spring floods and must be replaced yearly. Muskrats also build feeding platforms constructed in the water from cut pieces of vegetation supported by a branch structure. They help maintain open areas in marshes, which helps to provide habitat for aquatic birds.
Muskrats are most active at night or near dawn and dusk. They feed on cattails and other aquatic vegetation. They do not store food for the winter, but sometimes eat the insides of their push-ups. While they may appear to steal food beavers have stored, more seemingly cooperative partnerships with beavers exist, as featured in the BBC David Attenborough wildlife documentary The Life of Mammals. Plant materials compose about 95% of their diets, but they also eat small animals, such as freshwater mussels, frogs, crayfish, fish, and small turtles. Muskrats follow trails they make in swamps and ponds. They continue to follow their trails under the ice when the water freezes.
Muskrats provide an important food resource for many other animals, including mink, red and gray foxes, cougars, coyotes, wolves, boreal lynxes, Canada lynxes, bobcats, raccoons, brown and black bears, wolverines, eagles, hawks, large owls, snakes, alligators, and bull sharks. Otters, snapping turtles, herons, bullfrogs, large fish such as pike and largemouth bass, and predatory land reptiles such as monitor lizards prey on baby muskrats. Caribou, moose, and elk sometimes feed on the vegetation that makes up muskrat push-ups during the winter, when other food is scarce for them. In their introduced range in the former Soviet Union, the muskrat's greatest predator is the golden jackal, which can completely eradicate them from shallow water bodies. During the winter of 1948–49 in the Amu Darya (a river in central Asia), muskrats constituted 12.3% of jackal feces contents, and 71% of muskrat houses were destroyed by jackals, 16% of which froze and became unsuitable for muskrat occupation. Jackals also harm the muskrat industry by eating muskrats caught in traps or taking skins left out to dry.
Muskrats, like most rodents, are prolific breeders. Females can have two or three litters a year of six to eight young each. The babies are born small and hairless and weigh only about . In southern environments, young muskrats mature in six months, while in colder northern environments, it takes about a year. Muskrat populations appear to go through a regular pattern of rise and dramatic decline spread over a six- to 10-year period. Some other rodents, including famously the muskrat's close relatives, such as the lemmings, go through the same type of population changes.
In human history
Native Americans have long considered the muskrat to be an important animal. Some predict winter snowfall levels by observing the size and timing of muskrat lodge construction.
In several Native American creation myths, the muskrat dives to the bottom of the primordial sea to bring up the mud from which the earth is created after other animals have failed in the task.
Muskrats have sometimes been a food resource for North Americans. In the southeastern portion of Michigan, a longstanding dispensation allows Catholics to consume muskrat as their Friday penance, on Ash Wednesday, and on Lenten Fridays (when the eating of flesh, except for fish, is prohibited); this tradition dates back to at least the early 19th century. In 2019, it was reported that a series of muskrat dinners were held during Lent in the areas along the Detroit River, with up to 900 muskrats being consumed at a single dinner. The preparation involved the removal of the musk glands and the gutting and cleaning of the carcass before the meat was parboiled for four hours with onion and garlic and finally fried.
Muskrat fur is warm, becoming prime in northern North America at the beginning of December. In the early 20th century, the trapping of the animal for its fur became an important industry there. During that era, the fur was specially trimmed and dyed to be sold widely in the US as "Hudson seal" fur. Muskrats were introduced at that time to Europe as a fur resource and spread throughout northern Europe and Asia.
In some European countries, such as Belgium, France, and the Netherlands, the muskrat is considered an invasive pest, as its burrowing damages the dikes and levees on which these low-lying countries depend for protection from flooding. In those countries, it is trapped, poisoned, and hunted to attempt to keep the population down. Muskrats also eat corn and other farm and garden crops growing near water bodies.
Royal Canadian Mounted Police winter hats are made from muskrat fur.
| Biology and health sciences | Rodents | null |
571727 | https://en.wikipedia.org/wiki/Sabal | Sabal | Sabal is a genus of New World palms (or fan-palms). Currently, there are 17 recognized species of Sabal, including one hybrid species.
Distribution
The species are native to the subtropical and tropical regions of the Americas, from the Gulf Coast/South Atlantic states in the Southeastern United States, south through the Caribbean, Mexico, and Central America to Colombia and Venezuela.
Description
Members of this genus are typically identified by their leaves, which originate from a bare, unarmed petiole in a fan-like structure. All members of this genus have a costa (or midrib) that extends into the leaf blade. This midrib can vary in length; due to this variation, the leaf blades of certain species of Sabal are strongly curved, or strongly costapalmate (as in Sabal palmetto and Sabal etonia), while others are weakly curved (almost flattened), or weakly costapalmate (as in Sabal minor). Like many other palms, the fruits of Sabal are drupes that typically change from green to black when mature.
Taxonomy
The name Sabal was first applied to members of the group by Michel Adanson in the 18th century. Genera with which these species were previously associated include Corypha, Chamaerops, and Rhapis. This section highlights important phylogenetic work done within the genus Sabal.
In 1990, Scott Zona outlined key morphological and anatomical characters that he used to analyze species relationships within Sabal. Through this analysis of characters, Zona produced a cladogram portraying evolutionary relationships among 15 species of Sabal. Based on the distribution of species within his cladogram, Zona recognized four distinct clades: (Clade 1) Sabal minor; (Clade 2) Sabal bermudana, Sabal palmetto, Sabal miamiensis, and Sabal etonia; (Clade 3) Sabal maritima, Sabal domingensis, Sabal causiarum, Sabal mauritiiformis, Sabal yapa, Sabal mexicana, and Sabal guatemalensis; and (Clade 4) Sabal uresana, Sabal rosei, and Sabal pumos. These clades associate closely with geographic distributions. Most of the species within Clade 3 occur in the Greater Antilles and southern Mexico, where species that occur in the Greater Antilles are more closely related to each other than to those that occur in southern Mexico. Although Clade 4 also occurs in Mexico, its species occur on the west coast, where they are geographically separated from the Mexican species in the southern part of the country. The remaining two clades, Clade 1 and Clade 2, predominantly occur in the southeastern United States, although S. palmetto and S. minor are also known from Cuba and the Bahamas (S. palmetto) and northern Mexico (S. minor). Sabal bermudana is only known from Bermuda.
In 2016, Heyduk, Trapnell, Barrett, and Leebens-Mack conducted a new study of Sabal that analyzed molecular (nuclear and plastid) data from 15 species of the group. This study incorporated plastid and nuclear sequence data that together were used to estimate relatedness among the species of Sabal. The results show species relationships that differ from those in Zona's cladogram. A major difference between the two studies is the placement of "Clade 4" (Sabal uresana, Sabal rosei, and Sabal pumos), whose species are split and integrated throughout the phylogeny of Sabal. The largest of the clades identified by Zona, "Clade 3", is also disrupted significantly, as it is split into multiple clades. Although Sabal causiarum and S. domingensis retain their relationship as sister species, they are placed in a clade that also includes S. maritima and S. rosei. Despite these differences in placement between the two studies, the overall integrity of "Clade 1" and "Clade 2" is congruent with the clades established from the molecular data.
Species
Prehistoric taxa
Extinct species within this genus include:
†Sabal bigbendense Manchester et al. 2010
†Sabal bracknellense (Chandler) Mai
†Sabal grayana Brown 1962
†Sabal imperialis Brown 1962
†Sabal jenkinsii (Reid & Chandler) Manchester 1994
†Sabal lamanonis
†Sabal raphipholia
Plants of the genus lived from the Late Cretaceous to the Quaternary period (from 66 million to 12 thousand years ago). Fossils have been found in the United States, as well as in Europe (Italy, Switzerland, Germany, Greece, Slovakia, the United Kingdom, France) and Japan. Leaf fossils of Sabal lamanonis have been recovered from rhyodacite tuff of Lower Miocene age in southern Slovakia near the town of Lučenec. Leaf fossils of Sabal lamanonis and Sabal raphipholia, 27 million years old, have been described from volcanic rocks in the Evros region of Western Thrace, Greece.
Formerly placed in Sabal
Serenoa repens (W.Bartram) Small (as S. serrulata (Michx.) Nutt. ex Schult. & Schult.f.)
Ecology
Sabal species are used as food sources by several species of birds (including Mimus polyglottos, Turdus migratorius, Dendroica coronata, Corvus ossifragus, and Dryocopus pileatus) as well as insects, such as Caryobruchus and various species of Hymenoptera. American black bears (Ursus americanus) and raccoons (Procyon lotor) are also known to feed on the fruit of various species of Sabal. Sabal palmetto is recorded as having its own lichen, Arthonia rubrocincta, which occurs only on its leaf bases. In Europe, the introduced lepidopteran species Paysandisia archon has become a prominent pest whose larvae are known to feed on some of the cultivated species of Sabal.
Uses
Arborescent species are often transplanted from natural stands into urban landscapes and are rarely grown in nurseries due to slow growth. Several species are cultivated as ornamental plants and, because several species are relatively cold-hardy, they can be grown farther north than most other palms. The central bud of Sabal palmetto is edible and, when cooked, is known as 'swamp cabbage'. Mature fronds are used as thatch, to make straw hats, and for weaving mats.
| Biology and health sciences | Arecales (inc. Palms) | Plants |
571816 | https://en.wikipedia.org/wiki/Campylobacter%20jejuni | Campylobacter jejuni | Campylobacter jejuni is a species of pathogenic bacteria that is commonly associated with poultry, and is also often found in animal feces. This species of microbe is one of the most common causes of food poisoning in Europe and in the US, with the vast majority of cases occurring as isolated events rather than mass outbreaks. Active surveillance through the Foodborne Diseases Active Surveillance Network (FoodNet) indicates that about 20 cases are diagnosed each year for each 100,000 people in the US, while many more cases are undiagnosed or unreported; the CDC estimates a total of 1.5 million infections every year. The European Food Safety Authority reported 246,571 cases in 2018, and estimated approximately nine million cases of human campylobacteriosis per year in the European Union. In Africa, Asia, and the Middle East, data indicates that C. jejuni infections are endemic.
Campylobacter is a genus of bacteria that is among the most common causes of bacterial infections in humans worldwide. Campylobacter means "curved rod", deriving from the Greek kampylos (curved) and baktron (rod). Of its many species, C. jejuni is considered one of the most important from both a microbiological and public health perspective.
C. jejuni is commonly associated with poultry, and is also commonly found in animal feces. Campylobacter is a helix-shaped, non-spore-forming, Gram-negative, microaerophilic, non-fermenting, motile bacterium with a single flagellum at one or both poles; it is also oxidase-positive and grows optimally at 37 to 42 °C. When exposed to atmospheric oxygen, C. jejuni is able to change into a coccal form. This species of pathogenic bacteria is one of the most common causes of human gastroenteritis in the world. Food poisoning caused by Campylobacter species can be severely debilitating, but is rarely life-threatening. It has been linked with subsequent development of Guillain–Barré syndrome, which usually develops two to three weeks after the initial illness. Individuals with recent C. jejuni infections develop Guillain–Barré syndrome at a rate of 0.3 per 1000 infections, about 100 times more often than the general population. Another chronic condition that may be associated with campylobacter infection is reactive arthritis. Reactive arthritis is a complication strongly associated with a particular genetic make-up: persons who have the human leukocyte antigen B27 (HLA-B27) are most susceptible. Most often, the symptoms of reactive arthritis will occur up to several weeks after infection.
History
Campylobacter jejuni was originally named Vibrio jejuni, due to its likeness to Vibrio spp., until 1963, when Sebald and Véron proposed the genus Campylobacter on the basis of its low levels of guanine and cytosine, non-fermentative metabolism, and microaerophilic growth requirements. The first well-recorded incident of Campylobacter infection occurred in 1938, when Campylobacter found in milk caused diarrhea among 355 inmates in two state institutions in Illinois. C. jejuni was first discovered in the small intestines of humans in the 1970s; however, symptoms had been noted since the early 20th century. The CDC, USDA, and FDA collectively identified C. jejuni as responsible for over 40% of laboratory-confirmed cases of bacterial gastroenteritis as of 1996.
Metabolism
C. jejuni is unable to use sugars as a carbon source, relying primarily on amino acids for growth instead. C. jejuni lacks glycolytic capability mainly because it lacks glucokinase and the 6-phosphofructokinase enzyme needed to employ the Embden–Meyerhof–Parnas (EMP) pathway. The four main amino acids C. jejuni takes in are serine, aspartate, asparagine, and glutamate, listed in order of preference. If all of these are depleted, some strains can use proline as well. Either the host or the metabolic activity of other gut microbes can supply these amino acids.
The metabolic pathways C. jejuni is capable of include the TCA cycle, a non-oxidative pentose phosphate pathway, gluconeogenesis, and fatty acid synthesis. Serine is the most important amino acid used for growth; it is brought into the cell by SdaC transport proteins and broken down into pyruvate by the SdaA dehydratase. Though this pyruvate cannot be converted directly into phosphoenolpyruvic acid (as C. jejuni lacks the relevant synthetase), the pyruvate can enter the TCA cycle to form oxaloacetic acid intermediates that can be converted to phosphoenolpyruvic acid for gluconeogenesis. This production of carbohydrates is important for the virulence factors of C. jejuni. The pyruvate created from serine can also be converted to acetyl-CoA and applied to fatty acid synthesis, or continue into the TCA cycle to create precursors for other biosynthetic pathways. Aspartate and glutamate are both brought into the cell via Peb1A transport proteins. Glutamate can be transaminated into aspartate, and aspartate can be deaminated to make fumarate, which feeds into the TCA cycle as well. Asparagine can also be deaminated into aspartate (which follows the process into the TCA cycle mentioned above). While the amino acids listed above can be metabolized, C. jejuni is capable of taking in many other amino acids, which helps to lower the anabolic cost of de novo synthesis.
If other sources of carbon are exhausted, C. jejuni can also use acetate and lactate as carbon sources. Acetate is a normal secreted byproduct of C. jejuni metabolism stemming from the recycling of CoA, and the absence of other carbon sources can cause C. jejuni to "switch" this reaction to take in acetate for the conversion to acetyl-CoA (catalyzed by phosphate acetyltransferase and acetate kinase enzymes). Lactate is a normal byproduct of many fermentative bacteria in the gut, and C. jejuni can take in and oxidize this lactate to supply pyruvate through the activity of dehydrogenase iron-sulfur enzyme complexes.
The energetic needs of these anabolic pathways are met in multiple ways. The cytochrome c and quinol terminal oxidases allow C. jejuni to use oxygen as a terminal electron acceptor for the reduced carriers produced through the TCA cycle (hence why C. jejuni is considered an obligate microaerophile). The conversion of acetyl-CoA to acetate mentioned above proceeds by substrate-level phosphorylation, giving another form of energy production that does not rely on microaerophilic respiration.
C. jejuni can use many different electron donors for its metabolic processes, most commonly NADH and FADH – though C. jejuni uses NADH poorly compared to FADH because genes encoding subunits of NADH dehydrogenases have been replaced by genes contributing to processes related to FADH electron donation. Aside from these donors, C. jejuni can turn to products of the host gut microbiota, including hydrogen, lactate, succinate, and formate, to contribute electrons; formate, for example, is generated through intestinal mixed-acid fermentation. Unlike almost all other Campylobacter or Helicobacter species, C. jejuni can also accept electrons from sulfite and metabisulfite through its cytochrome c oxidoreductase system.
While oxygen is mainly used as a terminal electron acceptor, C. jejuni can also use nitrate, nitrite, sulfur oxides (such as dimethyl sulfoxide or trimethylamine N-oxide), or fumarate as terminal electron acceptors to survive as a microaerophilic bacterium. Due to oxygen-limited conditions in the areas it commonly colonizes, C. jejuni possesses two separate terminal oxidases with different affinities for oxygen, where the low-affinity oxidase can directly retrieve electrons from menaquinones. The adaptations allowing for multiple electron acceptors also help to mitigate the reactive oxygen species that would arise from the sole use of oxygen; C. jejuni cannot grow under strictly aerobic conditions. Enzymes C. jejuni carries to impede the effects of reactive oxygen species include the superoxide dismutase SodB, the alkyl hydroperoxide reductase AhpC, the catalase KatA, and the thiol peroxidases Tpx and Bcp.
Disease
Campylobacteriosis is an infectious disease caused by bacteria of the genus Campylobacter. In most patients presenting with campylobacteriosis, symptoms develop within two to five days of exposure to the organism, and illness typically lasts seven days following onset. Infection with C. jejuni typically results in enteritis, or inflammation of the small intestine, which is characterized by abdominal pain, voluminous diarrhea (often bloody), fever, and malaise. Infected individuals can experience a prodromal phase of symptoms for the first one to three days, in which the more severe portion of the disease occurs; the prodromal phase presents with symptoms including rigors, high fever, body aches, and dizziness. Following the prodromal phase, the acute diarrheal phase of enteritis usually lasts around seven days, although abdominal pain can persist for weeks afterward. The disease is usually self-limiting; however, it does respond to antibiotics. Severe cases (with accompanying fever or blood in stools) or prolonged cases may require erythromycin, azithromycin, ciprofloxacin, or norfloxacin. Fluid replacement via oral rehydration salts may be needed, and intravenous fluid may be required for serious cases. Possible complications of campylobacteriosis include Guillain–Barré syndrome and reactive arthritis.
Transmission
C. jejuni causes a zoonotic disease, meaning it is more commonly spread from animals to people than between humans. People most often contract it by touching something that has been in contact with raw or undercooked chicken, or by eating or touching poultry that is raw or undercooked. It can also be acquired through contact with animals or by eating undercooked seafood. The fecal–oral route is the most common way it spreads, as the bacterium is excreted in animal feces. C. jejuni seldom causes disease in animals, and infections are more common in lower-income countries. Deadly infections are not often seen in young adults but rather among the very young and the elderly. Due to poor sanitation practices in some areas, the bacteria can also be found in ice and water. The sporadic nature of infections makes transmission difficult to study. The use of antibiotics and other treatments helps in slowing and preventing the transmission of C. jejuni. C. jejuni is a fastidious microaerophile, meaning it needs some oxygen to grow, spread, and transmit; however, it is highly adaptable and has adapted to grow in higher concentrations of oxygen.
Pathogenesis
C. jejuni employs unique strategies to breach the intestinal epithelial layer of its host. It uses proteases, particularly HtrA, to disrupt cell junctions and temporarily traverse the cells. The membrane-bound protein fibronectin is a critical binding site for C. jejuni on the basolateral side of the polarized epithelial cell, facilitating this process. Once inside the cell, C. jejuni leverages dynein to access the perinuclear space within a clathrin-coated vesicle, avoiding lysosomal digestion for up to 72 hours.
To initiate infection, C. jejuni must penetrate the gut enterocytes. C. jejuni releases several different toxins, mainly enterotoxins and cytotoxins, which vary from strain to strain and correlate with the severity of the enteritis (inflammation of the small intestine). During infection, levels of all immunoglobulin classes rise. Of these, IgA is the most important because it can cross the gut wall. IgA immobilises organisms, causing them to aggregate and activate complement, and also gives short-term immunity against the infecting strain of organism. The bacteria colonize the small and large intestines, causing inflammatory diarrhea with fever. Stools contain leukocytes and blood. The role of toxins in pathogenesis is unclear. C. jejuni antigens that cross-react with one or more neural structures may be responsible for triggering the Guillain–Barré syndrome.
Hypoacylated lipopolysaccharide (LPS) from C. jejuni induces moderate TLR4-mediated inflammatory response in macrophages and such LPS bioactivity may eventually result in the failure of local and systemic bacterial clearance in patients. At the same time, moderation of anti-bacterial responses may be advantageous for infected patients in clinical practice, since such an attenuated LPS may not be able to induce severe sepsis in susceptible individuals.
One of the most important virulence factors of C. jejuni is its flagella. The flagellar protein FlaA has been shown to be one of the most abundant proteins in the cell. Flagella are required for motility, biofilm formation, host cell interactions, and host colonization. The flagella of C. jejuni can also aid in the secretion of intracellular proteins. The production of flagella is energetically costly, so production must be regulated from a metabolic standpoint. CsrA is a post-transcriptional regulator that controls the expression of FlaA by binding to flaA mRNA, allowing it to repress translation. CsrA mutant strains have been studied, and the mutants exhibit dysregulation of 120–150 proteins involved in motility, host cell adherence, host cell invasion, chemotaxis, oxidative stress resistance, respiration, and amino acid and acetate metabolism. Transcriptional and post-transcriptional regulation of flagellar synthesis in C. jejuni enables proper biosynthesis of the flagella and is important for the pathogenesis of this bacterium.
C. jejuni employs a highly sophisticated navigation system called chemotaxis. This system is crucial when the bacterium requires guidance through chemical signals. The chemotaxis system uses specific chemoattractants that direct the bacterium toward areas with a higher concentration of the attractants. The exact nature of chemoattractants is dependent on the surrounding environmental conditions. Additionally, when the bacterium needs to move away, it uses negative chemotaxis to move in the opposite direction.
Other important virulence factors of C. jejuni include the pgl locus, which confers the ability to produce N-linked glycosylation on at least 22 bacterial proteins, at least some of which appear to be important for competence, host adherence, and invasion. C. jejuni secretes Campylobacter invasive antigens (Cia), which facilitate invasion. The bacterium also produces cytolethal distending toxins that participate in cell cycle control and induction of host cell apoptosis. C. jejuni also exploits different adaptation strategies, in which host factors appear to play a role in the pathogenesis of this bacterium.
DNA repair
In the intestines, bile functions as a defensive barrier against colonization by C. jejuni. When C. jejuni is grown in a medium containing the bile acid deoxycholic acid, a component of bile, the DNA of C. jejuni is damaged by a process involving oxidative stress. To survive, C. jejuni cells repair this DNA damage by a system employing proteins AddA and AddB that are needed for repair of DNA double-strand breaks.
C. jejuni uses homologous recombination to repair its DNA, facilitated by the AddA and AddB proteins. These proteins replace RecBCD, which is used in other bacteria like Escherichia coli. AddA and AddB are crucial for nuclease, helicase, and Chi recognition, which allow for successful homologous recombination.
The addAB genes contribute to the repair of DNA damaged by oxidative stress, and this repair protects C. jejuni from the deoxycholate found in bile, allowing it to survive. Expression of addAB does not appear to be induced during growth in deoxycholate over 10 to 16 hours, but it may be up-regulated in response to other environmental conditions. Additionally, AddAB proteins enhance C. jejuni colonization of chicken intestines.
Immune response
Campylobacter jejuni infection and the eventual destruction of host cells cause the release of chemokines that promote inflammation and activate immune response cells. Inflammatory chemokines such as CXCL1, CCL3/CCL4, CCL2, and CXCL10 are upregulated, further triggering the immune response. During C. jejuni infection, immune activation is primarily driven by ADP-heptoses, which activate ALPK1.
Neutrophil granulocytes use phagocytosis to combat C. jejuni infection, releasing antimicrobial proteins and proinflammatory substances. However, C. jejuni can influence the differentiation process of specific types of neutrophil granulocytes, triggering hypersegmentation and increased reactivity, which leads to delayed apoptosis and higher production of reactive oxygen species. In experimental processes, T cells from an immune response only start to grow in number at the inflammation site from the seventh day after infection.
After 11 days of Campylobacter jejuni infection, the B lymphocytes in the body increase the production of antibodies that specifically target C. jejuni flagellin. These antibodies can persist in the body for up to one year post-infection. In this context, the development of Guillain–Barré syndrome (GBS) is associated with autoimmune IgG1 antibodies.
Campylobacter infections often precede GBS, indicating that molecular mimicry between the bacteria and host nervous tissues may be the underlying cause. C. jejuni, the most common causative agent of human campylobacteriosis, can survive in the gut for several days but does not establish a long-term infection due to its low replication rate, which is incompatible with a persistent bacterial presence. The bacteria-induced apoptosis of infected gut cells results in the rapid clearance of the pathogen, which likely contributes to the self-limiting nature of the disease.
Sources
Campylobacter jejuni is commonly associated with poultry, and it naturally colonises the digestive tract of many bird species. All types of poultry and wild birds can become colonized with campylobacter. One study found that 30% of European starlings in farm settings in Oxfordshire, United Kingdom, were carriers of C. jejuni. It is also common in cattle, and although it is normally a harmless commensal of the gastrointestinal tract in these animals, it can cause campylobacteriosis in calves. It has also been isolated from wombat and kangaroo feces, being a cause of bushwalkers' diarrhea. Contaminated drinking water and unpasteurized milk provide an efficient means of distribution. Contaminated food is a major source of isolated infections, with incorrectly prepared meat and poultry as the primary source of the bacteria. Moreover, surveys show that 20 to 100% of retail chickens are contaminated. This is not overly surprising, since many healthy chickens carry these bacteria in their intestinal tracts, often in high concentrations of up to 10⁸ cfu/g. The bacteria contaminate the carcasses due to poor hygiene during the slaughter process, and several studies have shown increased concentrations of campylobacter on carcasses after evisceration. Studies have investigated the chicken microbiome to understand how, why, and when campylobacter appears within the chicken gut. The impact of industrial production systems on the chicken gut microbiome and campylobacter prevalence has also been investigated.
Raw milk is also a source of infections. The bacteria are often carried by healthy cattle and by flies on farms. Unchlorinated water may also be a source of infections. However, properly cooking chicken, pasteurizing milk, and chlorinating drinking water kill the bacteria. While salmonella is transmitted vertically in eggs, campylobacter is not; therefore, consumption of eggs does not typically result in human infection from campylobacter.
Complications
Local complications of campylobacter infections occur as a result of direct spread from the gastrointestinal tract and can include cholecystitis, pancreatitis, peritonitis, and massive gastrointestinal hemorrhage. Extraintestinal manifestations of campylobacter infection are quite rare and may include meningitis, endocarditis, septic arthritis, osteomyelitis, and neonatal sepsis. Bacteremia is detected in <1% of patients with campylobacter enteritis and is most likely to occur in patients who are immunocompromised or among the very young or very old. Transient bacteremia in immunocompetent hosts with C. jejuni enteritis may be more common but not detected because the killing action rapidly clears most normal human serotypes, and blood cultures are not routinely performed for patients with acute gastrointestinal illness.
Serious systemic illness caused by campylobacter infection rarely occurs, but can lead to sepsis and death. The case-fatality rate for campylobacter infection is 0.05 per 1000 infections. One major possible complication of C. jejuni infection is Guillain–Barré syndrome, which induces neuromuscular paralysis in a sizeable percentage of those who suffer from it. Over time, the paralysis is typically reversible to some extent; nonetheless, about 20% of patients with GBS are left disabled, and around 5% die. Another chronic condition that may be associated with campylobacter infection is reactive arthritis. Reactive arthritis is a complication strongly associated with a particular genetic make-up: persons who have the human leukocyte antigen B27 (HLA-B27) are most susceptible. Most often, the symptoms of reactive arthritis will occur up to several weeks after infection.
Epidemiology
Frequency
United States
An estimated 2 million cases of campylobacter enteritis occur annually, accounting for 5–7% of cases of gastroenteritis. Campylobacter has a large animal reservoir, with up to 100% of poultry, including chickens, turkeys, and waterfowl, having asymptomatic intestinal infections. The major reservoirs of C. fetus are cattle and sheep. More than 90% of campylobacter infections occur during the summer months due to undercooked meats from outdoor cooking. Nonetheless, the incidence of campylobacter infections has been declining. Changes in the incidence of culture-confirmed Campylobacter infections have been monitored by the Foodborne Diseases Active Surveillance Network (FoodNet) since 1996. In 2010, campylobacter incidence showed a 27% decrease compared with 1996–1998. In 2010, the incidence was 13.6 cases per 100,000 population, and this did not change significantly compared with 2006–2008.
Europe
In 2020, there were around 120,000 cases of C. jejuni infection, a decline of about 25.4% compared to the previous year. However, the COVID-19 pandemic may have influenced this decrease, and its statistical significance has yet to be determined. C. jejuni infections tend to peak in July, which could be linked to the rise in temperature worldwide. This pattern is associated with an increased replication rate of the bacteria and needs further investigation to establish any potential correlations.
Globally
Campylobacter jejuni infections are extremely common worldwide, although exact figures are not available. New Zealand reported the highest national rate, which peaked in May 2006 at 400 per 100,000 population. C. jejuni infection is a significant global health issue, with infection rates ranging from 0.3 to 2.9%. It is a widespread infection that affects individuals of all ages but is more prevalent in developing countries. In these areas, diarrhea is the most common clinical presentation, and it has a severe impact on children.
Sex
Campylobacter is more frequently isolated in males than females, and homosexual men appear to have a higher risk of infection by atypical campylobacter-related species such as Helicobacter cinaedi and Helicobacter fennelliae.
Age
Campylobacter infections can occur in all age groups. Studies show a peak incidence in children younger than 1 year and in people aged 15–29 years. The age-specific attack rate is highest in young children. In the United States, the highest incidence of Campylobacter infection in 2010 was in children younger than 5 years and was 24.4 cases per 100,000 population. Community-based studies done in developing countries show about 60,000 out of every 100,000 children under five years old are affected by campylobacter infections. However, the rate of fecal cultures positive for campylobacter species is greatest in adults and older children.
Diagnosis
Diagnostic tests are available to identify campylobacter infections, including those caused by C. jejuni. The stool culture is considered the gold standard for diagnosing C. jejuni, and selective culture techniques are used to distinguish it from other variants. Stool cultures are grown at 42 degrees Celsius in an atmosphere of 85% N2, 10% CO2, and 5% O2, as C. jejuni requires these conditions due to being thermophilic and microaerophilic. A final diagnosis from a stool sample requires a gram stain or phase contrast microscopy.
Aside from stool cultures, C. jejuni can be detected using enzyme immunoassay (EIA) or polymerase chain reaction (PCR). These methods are more sensitive than stool cultures, but PCR tends to be the most sensitive especially in children and developing countries.
Treatment
Campylobacter infections tend to be mild, requiring only hydration and electrolyte repletion while diarrhea lasts. Maintenance of electrolyte balance, not antibiotic treatment, is the cornerstone of treatment for campylobacter enteritis. Depending on the degree of dehydration, alternative measures may be taken, including parenteral hydration. Indeed, most patients with this infection have a self-limited illness and do not require antibiotics at all; however, antibiotics may be the best form of treatment in more severe cases of infection.
Antibiotic treatment
Antibiotic treatment for Campylobacter infections is usually neither required nor recommended. Antibiotics are reserved for treating high-risk patients, including immunocompromised and older individuals. Severe cases exhibiting symptoms such as bloody stools, fever, severe abdominal pain, pregnancy, infection with HIV, or prolonged illness (symptoms lasting more than one week) may also require treatment with antibiotics, which can help to shorten the duration of symptoms. It is advisable to treat these infections with macrolide antibiotics, such as erythromycin or azithromycin. Erythromycin is inexpensive and limits toxic exposure to patients; although resistance rates are reportedly increasing, its use continues because resistance rates remain below 5%. Azithromycin usage is increasing because of several drug characteristics relative to erythromycin, including once-a-day dosing, better tolerability by patients, a weaker association with infantile hypertrophic pyloric stenosis (IHPS), and fewer adverse effects. Fluoroquinolones are another treatment option; however, bacterial resistance to this class of antibiotics is increasing sharply.
Antibiotic resistance
Fluoroquinolones were first approved as a treatment for campylobacter infections in 1986, and in 1996 the U.S. Food and Drug Administration (FDA) approved their use to control infections in poultry flocks. The CDC began monitoring campylobacter in 1997 through the National Antimicrobial Resistance Monitoring System (NARMS). Data from NARMS indicated that ciprofloxacin, a fluoroquinolone, had microbial resistance rates of 17% in 1997–1999, which further increased to 27% in 2015–2017. On September 12, 2005, the FDA suspended the use of all fluoroquinolones in poultry production, and monitoring the prevalence of fluoroquinolone-resistant campylobacter strains in poultry flocks, poultry products, production facilities, and human infections became vital in order to determine whether the fluoroquinolone ban led to a reduction in antibiotic-resistant strains. Drug resistance to ciprofloxacin has been observed in isolate studies, as has significant resistance among campylobacter to the antibiotics nalidixic acid and tetracyclines. There is a low rate of resistance to erythromycin, the preferred antibiotic treatment for campylobacter infections; however, resistant strains have been detected in many countries in foods of farm-animal origin.
Prevention
Some simple food-handling practices can help prevent campylobacter infections.
Cook all poultry products thoroughly. Make sure that the meat is cooked throughout (no longer pink) and any juices run clear. All poultry should be cooked to reach a minimum internal temperature of .
Wash hands with soap before preparing food.
Wash hands with soap after handling raw foods of animal origin and before touching anything else.
Prevent cross-contamination in the kitchen by using separate cutting boards for foods of animal origin and other foods and by thoroughly cleaning all cutting boards, countertops, and utensils with soap and hot water after preparing raw food of animal origin.
Do not drink unpasteurized milk or untreated surface water.
Make sure that people with diarrhea, especially children, wash their hands carefully and frequently with soap to reduce the risk of spreading the infection.
Wash hands with soap after contact with pet feces.
Laboratory characteristics
Under light microscopy, C. jejuni has a characteristic "sea-gull" shape as a consequence of its helical form. Campylobacter is grown on specially selective "CAMP" agar plates at 42 °C, the normal avian body temperature, rather than at 37 °C, the temperature at which most other pathogenic bacteria are grown. Since the colonies are oxidase positive, they usually only grow in scanty amounts on the plates. Microaerophilic conditions are required for luxuriant growth. A selective blood agar medium (Skirrow's medium) can be used. Greater selectivity can be gained with an infusion of a cocktail of antibiotics: vancomycin, polymyxin B, trimethoprim, and actidione (Preston's agar), and growth under microaerophilic conditions at 42 °C.
Genome
The genome of C. jejuni strain NCTC11168 was published in 2000, revealing 1,641,481 base pairs (30.6% G+C) predicted to encode 1,654 proteins and 54 stable RNA species. The genome is unusual in that virtually no insertion sequences or phage-associated sequences and very few repeat sequences are found. One of the most striking findings in the genome was the presence of hypervariable sequences. These short homopolymeric runs of nucleotides were commonly found in genes encoding the biosynthesis or modification of surface structures, or in closely linked genes of unknown function. The apparently high rate of variation of these homopolymeric tracts may be important in the survival strategy of C. jejuni. The genome was re-annotated in 2007 updating 18.2% of product functions. Analysis also predicted the first pathogenicity island in C. jejuni among select strains, harbouring the bacteria's Type VI secretion system and putative cognate effectors.
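To make the idea of homopolymeric tracts more concrete, the following is a minimal illustrative sketch, not code from any published C. jejuni analysis; the toy sequence, function names, and the minimum run length of 8 are assumptions chosen only for demonstration. It computes G+C content and lists single-base runs in a DNA string, the kind of feature described above.

```python
from itertools import groupby


def gc_content(seq: str) -> float:
    """Return the fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)


def homopolymeric_runs(seq: str, min_len: int = 8):
    """Yield (base, start, length) for runs of a single base at least min_len long."""
    pos = 0
    for base, group in groupby(seq.upper()):
        length = len(list(group))
        if length >= min_len:
            yield base, pos, length
        pos += length


if __name__ == "__main__":
    demo = "ATGCCCGGGGGGGGGATTTTTTTTTTCGA"  # toy sequence, not from strain NCTC11168
    print(f"G+C content: {gc_content(demo):.1%}")
    for base, start, length in homopolymeric_runs(demo):
        print(f"run of {base} x {length} starting at position {start}")
```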
Initial transposon mutagenesis screens revealed 195 essential genes, although this number is likely to go up with additional analysis.
Natural genetic transformation
C. jejuni is naturally competent for genetic transformation. Natural genetic transformation is a sexual process involving DNA transfer from one bacterium to another through the intervening medium, and the integration of the donor sequence into the recipient genome by homologous recombination. C. jejuni freely takes up foreign DNA harboring genetic information responsible for antibiotic resistance. Antibiotic resistance genes are more frequently transferred in biofilms than between planktonic cells (single cells that float in liquid media).
| Biology and health sciences | Gram-negative bacteria | Plants |
571903 | https://en.wikipedia.org/wiki/Tencent%20QQ | Tencent QQ | Tencent QQ (), also known as QQ, is an instant messaging software service and web portal developed by the Chinese technology company Tencent. QQ offers services that provide online social games, music, shopping, microblogging, movies, and group and voice chat software. As of March 2023, there were 597 million monthly active QQ accounts.
History
Tencent QQ was first released in China in February 1999 under the name of OICQ ("Open ICQ", a reference to the early IM service ICQ).
After the threat of a trademark infringement lawsuit by the AOL-owned ICQ, the product's name was changed to QQ (with "Q" and "QQ" used to imply "cute"). The software inherited existing functions from ICQ, and additional features such as software skins, people's images, and emoticons. QQ was first released as a "network paging" real-time communications service. Other features were later added, such as chatrooms, games, personal avatars (similar to "Meego" in MSN), online storage, and Internet dating services.
The official client runs on Microsoft Windows, and a public beta version was launched for Mac OS X version 10.4.9 or newer. Formerly, two web versions, WebQQ (full version) and WebQQ Mini (lite version), which made use of Ajax, were available; development, support, and availability of WebQQ Mini have since been discontinued. On 31 July 2008, Tencent released an official client for Linux, but this has not been made compatible with the Windows version and is not capable of voice chat.
In response to competition with other instant messengers, such as Windows Live Messenger, Tencent released Tencent Messenger, which is aimed at businesses.
Membership
In 2002, Tencent stopped its free membership registration, requiring all new members to pay a fee. In 2003, however, this decision was reversed due to pressure from other instant messaging services such as Windows Live Messenger and Sina UC.
Tencent currently offers a premium membership scheme, where premium members enjoy features such as QQ mobile, ringtone downloads, and SMS sending/receiving. In addition, Tencent offers "Diamond" level memberships. Currently, there are seven diamond schemes available:
Red for the QQ Show service, which features some superficial perks such as having a colored account name.
Yellow to obtain extra storage and decorations in Qzone—a blog service.
Blue to obtain special abilities in the gameplay of QQ Games.
Purple for obtaining special abilities in games including QQ Speed, QQ Nana, and QQ Tang
Pink for having different boosts in the pet-raising game called QQ Pet.
Green for using QQ Music, a service for users to stream music online.
VIP for having extra features in the chat client such as removing advertisements
Black for gaining benefits related to DNF (Dungeon & Fighter), a multiplayer PC beat 'em up video game.
QQ Coin
The QQ Coin is a virtual currency used by QQ users to "purchase" QQ related items for their avatar and blog. QQ Coins are obtained either by purchase (one coin for one RMB) or by using the mobile phone service. Due to the popularity of QQ among young people in China, QQ Coins are accepted by online vendors in exchange for "real" merchandise such as small gifts. This has raised concerns of replacing (and thus "inflating") real currency in these transactions.
The People's Bank of China, China's central bank, tried to crack down on QQ Coins due to people using them in exchange for real-world goods. However, this only caused the value of QQ Coins to rise as more and more third-party vendors started to accept them. Tencent maintains that the QQ Coin is merely an ordinary commodity and is, therefore, not a currency.
Q Zone
Qzone is a social networking website based in China, created by Tencent in 2005. Qzone serves as a personal blog for QQ users. It can be set as a public page or a private, friends-only page. Users can upload diaries and share photos.
QQ International
Windows
In 2009, QQ began to expand its services internationally with its QQ International client for Windows distributed through a dedicated English-language portal.
QQ International offers non-Mandarin speakers the opportunity to use most of the features of its Chinese counterpart to get in touch with other QQ users via chat, VoIP, and video calls, and it provides a non-Mandarin interface to access Qzone, Tencent's social network. The client supports English, French, Spanish, German, Korean, Japanese and Traditional Chinese.
One of the main features of QQ International is the optional and automatic machine translation in all chats.
Android
An Android version of QQ International was released in September 2013. The client's interface is in English, French, Spanish, German, Korean, Japanese and Traditional Chinese. In addition to text messaging, users can send each other images, videos, and audio media messages. Moreover, users can share multimedia content with all contacts through the client's Qzone interface.
The live translation feature is available for all incoming messages and supports up to 18 languages.
iOS
QQ International for iPhone and iOS devices was released at the end of 2013, fully equivalent to its Android counterpart.
Partnerships
In India, Tencent has partnered with ibibo to bring services such as chat, mail, and games to the developing Indian internet sphere.
In Vietnam, Tencent has struck a deal with VinaGame to bring the QQ Casual Gaming portal as well as the QQ Messenger as an addition to the already thriving Vietnamese gaming communities.
In the United States, Tencent has partnered with AOL to bring QQ Games as a contender in the US social gaming market. Launched in 2007, QQ Games came bundled with the AIM installer, and competed with AOL's own games.com to provide a gaming experience for the AIM user base.
Web QQ
Tencent launched its web-based QQ formally on 15 September 2009, with the latest version being 3.0. Rather than solely a web-based IM, WebQQ 3.0 functions more like its own operating system, with a desktop to which web applications can be added.
Social network website
In 2009, Tencent launched Xiaoyou (校友, 'schoolmate'), its first social network website. In mid-2010, Tencent changed direction and replaced Xiaoyou with Pengyou (朋友, 'friends'), trying to establish a more widespread network, to which extant QQ users could be easily redirected, hence giving Pengyou a major advantage over its competitors. Tencent's social network Qzone is linked to in the International and native versions of QQ.
Open source and cross-platform clients
Using reverse engineering, open source communities have come to understand the QQ protocol better and have attempted to implement client core libraries compatible with more user-friendly clients, free of advertisements. Most of these clients are cross-platform, so they are usable on operating systems which the official client does not support. However, these implementations had only a subset of functions of the official client and therefore were limited in features. Furthermore, QQ's parent company, Tencent, has over successive versions modified the QQ protocol to the extent that it can no longer be supported by most, and perhaps any, of the third-party implementations that were successful in the past (some of which are listed below). As of 2009, none of the developers of third-party clients have publicized any plans to restore QQ support.
Pidgin, an open source cross-platform multiprotocol client, with third-party plugin
Adium, an open source macOS client, with third-party plugin built on top of libqq-pidgin
Kopete, an open source multiprotocol client by KDE
Note: Kopete, old versions of Pidgin, and any other client whose QQ support was based on libpurple no longer supports QQ as of May 2011
Miranda NG, an open source multiprotocol client, designed for Microsoft Windows, with MirandaQQ2 plugin.
Eva
Merchandise
Tencent has taken advantage of the popularity of the QQ brand and has set up many Q-Gen stores selling QQ branded merchandise such as bags, watches, clothing as well as toy penguins.
Related characteristics
QQ accounts consist purely of numbers. The account numbers provided to registered users are selected randomly by the system at registration. In 1999, registered QQ accounts had only 5 digits, while currently the numbers used for QQ accounts have reached 12 digits. The first QQ number is held by Ma Huateng, and his account number is 10001.
QQ membership is typically purchased one month at a time. If the membership lapses and is not renewed, the membership benefits of the account are suspended.
In calculating "QQ Age", being logged in for 2 full hours is counted as one full day; thus, being logged in to QQ for around 700 hours increases the QQ Age by about one year. In the 2012 version of QQ, users can see this age on the personal information page.
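As a rough illustration of the conversion just described, here is a minimal sketch: the 2-hours-per-day rule is taken from the text above, while the function names, the 365-day year, and the rounding behaviour are illustrative assumptions rather than documented Tencent behaviour.

```python
def qq_age_days(hours_online: float) -> int:
    """Credit one 'QQ day' for every 2 full hours spent logged in (rule from the text)."""
    HOURS_PER_QQ_DAY = 2
    return int(hours_online // HOURS_PER_QQ_DAY)


def qq_age_years(hours_online: float) -> float:
    """Approximate 'QQ Age' in years, assuming 365 QQ-days per year (assumption)."""
    return qq_age_days(hours_online) / 365


if __name__ == "__main__":
    # About 730 hours online corresponds to one QQ year, close to the
    # "around 700 hours" figure quoted above.
    print(qq_age_years(730))  # -> 1.0
```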
In 2004, Tencent launched the QQ hierarchy, which shows the level of a registered member. At the very beginning, this hierarchy was based solely on the hours a member spent on QQ; hence, the longer the member stayed logged in, the higher the level they could attain. This system was criticized, however, because it led people to waste electrical energy by staying on the site for longer hours. Therefore, following the involvement of several departments, Tencent changed the basis from an hourly unit to a daily unit.
Controversies and criticisms
Coral QQ
Coral QQ, a modification of Tencent QQ, is an add-on for the software that provides free access to some of its services and blocks Tencent's advertisements. In 2006, Tencent filed a copyright lawsuit against Chen Shoufu (aka Soft), the author of Coral QQ, after his distribution of a modified Tencent QQ was ruled illegal. Chen then published his modification as a separate add-on. On 16 August 2007, Chen was detained again for allegedly making profits off of his ad-blocking add-on. The case resulted in a three-year prison sentence for Chen.
Dispute with Qihoo 360
In 2010, the Chinese anti-virus company Qihoo 360 analyzed the QQ protocol and accused QQ of automatically scanning users' computers and uploading their personal information to QQ's servers without the users' consent. In response, Tencent labeled 360 as malware and denied users who had installed 360 access to some of QQ's services. The Chinese Ministry of Industry and Information Technology reprimanded both companies for "improper competition" and ordered them to come to an agreement.
Government surveillance
Some observers have criticized QQ's compliance in the Chinese government's Internet surveillance and censorship. A 2013 report by Reporters Without Borders specifically mentioned QQ as allowing authorities to monitor online conversations for keywords or phrases and track participants by their user number.
Adware controversy
The Chinese version of QQ makes use of embedded advertisements. Older versions of the client have been branded as malicious adware by some antivirus and anti-spyware vendors.
Both the Chinese and international versions of QQ were tested in 2013. Currently, DrWeb, Zillya, NANO-Antivirus, and VBA32 give positive results, most of which identify it as a trojan.
Security
On March 6, 2015, QQ scored 2 out of 7 points on the Electronic Frontier Foundation's secure messaging scorecard. It received points for having communications encrypted in transit and for having a recent independent security audit. It lost points because communications are not end-to-end encrypted, users can not verify contacts' identities, past messages are not secure if the encryption keys are stolen (i.e. the service does not provide forward secrecy), the code is not open to independent review (i.e. the code is not open-source), and the security design is not properly documented.
| Technology | Social network and blogging | null |
571933 | https://en.wikipedia.org/wiki/Cottontail%20rabbit | Cottontail rabbit | Cottontail rabbits are in the Sylvilagus genus, which is in the Leporidae family. They are found in the Americas. Most Sylvilagus species have stub tails with white undersides that show when they retreat, giving them their characteristic name. However, this feature is not present in all Sylvilagus, nor is it unique to the genus.
The genus is widely distributed across North America, Central America, and northern and central South America, though most species are confined to particular regions. Most species live in nests called forms, and all have altricial young. An adult female averages three litters per year, which can occur in any season; occurrence and litter size depend on several factors, including time of year, weather, and location. The average litter size is four, but can range from as few as two to as many as eight, most of which do not survive to adulthood.
Cottontail rabbits show a greater resistance to myxomatosis than European rabbits.
Evolution
Cottontails are one of several species of Sylvilagus. Their closest relative is Brachylagus, the pygmy rabbit. They are more distantly related to the European and other rabbits, and more distantly still to the hares. The cladogram is based on mitochondrial gene analysis.
Lifespan
The lifespan of a cottontail averages about two years, depending on the location. Almost every living carnivorous creature comparable in size to or larger than these lagomorphs is a potential predator, including such diverse creatures as domestic dogs, cats, humans, snakes, coyotes, mountain lions, foxes, and, if the cottontail is showing signs of illness, even squirrels. The cottontail's most frequent predators are various birds of prey. Cottontails can also be parasitized by botfly species, including Cuterebra fontinella; newborn cottontails are particularly vulnerable to these attacks. Cottontails use burrows vacated by other animals, and the burrows are used for long enough periods that predators can learn where the cottontails reside and repeatedly return to prey on them. Though cottontails are prolific animals that can have multiple litters in a year, few of the resulting offspring survive to adulthood. Those that do survive grow very quickly and are full-grown adults at three months.
Eating mechanics
In contrast to rodents (squirrels, etc.), which generally sit on their hind legs and hold food with their front paws while feeding, cottontail rabbits eat while on all fours. Cottontail rabbits typically use only their noses to move and adjust the position of the food, which they place directly in front of their front paws on the ground. The cottontail turns the food with its nose to find the cleanest part of the vegetation (free of sand and inedible parts) before beginning its meal. The only time a cottontail uses its front paws while feeding is when vegetation is above its head on a living plant, at which point it will lift a paw to bend the branch and bring the food within reach.
Cottontails are rarely found foraging for food on windy days, because the wind interferes with their hearing. Hearing an incoming predator before it gets close enough to attack is their primary defense mechanism.
Species
The subgenera were described in the 19th century based on limited morphological data that have been shown to not be of great use, nor to depict phylogenetic relationships. Molecular studies (limited in scope to the mitochondrial 12S gene) have shown that the currently accepted subgeneric structure, while of some heuristic value, is unlikely to withstand additional scrutiny.
Prehistoric species
Sylvilagus hibbardi (Early-Mid Pleistocene)
Sylvilagus leonensis - Dwarf cottontail (Late Pleistocene)
Sylvilagus webbi (Pleistocene)
| Biology and health sciences | Lagomorphs | Animals |
571970 | https://en.wikipedia.org/wiki/Pointing%20dog | Pointing dog | Pointing dogs, sometimes called bird dogs, are a type of gundog typically used in finding game. Gundogs are traditionally divided into three classes: retrievers, flushing dogs, and pointing breeds. The name pointer comes from the dog's instinct to point, by stopping and aiming its muzzle towards game. This demonstrates to the hunter the location of their quarry and allows them to move into gun range. Pointers were selectively bred from dogs who had abundant pointing and backing instinct. They typically start to acquire their hunting instincts at about 2 months of age.
History
Pointing dogs may have descended from dogs from Spain, specifically the Old Spanish Pointer (Furgus, 2002). Pointing dogs were originally used by hunters who netted the game: the dog would freeze or set (as in Setter) and allow the hunter to throw the net over the game before it flushed. Flushing dogs, on the other hand, were often used by falconers to flush game for the raptors. Most continental European pointing breeds are classified as versatile gun dog breeds, or sometimes HPR breeds (for hunt, point, and retrieve). The distinction is made because versatile breeds were developed to find and point game, as all pointing breeds are, but were also bred to perform other hunting tasks as well. This distinction likely arose because the British developed breeds that specialized in tasks such as pointing, flushing, and retrieving from land or water, while in continental Europe the same dog was trained to be able to perform each of these tasks (albeit less effectively). The North American Versatile Hunting Dog Association defines versatility as "the dog that is bred and trained to dependably hunt and point game, to retrieve on both land and water, and to track wounded game on both land and water." As an example, German Shorthaired Pointers are often used to retrieve birds, i.e. in duck hunting, whereas calling upon a Pointer to do the same would be less common. Unlike the pure pointing and setting breeds, many versatile dogs were bred for working in dense cover, and traditionally have docked tails.
The Westminster Kennel Club was organized in the early 1870s, and the club's early English import, "Sensation", is still used as the club logo.
Appearance
Pointing dogs come in all varieties of coats, from short-haired dogs, to wire-haired dogs, to silky-coated setters. Most breeds tend to have some sort of spots on their body, whether small and round or large and oval.
Breeds
Pointers (and setters) include the following breeds:
English Setter
Gordon Setter
Irish Red and White Setter
Irish Setter
Pointer
The following breeds are also considered versatile hunting dogs:
Ariège Pointer
Bracco Italiano
Braque d'Auvergne
Braque du Bourbonnais
Braque Dupuy (extinct)
Braque Français (two sizes: larger type Gascogne and Braque Français and smaller type Pyrénées)
Braque Saint-Germain
Brittany
Burgos Pointer
Český Fousek
French Spaniel
German Longhaired Pointer
German Roughhaired Pointer
German Shorthaired Pointer
German Wirehaired Pointer
Large Münsterländer
Labrador Retriever
Old Danish Pointer
Pachón Navarro
Perdigueiro Galego
Portuguese Pointer
Pudelpointer
Slovak Rough-haired Pointer
Small Münsterländer
Spinone Italiano
Stabyhoun
Vizsla
Weimaraner
Wirehaired Pointing Griffon
Wirehaired Vizsla
| Biology and health sciences | Dogs | Animals |
572393 | https://en.wikipedia.org/wiki/Hairbrush | Hairbrush | A hairbrush is a brush with rigid (hard or inflexible) or light and soft spokes used in hair care for smoothing, styling, and detangling human hair, or for grooming an animal's fur. It can also be used for styling in combination with a curling iron or hair dryer.
A brush is normally used for detangling hair, for example after sleep or showering. A round brush can be used for styling and curling hair, especially by a professional stylist, often with a hair dryer. A paddle brush is used to straighten hair and tame fly-aways. For babies with fine, soft hair, many bristle materials are not suitable due to the hardness; some synthetic materials and horse/goat hair bristles are used instead.
Animal use
Special brushes are made for cats, dogs and horses. Different brushes are made for short-haired and long-haired pets. For an equine's tougher hair, a curry-comb is used.
Types
Various types of brushes are used for different purposes, or have special features that are beneficial to certain hair types. For example:
Round brush: Usually used with a blow dryer to add fullness and movement to the hair. Some have a metal or ceramic base which is designed to heat up during blow drying.
Vent brush: Vents allow more air flow between the bristles but do not affect the drying time.
Cushion brush: Bristles are mounted on a rubber cushion or mat for added flexibility, to minimize hair breakage during detangling.
Paddle brush: A wide base allows more exposure while blow drying; flexible teeth are designed to minimize breakage, though some believe this design is ineffective.
Detangler brush: Features such as widely spaced, flexible, nylon teeth are designed to be gentle on tangled or knotted hair.
Boar bristle brush: Tightly spaced boar-hair bristles are designed to increase tension while brushing, to smooth the hair.
Blow Dryer Attachment: An additional tool that can be attached to a hair blow dryer to brush and dry hair at the same time.
The effects of brushing differ depending on hair texture and whether the hair is wet or dry. Straight hair typically looks smoother when brushed. Curly hair tends to expand when brushed while dry.
Materials
Common materials used for the handle are ebony, rosewood, New Guinea rosewood, beech, ABS plastic and polyacetal. Common materials used for bristles include boar bristle, horsehair, nylon, stainless steel and goat hair.
United States history
The earliest U.S. patent for a modern hairbrush was by Hugh Rock in 1854.
A brush with elastic wire teeth along with natural bristles was patented by Samuel Firey in 1870 as . In 1898, Lyda D. Newman invented an "Improved Hairbrush", which allowed for easy cleaning and had bristles separated widely enough to allow for easy combing. She was awarded .
| Biology and health sciences | Hygiene products | Health |
572518 | https://en.wikipedia.org/wiki/Vicu%C3%B1a | Vicuña | The vicuña (Lama vicugna) or vicuna (both , very rarely spelled vicugna, its former genus name) is one of the two wild South American camelids, which live in the high alpine areas of the Andes; the other camelid is the guanaco, which lives at lower elevations. Vicuñas are relatives of the llama, and are now believed to be the wild ancestor of domesticated alpacas, which are raised for their coats. Vicuñas produce small amounts of extremely fine wool, which is very expensive because the animal can only be shorn every three years and has to be caught from the wild. When knitted together, the product of the vicuña's wool is very soft and warm. The Inca valued vicuñas highly for their wool, and it was against the law for anyone but royalty to wear vicuña garments; today, the vicuña is the national animal of Peru and appears on the Peruvian coat of arms.
Both under the rule of the Inca and today, vicuñas have been protected by law, but they were heavily hunted in the intervening period. When they were declared endangered in 1974, only about 6,000 animals were left. Today, the vicuña population has recovered to about 350,000, and although conservation organizations have reduced its level of threat classification, they still call for active conservation programs to protect populations from poaching, habitat loss, and other threats.
Previously, the vicuña was not considered domesticated, and the llama and the alpaca were regarded as descendants of the closely related guanaco. However, DNA research published in 2001 has demonstrated that the alpaca may have vicuña parentage. Today, the vicuña is mainly wild, but the local people still perform special rituals with these creatures, including a fertility rite.
Description
The vicuña is considered more delicate and gracile than the guanaco, and smaller. A key distinguishing element of morphology is the better-developed incisor roots of the guanaco. The vicuña's long, woolly coat is tawny brown on the back, whereas the hair on the throat and chest is white and quite long. Its head is slightly shorter than the guanaco's, and the ears are slightly longer. The length of the head and body ranges from 1.45 to 1.60 m (about 5 ft); shoulder height is from 75 to 85 cm (around 3 ft); its weight is from 35 to 65 kg (under 150 lb). It falls prey to the cougar and culpeo.
Taxonomy and evolution
There are two subspecies of vicuña:
Lama vicugna vicugna
Lama vicugna mensalis
While vicuñas are restricted to the more extreme elevations of the Andes in modern times, they may have also been present in the lowland regions of Patagonia as much as 3500 km south of their current range during the Late Pleistocene and Early Holocene. Fossils of these lowland camelids have been assigned to a species known as Lama gracilis, but genetic and morphological analyses of these and modern vicuñas indicate the two may be the same.
Distribution and habitat
Vicuñas are native to South America's central Andes. They are found in Peru, northwestern Argentina, Bolivia, and northern Chile. A smaller, introduced population lives in central Ecuador.
Vicuñas live at altitudes of . They feed in the daytime on the grassy plains of the Andes Mountains but spend the nights on the slopes. In these areas, only nutrient-poor, tough, bunch grasses and Festuca grow. The sun's rays can penetrate the thin atmosphere, producing relatively warm temperatures during the day; however, the temperatures drop to freezing at night. The vicuña's thick but soft coat is a unique adaptation that traps layers of warm air close to its body to tolerate freezing temperatures.
Chief predators include pumas and the culpeo.
Behavior
The behavior of vicuñas is similar to that of the guanacos. They are timid animals and are easily aroused by intruders due, among other things, to their extraordinary hearing. Like the guanacos, they frequently lick calcareous stones and rocks, which, together with salt water, are their source of salt. Vicuñas are clean animals and always deposit their excrement in the same place. Their diets consist mainly of low grasses which grow in clumps on the ground.
Vicuñas live in family-based groups of a male, 5 to 15 females, and their young. Each group has its territory of about , which can fluctuate depending on food availability.
Mating usually occurs in March–April. After a gestation of about 11 months, the female gives birth to a single fawn, which is nursed for about ten months. The fawn becomes independent at about 12 to 18 months old. Young males form bachelor groups, and the young females search for a sorority to join. This deters intraspecific competition and inbreeding.
Conservation
Until 1964, hunting of the vicuña was unrestricted, which reduced its numbers to only 6,000 in the 1960s. As a result, the species was declared endangered in 1974, and its status prohibited the trade of vicuña wool. In Peru, during 1964–1966, the Servicio Forestal y de Caza in cooperation with the US Peace Corps, Nature Conservancy, World Wildlife Fund, and the National Agrarian University of La Molina established a nature conservatory for the vicuña called the Pampa Galeras – Barbara D'Achille in Lucanas Province, Ayacucho. During that time, a game warden academy was held in Nazca, where eight men from Peru and six from Bolivia were trained to protect the vicuña from poaching.
To cooperate on the conservation of the vicuña, the governments of Bolivia and Peru signed the Convention for the Conservation of the Vicuña on 16 August 1969 in La Paz, explicitly leaving the treaty open to accession by Argentina and Chile. Ecuador acceded on 11 February 1976. The Convention prohibited their international trade and domestic exploitation, and ordered the parties to create reserves and breeding centres. A follow-up treaty, the Convention for the Conservation and Management of the Vicuña, was signed between Bolivia, Chile, Ecuador and Peru on 20 December 1979 in Lima. It explicitly allowed only Argentina to sign it if it also signed the 1969 La Paz Convention (Article 12; Argentina joined in 1981), and did not allow other countries to accede to the convention 'due to its specific character' (Article 13). The 1979 Convention did allow the use of the vicuña under strict circumstances if the animal population had recovered sufficiently. In combination with CITES (effective in 1975), as well as USA and EU trade legislation, the Conventions were highly successful, as the vicuña population substantially grew as a result.
The estimated population in Peru was 66,559 in 1994, 103,161 in 1997, 118,678 in 2000, and 208,899 in 2012. Currently, the community of Lucanas conducts a chaccu (herding, capturing, and shearing) on the reserve each year to harvest the wool, organized by the National Council for South American Camelids (CONACS).
In Bolivia, the Ulla Ulla National Reserve was founded in 1977 partly as a sanctuary for the species. Their numbers grew to 125,000 in Peru, Chile, Argentina, and Bolivia. Since this was a ready "cash crop" for community members, the countries relaxed regulations on vicuña wool in 1993, enabling its trade once again. The wool is sold on the world market for over $300 per kg. In 2002, the US Fish and Wildlife Service reclassified most populations as threatened, but still lists Ecuador's population as endangered. While the population levels have recovered to a healthy level, poaching remains a constant threat, as do habitat loss and other threats. Consequently, the IUCN still supports active conservation programs to protect vicuñas, though they lowered their status to least concern in 2018.
In 2015, French luxury group LVMH said that "Loro Piana saved the species." The Italian company has been criticized for underpaying local communities collecting the wool. In 2022, the Argentine government's National Council for Scientific and Technical Investigation estimated that "Andean communities receive around 3% of the value generated by the vicuña fiber chain."
Vicuña wool
Its wool is famous for its warmth and is used for apparel, such as socks, sweaters, accessories, shawls, coats, suits, and home furnishings, such as blankets and throws. Its properties come from the tiny scales on the hollow, air-filled fibres, which cause them to interlock and trap insulating air. Vicuñas have some of the finest fibers in the world, at a diameter of 12 μm. The fiber of cashmere goats is 14 to 19 μm, while angora rabbit is 8 to 12 μm, and that of shahtoosh from the Tibetan antelope, or chiru, is from 9 to 12 μm.
Gallery
| Biology and health sciences | Artiodactyla | null |
573313 | https://en.wikipedia.org/wiki/Physical%20disability | Physical disability | A physical disability is a limitation on a person's physical functioning, mobility, dexterity or stamina. Other physical disabilities include impairments which limit other facets of daily living, such as respiratory disorders, blindness, epilepsy and sleep disorders.
Causes
Prenatal disabilities are acquired before birth. These may be due to diseases or substances that the mother has been exposed to during pregnancy, embryonic or fetal developmental accidents or genetic disorders.
Perinatal disabilities are acquired between some weeks before to up to four weeks after birth in humans. These can be due to prolonged lack of oxygen or obstruction of the respiratory tract, damage to the brain during birth (due to the accidental misuse of forceps, for example) or the baby being born prematurely. These may also be caused due to genetic disorders or accidents.
Post-natal disabilities are gained after birth. They can be due to accidents, injuries, obesity, infection or other illnesses. These may also be caused due to genetic disorders.
Types
Mobility impairment includes upper or lower limb loss or impairment, poor manual dexterity, and damage to one or multiple organs of the body. Disability in mobility can be a congenital or acquired problem or a consequence of disease. People who have a broken skeletal structure also fall into this category.
Visual impairment is another type of physical impairment. There are hundreds of thousands of people with minor to various serious vision injuries or impairments. These types of injuries can also result in severe problems or diseases such as blindness and ocular trauma. Some other types of vision impairment include scratched cornea, scratches on the sclera, diabetes-related eye conditions, dry eyes and corneal graft, macular degeneration in old age and retinal detachment.
Hearing loss is a partial or total inability to hear. Deaf and hard of hearing people have a rich culture and benefit from learning sign language for communication purposes. People who are only partially deaf can sometimes make use of hearing aids to improve their hearing ability.
Speech and language disability: a person has deviations of speech and language processes that fall outside the range of acceptable variation within a given environment and that prevent full social or educational development
Physical impairment can also be attributed to disorders causing, among others, sleep deficiency, chronic fatigue, chronic pain, and seizures.
| Biology and health sciences | Disability | null |
573489 | https://en.wikipedia.org/wiki/C3%20carbon%20fixation | C3 carbon fixation | C3 carbon fixation is the most common of three metabolic pathways for carbon fixation in photosynthesis, the other two being C4 and CAM. This process converts carbon dioxide and ribulose bisphosphate (RuBP, a 5-carbon sugar) into two molecules of 3-phosphoglycerate through the following reaction:
CO2 + H2O + RuBP → (2) 3-phosphoglycerate
This reaction was first discovered by Melvin Calvin, Andrew Benson and James Bassham in 1950. C3 carbon fixation occurs in all plants as the first step of the Calvin–Benson cycle. (In C4 and CAM plants, carbon dioxide is drawn out of malate and into this reaction rather than directly from the air.)
Plants that survive solely on C3 fixation (C3 plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful. The C3 plants, originating during the Mesozoic and Paleozoic eras, predate the C4 plants and still represent approximately 95% of Earth's plant biomass, including important food crops such as rice, wheat, soybeans and barley.
C3 plants cannot grow in very hot areas at today's atmospheric CO2 level (significantly depleted during hundreds of millions of years from above 5000 ppm) because RuBisCO incorporates more oxygen into RuBP as temperatures increase. This leads to photorespiration (also known as the oxidative photosynthetic carbon cycle, or C2 photosynthesis), which leads to a net loss of carbon and nitrogen from the plant and can therefore limit growth.
C3 plants lose up to 97% of the water taken up through their roots by transpiration. In dry areas, C3 plants shut their stomata to reduce water loss, but this stops CO2 from entering the leaves and therefore reduces the concentration of CO2 in the leaves. This lowers the CO2:O2 ratio and therefore also increases photorespiration. C4 and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete C3 plants in these areas.
The isotopic signature of C3 plants shows a higher degree of 13C depletion than that of C4 plants, due to variation in the fractionation of carbon isotopes in oxygenic photosynthesis across plant types. Specifically, C3 plants do not have PEP carboxylase like C4 plants, allowing them to utilize only ribulose-1,5-bisphosphate carboxylase (RuBisCO) to fix CO2 through the Calvin cycle. The enzyme RuBisCO discriminates between carbon isotopes, binding preferentially to the lighter 12C isotope over 13C, which contributes to the greater 13C depletion seen in C3 plants compared to C4 plants, especially since the C4 pathway uses PEP carboxylase in addition to RuBisCO.
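For context, the isotopic signature referred to above is conventionally reported as a δ13C value in parts per thousand (‰) relative to a reference standard; the defining formula below is standard isotope notation rather than part of the original article:

\delta^{13}\mathrm{C} = \left( \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1 \right) \times 1000

More negative δ13C values indicate greater 13C depletion; C3 plants typically fall roughly in the −24 to −34‰ range, whereas C4 plants are less depleted, at roughly −10 to −16‰.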
Variations
Not all C3 carbon fixation pathways operate at the same efficiency.
Refixation
Bamboos and the related rice have an improved C3 efficiency. This improvement might be due to their ability to recapture CO2 produced during photorespiration, a behavior termed "carbon refixation". These plants achieve refixation by growing chloroplast extensions called "stromules" around the stroma in mesophyll cells, so that any photorespired CO2 from the mitochondria has to pass through the RuBisCO-filled chloroplast.
Refixation is also performed by a wide variety of plants. The common approach involving growing a bigger bundle sheath leads down to C2 photosynthesis.
Synthetic glycolate pathway
C3 carbon fixation is prone to photorespiration (PR) during dehydration, accumulating toxic glycolate products. In the 2000s scientists used computer simulation combined with an optimization algorithm to figure out what parts of the metabolic pathway may be tuned to improve photosynthesis. According to simulation, improving glycolate metabolism would help significantly to reduce photorespiration.
Instead of optimizing specific enzymes on the PR pathway for glycolate degradation, South et al. decided to bypass PR altogether. In 2019, they transferred Chlamydomonas reinhardtii glycolate dehydrogenase and Cucurbita maxima malate synthase into the chloroplast of tobacco (a model organism). These enzymes, plus the chloroplast's own, create a catabolic cycle: acetyl-CoA combines with glyoxylate to form malate, which is then split into pyruvate and CO2; the former in turn splits into acetyl-CoA and CO2. By forgoing all transport among organelles, all the CO2 released will go into increasing the CO2 concentration in the chloroplast, helping with refixation. The end result is 24% more biomass. An alternative using E. coli glycerate pathway produced a smaller improvement of 13%. They are now working on moving this optimization into other crops like wheat.
| Biology and health sciences | Metabolic processes | Biology |
2980973 | https://en.wikipedia.org/wiki/Culling | Culling | Culling is the process of segregating organisms from a group according to desired or undesired characteristics. In animal breeding, it is removing or segregating animals from a breeding stock based on a specific trait. This is done to exaggerate desirable characteristics, or to remove undesirable characteristics by altering the genetic makeup of the population. For livestock and wildlife, culling often refers to killing removed animals based on their characteristics, such as their sex or species membership, or as a means of preventing infectious disease transmission.
In fruits and vegetables, culling is the sorting or segregation of fresh harvested produce into marketable lots, with the non-marketable lots being discarded or diverted into food processing or non-food processing activities. This usually happens at collection centres located at, or close to farms.
Etymology
The word cull comes from the Latin verb colligere, meaning "to gather". The term can be applied broadly to mean partitioning a collection into two groups: one that will be kept and one that will be rejected. The cull is the set of items rejected during the selection process. The culling process is repeated until the selected group is of the desired size and consistency.
Pedigreed animals
In the breeding of pedigreed animals, both desirable and undesirable traits are considered when choosing which animals to retain for breeding and which to place as pets. The process of culling starts with examination of the conformation standard of the animal and will often include additional qualities such as health, robustness, temperament, color preference, etc. The breeder takes all things into consideration when envisioning their ideal for the breed or goal of their breeding program. From that vision, selections are made as to which animals, when bred, have the best chance of producing the ideal for the breed.
Breeders of pedigreed animals cull based on many criteria. The first culling criterion should always be health and robustness. Secondary to health, temperament and conformation of the animal should be considered. The filtering process ends with the breeder's personal aesthetic preferences on pattern, color, etc.
Tandem method
The tandem method is a form of selective breeding where a breeder addresses one characteristic of the animal at a time, thus selecting only animals that measure above a certain threshold for that particular trait while keeping other traits constant. Once that level of quality in the single trait is achieved, the breeder will focus on a second trait and cull based on that quality. With the tandem method, a minimum level of quality is set for important characteristics that the breeder wishes to remain constant. The breeder is focusing improvement in one particular trait without losing quality of the others. The breeder will raise the threshold for selection on this trait with each successive generation of progeny, thus ensuring improvement in this single characteristic of his breeding program.
For example, a breeder that is pleased with the muzzle length, muzzle shape, and eye placement in the breeding stock, but wishes to improve the eye shape of progeny produced may determine a minimum level of improvement in eye shape required for progeny to be returned into the breeding program. Progeny is first evaluated on the existing quality thresholds in place for muzzle length, muzzle shape, and eye placement with the additional criterion being improvement in eye shape. Any animal that does not meet this level of improvement in the eye shape while maintaining the other qualities is culled from the breeding program; i.e., that animal is not used for breeding, but is instead neutered and placed in a pet home.
Independent levels
Independent levels is a method where any animal who falls below a given standard in any single characteristic is not used in a breeding program. With each successive mating, the threshold culling criteria are raised thus improving the breed with each successive generation.
This method measures several characteristics at once. Should progeny fall below the desired quality in any one characteristic being measured, it will not be used in the breeding program regardless of the level of excellence of other traits. With each successive generation of progeny, the minimum quality of each characteristic is raised thus ensuring improvement of these traits.
For example, a breeder has a view of what the minimum requirements for muzzle length, muzzle shape, eye placement, and eye shape they are breeding toward. The breeder will determine what the minimum acceptable quality for each of these traits will be for progeny to be folded back into their breeding program. Any animal that fails to meet the quality threshold for any one of these criteria is culled from the breeding program.
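A minimal sketch of this filtering rule, written in Python purely for illustration (the trait names and minimum scores below are invented and not taken from any real breeding program):

# Minimum acceptable score (on a 1-10 scale) for each trait; all must be met.
minimums = {"muzzle_length": 6, "muzzle_shape": 6, "eye_placement": 7, "eye_shape": 7}

def passes_independent_levels(scores, minimums):
    # The animal is retained for breeding only if every trait meets its minimum level.
    return all(scores[trait] >= minimums[trait] for trait in minimums)

candidate = {"muzzle_length": 8, "muzzle_shape": 7, "eye_placement": 9, "eye_shape": 6}
print(passes_independent_levels(candidate, minimums))  # False: eye_shape (6) is below its minimum of 7

Raising the values in the minimums table with each generation corresponds to raising the threshold culling criteria described above.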
Total score method
The total score method is a selection method where the breeder evaluates and selects breeding stock based on a weighted table of characteristics. The breeder selects qualities that are most important to them and assigns them a weight. The weights of all the traits should add up to 100. When evaluating an individual for selection, the breeder measures the traits on a scale of 1 to 10, with 10 being the most desirable expression and 1 being the lowest. The scores are then multiplied by their weights and then added together to give a total score. Individuals that fail to meet a threshold are culled (or removed) from the breeding program. The total score gives a breeder a way to evaluate multiple traits on an animal at the same time.
The total score method is the most flexible of the three. It allows for weighted improvement of multiple characteristics, letting the breeder make major gains in one aspect while making moderate or lesser gains in others.
For example, a breeder may be willing to accept a smaller improvement in muzzle length and muzzle shape in order to make a moderate gain in eye placement and a more dramatic improvement in eye shape. Suppose the breeder assigns 40% of the weight to eye shape, 30% to eye placement, and 15% each to muzzle length and muzzle shape. The breeder would evaluate these characteristics on a scale of 1 to 10 and multiply by the weights. The formula would look something like: 15 × (muzzle length score) + 15 × (muzzle shape score) + 30 × (eye placement score) + 40 × (eye shape score) = total score for that animal. The breeder determines the lowest acceptable total score for an animal to be folded back into their breeding program. Animals that do not meet this minimum total score are culled from the breeding program.
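The arithmetic of the total score method can be sketched in a few lines of Python; the weights, scores, and cut-off below mirror the hypothetical example above and are purely illustrative:

# Weights sum to 100; the breeder scores each trait from 1 to 10.
weights = {"muzzle_length": 15, "muzzle_shape": 15, "eye_placement": 30, "eye_shape": 40}

def total_score(scores, weights):
    # Each trait score is multiplied by its weight, and the products are summed.
    return sum(weights[trait] * scores[trait] for trait in weights)

candidate = {"muzzle_length": 7, "muzzle_shape": 6, "eye_placement": 8, "eye_shape": 9}
threshold = 700  # lowest acceptable total score, chosen by the breeder

score = total_score(candidate, weights)
print(score, score >= threshold)  # 795 True: retained; a score below 700 would be culled

With all weights summing to 100 and scores of 1 to 10, the maximum possible total is 1,000, so the breeder's threshold expresses how close to the ideal an animal must come to be kept.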
Livestock and production animals
Livestock bred for the production of meat or milk may be culled by farmers. Animals not selected to remain for breeding are sold, killed, or sent to the slaughterhouse.
Criteria for culling livestock and production animals can be based on population or production (milk or egg). In a domestic or farming situation, the culling process involves the selection and selling of surplus stock. The selection may be done to improve breeding stock—for example, for improved production of eggs or milk—or simply to control the group's population for environmental and species preservation. In order to increase the frequency of preferred phenotypes, agricultural practices typically involve using the most productive animals as breeding stock.
With dairy cattle, culling may be practised by inseminating cows—considered to be inferior—with beef breed semen and by selling the produced offspring for meat production.
Approximately half of the chicks of egg-laying chickens are males who would grow up to be roosters. These individuals have little use in an industrial egg-producing facility as they do not lay eggs, so the majority of male chicks are killed shortly after hatching.
Culling of farmed animals is considered a necessary practice to prevent the spread of damaging and fatal diseases such as foot-and-mouth disease, avian flu, Influenza A virus subtype H5N1 and bovine spongiform encephalopathy ("mad cow disease").
Wildlife
In the United States, hunting licenses and hunting seasons are a means by which the population of game animals is maintained. Each season, a hunter is allowed to kill a certain number of wild animals, determined both by species and sex. If the population seems to have surplus females, hunters are allowed to take more females during that hunting season. If the population is below what is desired, hunters may not be permitted to hunt that particular species, or may be allowed to hunt only a restricted number of males.
Populations of game animals such as elk may be informally culled if they begin to excessively eat winter food set out for domestic cattle herds. In such instances the rancher will inform hunters that they may hunt on their property in order to thin the wild herd to controllable levels. These efforts are aimed to counter excessive depletion of the winter feed supplies. Other managed culling instances involve extended issuance of extra hunting licenses, or the inclusion of additional "special hunting seasons" during harsh winters or overpopulation periods, governed by state fish and game agencies.
Culling for population control is common in wildlife management, particularly on African game farms and Australian national parks. In the case of very large animals such as elephants, adults are often targeted. Their orphaned young, easily captured and transported, are then relocated to other reserves. Culling is controversial in many African countries, but reintroduction of the practice has been recommended in recent years for use at the Kruger National Park in South Africa, which has experienced a swell in its elephant population since culling was banned in 1995.
Arguments against wildlife culling
Culling acts as a strong selection force and can therefore impact the population genetics of a species. For example, culling based on specific traits, such as size, can enforce directional selection and remove those traits from the population. This can have long-term effects on the genetic diversity of a population.
However, culling can act as a selection force intentionally implemented by humans to counteract the selection force of trophy hunting. Hunting typically enforces selection towards unfavorable phenotypic traits because of the strong hunting bias for specific traits, such as large antler size. Culling "low-quality" traits can counteract this force.
Animal rights activists argue that killing animals for any reason (including hunting) is cruel and unethical.
Birds
Some bird species are culled when their populations impact upon human property, business or recreational activity, disturb or modify habitats or otherwise impact species of conservation concern. Cormorants are culled in many countries due to their impact on commercial and recreational fisheries and habitat modification for nesting and guano deposition. They are culled by shooting and the smothering of eggs with oil. Another example is the culling of silver gulls in order to protect the chicks of the vulnerable banded stilt at ephemeral inland salt lake breeding sites in South Australia. The gulls were culled using bread laced with a narcotic substance. In the Australian states of Tasmania and South Australia, Cape Barren geese are culled to limit damage to crops and the fouling of waterholes. Cape Barren Geese remain one of the rarest geese in the world, though much of their habitat is now regarded as secure.
Seals
In South Australia, the recovery of the state's native population of New Zealand fur seals (Arctocephalus forsteri) after severe depletion by sealers in the 1800s has brought them into conflict with the fishing industry. This has prompted members of Parliament to call for seal culling in South Australia. The State Government continues to resist the pressure and as of July 2015, the animals remain protected as listed Marine Mammals under the state's National Parks and Wildlife Act 1972.
Sharks
Shark culling occurs in four locations: New South Wales, Queensland, KwaZulu-Natal and Réunion. Between 1950 and 2008, 352 tiger sharks and 577 great white sharks were killed in the nets in New South Wales—also during this period, a total of 15,135 marine animals were caught and killed in the nets, including whales, turtles, rays, dolphins, and dugongs. From 2001 to 2018, a total of 10,480 sharks were killed on lethal drum lines in Queensland. In a 30-year period up to early 2017, more than 33,000 sharks were killed in KwaZulu-Natal's shark-killing program—during the same 30-year period, 2,211 turtles, 8,448 rays, and 2,310 dolphins were killed. Authorities on Réunion kill about 100 sharks per year. All of these culls have been criticized by environmentalists, who say killing sharks harms the marine ecosystem.
In 2014, a controversial policy was introduced by the Western Australian state government which became known as the Western Australian shark cull. Baited hooks known as drum lines were to be set over several consecutive summers to catch and kill otherwise protected great white sharks. The policy's objective was to protect users of the marine environment from fatal shark attack. Thousands of people protested against its implementation, claiming that it was indiscriminate, inhumane and worked against scientific advice the government had previously received. Seasonal setting of drum lines was abandoned in September 2014 after the program failed to catch any great white sharks, instead catching 172 other elasmobranchii, mostly tiger sharks.
Deer
White-tailed deer (Odocoileus virginianus) have become an issue in suburbs across the United States due to large population increases, thought to be caused mainly by the extirpation of most of their major predators in these areas. In response to these population booms, different management approaches have been taken to decrease their numbers, mainly in the form of culls. Deer culls are often paired with exclusion fencing and the administration of contraceptives.
The effectiveness of these deer culls has been debated; critics argue that they are only a temporary fix to the larger problem of deer overpopulation and that culling will increase the fertility of remaining deer by reducing competition. Those in favor of the culls argue that they can be used to combat the selection pressure that is imposed by hunting that creates smaller antler and body sizes in deer. People in favor of the culls recommend that they not be random and actively select for smaller individuals and bucks with smaller antlers, specifically "button bucks" or bucks with only spiked antlers in their first year as opposed to forked antlers.
Culling of deer can also have benefits in the form of disease prevention, and in places where the white-tailed deer is an invasive species, such as New Zealand, culling has added benefits for native species. Diseases are density-dependent factors, and decreases in the density of deer populations through culling cause diseases, such as chronic wasting disease and Lyme disease, to spread less quickly and effectively.
Zoos
Many zoos participate in an international breeding program to maintain a genetically viable population and prevent inbreeding. Animals that can no longer contribute to the breeding program are considered less desirable and are often replaced by more desirable individuals. If an animal is surplus to a zoo's requirements and a place in another zoo can not be found, the animal may be killed. In 2014, the culling of a young, healthy giraffe Marius raised an international public controversy.
Zoos sometimes consider female animals to be more desirable than males. One reason for this is that while individual males can contribute to the birth of many young in a short period of time, females give birth to only a few young and are pregnant for a relatively long period of time. This makes it possible to keep many females with just one or two males, but not the reverse. Another reason is that the birth of some animal species increases public interest in the zoo.
Germany's Animal Welfare Act 1972 orders that zoo animals cannot be culled without verification by official veterinary institutes of the Landkreis or federated state.
In the UK, there is no general prohibition on animal euthanasia, which is allowed when overcrowding compromises the well-being of the animals.
Ethics
Jaak Panksepp, an American neuroscientist, concludes that both animals and humans have brains wired to feel emotions, and that animals have the capacity to experience pleasure and happiness from their lives.
Culling has been criticized on animal rights grounds as speciesist—it has been argued that killing animals for any reason is cruel and unethical, and that animals have a right to live.
Some argue that culling is necessary when biodiversity is threatened. However, the protection of biodiversity argument has been questioned by some animal rights advocates who point out that the animal which most greatly threatens and damages biodiversity is humanity, so if we are not willing to cull our own species we cannot morally justify culling another.
Non-lethal alternatives
There are non-lethal alternatives which may still be considered culling, and serve the same purpose of reducing population numbers and selecting for desired traits without killing existing members of the population. These methods include the use of wildlife contraceptives and reproductive inhibitors. By using such methods population numbers might be reduced more gradually and in a potentially more humane fashion than by directly lethal culling actions.
Currently, wildlife contraceptives are largely in the experimental phase and include such products as Gonacon, an adjuvant vaccine which delivers a high dosage of a competitor ligand of the hormone GnRH to female mammals (e.g. whitetail deer). The complex formed of GnRH and the Gonacon molecule promotes production of antibodies against the animal's own GnRH, which themselves complex with GnRH. This encourages an extended duration of the drug's effects (namely, reduction of active/unbound GnRH in the animal's system). Though the endocrinology behind Gonacon is sound, the need for multiple lifetime doses for full efficacy makes it a less-guaranteed and less-permanent solution for wildlife than lethal culls. Even among domestic animals in controlled conditions, Gonacon cannot ensure 100% reduction in the occurrence of pregnancies.
Reproductive inhibitors need not act on the parental individuals directly, instead damaging reproductive processes and/or developing offspring to reduce the number of viable offspring per mating pair. One such compound called Nicarbazin has been formulated into bait for consumption by Canada Geese, and damages egg yolk formation to reduce the viability of clutches without harming the adult geese.
| Technology | Animal husbandry | null |
2981932 | https://en.wikipedia.org/wiki/Sea%20apple | Sea apple | Sea apple is the common name for the colorful and somewhat round sea cucumbers of the genus Pseudocolochirus, found in Indo-Pacific waters. Sea apples are filter feeders with tentacles, ovate bodies, and tube-like feet. As with many other holothurians, they can release their internal organs or a toxin into the water when stressed.
Physiology
Sea apples are holothuroids, and as such share many of the same physical characteristics. A few notable characteristics are discussed below.
Anatomy and feeding
The ovate body of an adult sea apple can grow up to long. A central mouth-like cavity is surrounded by feathery tentacles, which add additional length. Sea apples, like many echinoderms, have rows of tube feet which help them move over and adhere to structures.
The bodies and tentacles of sea apples come in many different colorings. The Australian species has a primarily purple body, red feet, and purple and white tentacles.
The sea apple feeds primarily on plankton, which it filters from the water with its tentacles. It alternately brings each tentacle to its mouth, scraping off the captured plankton.
Sea apples usually feed at night, a time when their delicate tentacles are less at risk from predators.
Defense
When disturbed, sea apples, like other holothuroids, can violently extrude their entrails from their posterior in a process called evisceration (autotomy). In addition, sea apples can release a toxic saponin called holothurin into the water as a defense mechanism.
In addition, if threatened or in an unsuitable environment, sea apples can consume large amounts of surrounding seawater to swell to nearly double their original size; this allows them to be moved to a new area by water currents much more quickly than they could walk.
Problems in captivity
Because of their interesting appearance and behaviour, sea apples are widely desired as specimens for display marine aquaria. They are considered reef safe in terms of their compatibility with other species. However, they can be considered unsafe for reef aquaria for multiple reasons:
Starvation
Sea apples often starve to death in display aquaria. Levels of plankton in aquaria are often lower than optimal, and sea apples are often seen attempting to feed not only at night, as in their natural habitat, but also in the daytime. With only low levels of food available, these sea apples often starve, becoming progressively smaller as this happens. To try to circumvent these problems, hobbyists attempt to give the sea apple specimens supplemental feedings of plankton and liquid food.
Harassment and predation
Sea apples are often harassed by many aquarium inhabitants. Crustaceans, such as hermit crabs, and fish often peck or pick at the sea apple's feathery tentacles. This may be for predatory purposes, or simply to steal trapped particles and plankton from the tentacles.
Occasionally, sea apples use their defense mechanisms in response to harassment. The release of their toxin can poison other aquarium inhabitants, and is one of the reasons they are not commonly seen in aquariums.
| Biology and health sciences | Echinoderms | Animals |
6961053 | https://en.wikipedia.org/wiki/Transform%2C%20clipping%2C%20and%20lighting | Transform, clipping, and lighting | Transform, clipping, and lighting (T&L or TCL) is a term used in computer graphics.
Overview
Transformation is the task of producing a two-dimensional view of a three-dimensional scene. Clipping means only drawing the parts of the scene that will be present in the picture after rendering is completed. Lighting is the task of altering the colour of the various surfaces of the scene on the basis of lighting information.
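As a rough illustration of the three stages, the sketch below (Python with NumPy; the function and variable names are invented for the example and do not come from any real graphics API) projects a single camera-space point to 2D, clips it against a unit window, and applies simple Lambertian (diffuse) lighting:

import numpy as np

def transform_clip_light(point, normal, light_dir=(0.0, 0.0, 1.0), base_color=(1.0, 0.0, 0.0)):
    # Transform: perspective-project the 3D camera-space point onto the 2D image plane.
    x, y, z = point
    if z <= 0.0:
        return None                      # behind the camera
    sx, sy = x / z, y / z                # perspective divide

    # Clipping: keep only points that land inside the visible window.
    if abs(sx) > 1.0 or abs(sy) > 1.0:
        return None

    # Lighting: Lambertian shading scales the surface colour by the cosine of the
    # angle between the surface normal and the direction toward the light.
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    l = -np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    intensity = max(0.0, float(np.dot(n, l)))
    return (sx, sy), tuple(intensity * c for c in base_color)

# A point two units in front of the camera, facing it, lit head-on:
print(transform_clip_light((0.4, 0.2, 2.0), (0.0, 0.0, -1.0)))

Fixed-function hardware T&L performs essentially this per-vertex arithmetic (with full transformation matrices and several lights) on dedicated silicon instead of the CPU.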
Hardware
Hardware T&L had been used by arcade game system boards since 1993, and by home video game consoles since the Sega Genesis's Virtua Processor (SVP), the Sega Saturn's SCU-DSP and the Sony PlayStation's GTE in 1994, and the Nintendo 64's RSP in 1996. These were not traditional fixed-function hardware T&L, however, but software T&L running on a coprocessor instead of the main CPU, and they could also be used for rudimentary programmable pixel and vertex shading. More traditional hardware T&L would appear on consoles with the GameCube and Xbox in 2001 (the PS2 still using a vector coprocessor for T&L). Personal computers implemented T&L in software until 1999, as it was believed faster CPUs would be able to keep pace with demands for ever more realistic rendering. However, 3D computer games of the time were producing increasingly complex scenes and detailed lighting effects much faster than CPU processing power was increasing.
Nvidia's GeForce 256 was released in late 1999 and introduced hardware support for T&L to the consumer PC graphics card market. It had faster vertex processing not only due to the T&L hardware, but also because of a cache that avoided having to process the same vertex twice in certain situations. While DirectX 7.0 (particularly Direct3D 7) was the first release of that API to support hardware T&L, OpenGL had supported it much longer and was typically the purview of older professionally oriented 3D accelerators which were designed for computer-aided design (CAD) instead of games.
The ArtX graphics integrated into the Aladdin VII chipset also featured T&L hardware, being released in November 1999 as part of Aladdin VII motherboards for the Socket 7 platform.
S3 Graphics launched the Savage 2000 accelerator in late 1999, shortly after GeForce 256, but S3 never developed working Direct3D 7.0 drivers that would have enabled hardware T&L support.
Usefulness
Hardware T&L did not have broad application support in games at the time (mainly due to Direct3D games transforming their geometry on the CPU and not being allowed to use indexed geometries), so critics contended that it had little real-world value. Initially, it was only somewhat beneficial in a few OpenGL-based 3D first-person shooter titles of the time, most notably Quake III Arena. 3dfx and other competing graphics card companies contended that a fast CPU would make up for the lack of a T&L unit.
ATI's initial response to GeForce 256 was the dual-chip Rage Fury MAXX. By using two Rage 128 chips, each rendering an alternate frame, the card was able to somewhat approach the performance of SDR memory GeForce 256 cards, but the GeForce 256 DDR still retained the top speed. ATI was developing their own GPU at the time known as the Radeon which also implemented hardware T&L.
3dfx's Voodoo5 5500 did not have a T&L unit but it was able to match the performance of the GeForce 256, although the Voodoo5 was late to market and by its release it could not match the succeeding GeForce 2 GTS.
STMicroelectronics' PowerVR Kyro II, released in 2001, was able to rival the costlier ATI Radeon DDR and NVIDIA GeForce 2 GTS in benchmarks of the time, despite not having hardware transform and lighting. As more and more games were optimised for hardware transform and lighting, the KYRO II lost its performance advantage and is not supported by most modern games.
Futuremark's 3DMark 2000 heavily utilized hardware T&L, which resulted in the Voodoo 5 and Kyro II both scoring poorly in the benchmark tests, behind budget T&L video cards such as the GeForce 2 MX and Radeon SDR.
Industry standardization
By 2000, only ATI, with their comparable Radeon 7xxx series, would remain in direct competition with Nvidia's GeForce 256 and GeForce 2. By the end of 2001, all discrete graphics chips would have hardware T&L.
Support of hardware T&L assured the GeForce and Radeon of a strong future, unlike its Direct3D 6 predecessors which relied upon software T&L. While hardware T&L does not add new rendering features, the extra performance allowed for much more complex scenes and an increasing number of games recommended it anyway to run at optimal performance. GPUs that support T&L in hardware are usually considered to be in the DirectX 7.0 generation.
After hardware T&L had become standard in GPUs, the next step in computer 3D graphics was DirectX 8.0 with fully programmable vertex and pixel shaders. Nonetheless, many early games using DirectX 8.0 shaders, such as Half-Life 2, made that feature optional so DirectX 7.0 hardware T&L GPUs could still run the game. For instance, the GeForce 256 was supported in games up until approximately 2006, in games such as Star Wars: Empire at War.
| Technology | Computer science | null |
14419104 | https://en.wikipedia.org/wiki/Rivet%20gun | Rivet gun | A rivet gun, also known as a rivet hammer or a pneumatic hammer, is a type of tool used to drive rivets. The rivet gun is used on the rivet's factory head (the head present before riveting takes place), and a bucking bar is used to support the tail of the rivet. The energy from the hammer in the rivet gun drives the work and the rivet against the bucking bar. As a result, the tail of the rivet is compressed and work-hardened. At the same time the work is tightly drawn together and retained between the rivet head and the flattened tail (now called the shop head, or buck-tail, to distinguish it from the factory head). Nearly all rivet guns are pneumatically powered. Those rivet guns used to drive rivets in structural steel are quite large while those used in aircraft assembly are easily held in one hand. A rivet gun differs from an air hammer in the precision of the driving force.
Rivet guns vary in size and shape and have a variety of handles and grips. Pneumatic rivet guns typically have a regulator which adjusts the amount of air entering the tool. Regulated air entering passes through the throttle valve which is typically controlled by a trigger in the hand grip. When the trigger is squeezed, the throttle valve opens, allowing the pressurized air to flow into the piston. As the piston moves, a port opens allowing the air pressure to escape. The piston strikes against the rivet set. The force on the rivet set pushes the rivet into the work and against the bucking bar. The bucking bar deforms the tail of the rivet. The piston is returned to the original position by a spring or the shifting of a valve allowing air to drive the piston back to the starting position.
Slow-hitting
The slow-hitting gun strikes multiple blows as long as the trigger is held down. The repetition rate is about 2,500 blows-per-minute (bpm). It is easier to control than a one-hit gun. This is probably the most common type of rivet gun in use.
Fast-hitting gun
The fast-hitting gun strikes multiple light-weight blows at a high rate as long as the trigger is held down. These are repeated in the range of 2,500 to 5,000 bpm. The fast-hitting gun, sometimes referred to as a vibrator, is generally used with softer rivets.
Corner riveter
The corner riveter is a compact rivet gun that can be used in close spaces. The rivet is driven at right angles to the handle by a very short-barreled driver.
Squeeze riveter
This gun is different from the above rivet guns in that the air pressure is used to provide a squeezing action that compresses the rivet from both sides rather than distinct blows. The squeeze riveter can only be used close to the edge because of the limited depth of the anvil. Once properly adjusted, the squeeze riveter will produce very uniform rivet bucks. The stationary (fixed) jaw is placed against the head and the buck is compressed by the action of the gun.
Pop-rivet gun
A pop rivet gun is made to apply pop rivets to a workpiece, and was invented in 1916 by Hamilton Wylie. This type of rivet gun is unique in its operation, because it does not hammer the rivet into place. Rather, a pop rivet gun will form a rivet in place. The gun is fed over the rivet's mandrel (a shaft protruding from the rivet head) and the rivet tail is inserted into the work. When the gun is actuated (typically by squeezing the handle), a ball on the rivet's tail is drawn towards the head, compressing a metal sleeve between the ball and the head. This forms another "head" on the opposing side to the workpiece, drawing the work together and holding it securely in place. The mandrel has a weak point that breaks, or "pops", when the riveting process is complete. This style of rivet does not require the use of a bucking bar, because the force applied is away from the work.
| Technology | Hydraulics and pneumatics | null |
156739 | https://en.wikipedia.org/wiki/Hinny | Hinny | A hinny is a domestic equine hybrid, the offspring of a male horse (a stallion) and a female donkey (a jenny). It is the reciprocal cross to the more common mule, which is the product of a male donkey (a jack) and a female horse (a mare). The hinny is distinct from the mule both in physiology and temperament as a consequence of genomic imprinting and is also less common.
Description
The hinny is the offspring of a stallion and a jenny or female donkey, and is thus the reciprocal cross to the more common mule foaled by a jack (male donkey) out of a mare. Like the mule, the hinny displays hybrid vigour (heterosis).
In general terms, in both these hybrids the foreparts and head of the animal are similar to those of the sire, while the hindparts and tail are more similar to those of the dam. A hinny is generally smaller than a mule, with shorter ears and a lighter head; the tail is tasselled like that of its donkey mother.
The distinct phenotypes of the hinny and the mule are partly attributable to genomic imprinting – an element of epigenetic inheritance. Hinnies and mules differ in temperament despite sharing nuclear genomes; this too is believed to be attributable to the action of imprinted genes.
Fertility, sterility and rarity
According to most reports, hinnies are sterile and are not capable of reproduction. The male hinny can mate, but has testicles that produce only malformed spermatozoa. The dam of a foal carried to term in Henan Province of China in 1981 is variously reported to have been a mule or a hinny. Many supposed examples of the jumart, a supposed hybrid between a horse and a cow in European folklore, were found to be hinnies.
| Biology and health sciences | Hybrids | Animals |
156778 | https://en.wikipedia.org/wiki/Hypochondriasis | Hypochondriasis | Hypochondriasis or hypochondria is a condition in which a person is excessively and unduly worried about having a serious illness. Hypochondria is an old concept whose meaning has repeatedly changed over its lifespan. It has been claimed that this debilitating condition results from an inaccurate perception of the condition of body or mind despite the absence of an actual medical diagnosis. An individual with hypochondriasis is known as a hypochondriac. Hypochondriacs become unduly alarmed about any physical or psychological symptoms they detect, no matter how minor the symptom may be, and are convinced that they have, or are about to be diagnosed with, a serious illness.
Often, hypochondria persists even after a physician has evaluated a person and reassured them that their concerns about symptoms do not have an underlying medical basis or, if there is a medical illness, their concerns are far in excess of what is appropriate for the level of disease. It is also sometimes referred to as hypochondriaism, the state of being hypochondriacal. Many hypochondriacs focus on a particular symptom as the catalyst of their worrying, such as gastro-intestinal problems, palpitations, or muscle fatigue. To qualify for the diagnosis of hypochondria the symptoms must have been experienced for at least six months.
International Classification of Diseases (ICD-10) classifies hypochondriasis as a mental and behavioral disorder. In the Diagnostic and Statistical Manual of Mental Disorders, DSM-IV-TR defined the disorder "Hypochondriasis" as a somatoform disorder and one study has shown it to affect about 3% of the visitors to primary care settings. The 2013 DSM-5 replaced the diagnosis of hypochondriasis with the diagnoses of somatic symptom disorder (75%) and illness anxiety disorder (25%).
Hypochondria is often characterized by fears that minor bodily or mental symptoms may indicate a serious illness, constant self-examination and self-diagnosis, and a preoccupation with one's body. Many individuals with hypochondriasis express doubt and disbelief in the doctors' diagnosis, and report that doctors’ reassurance about an absence of a serious medical condition is unconvincing, or short-lasting. Additionally, many hypochondriacs experience elevated blood pressure, stress, and anxiety in the presence of doctors or while occupying a medical facility, a condition known as "white coat syndrome". Many hypochondriacs require constant reassurance, either from doctors, family, or friends, and the disorder can become a debilitating challenge for the individual with hypochondriasis, as well as their family and friends. Some individuals with hypochondria completely avoid any reminder of illness, whereas others frequently visit medical facilities, sometimes obsessively. Some may never speak about it.
A study of 41,190 people, published in December 2023 in JAMA Psychiatry, found that people suffering from hypochondriasis had a life expectancy five years shorter than that of people without symptoms.
Signs and symptoms
Hypochondriasis is categorized as a somatic amplification disorder—a disorder of "perception and cognition"—that involves hyper-vigilance toward the state of the body or mind and a tendency to react to initial perceptions in a negative manner that is further debilitating. Hypochondriasis manifests in many ways. Some people have numerous intrusive thoughts and physical sensations that push them to check with family, friends, and physicians. For example, a person who has a minor cough may think that they have tuberculosis. Or sounds produced by organs in the body, such as those made by the intestines, might be seen as a sign of a very serious illness to patients dealing with hypochondriasis.
Other people are so afraid of any reminder of illness that they will avoid medical professionals for a seemingly minor problem, sometimes to the point of becoming neglectful of their health when a serious condition may exist and go undiagnosed. Yet others live in despair and depression, certain that they have a life-threatening disease and no physician can help them. Some consider the disease as a punishment for past misdeeds.
Hypochondriasis is often accompanied by other psychological disorders. Bipolar disorder, clinical depression, obsessive-compulsive disorder (OCD), phobias, somatization disorder, and panic disorder are the most common accompanying conditions in people with hypochondriasis, as is a diagnosis of generalized anxiety disorder at some point in their life.
Many people with hypochondriasis experience a cycle of intrusive thoughts followed by compulsive checking, which is very similar to the symptoms of obsessive-compulsive disorder. However, while people with hypochondriasis are afraid of having an illness, patients with OCD worry about getting an illness or of transmitting an illness to others. Although some people might have both, these are distinct conditions.
Patients with hypochondriasis often are not aware that depression and anxiety produce their own physical symptoms, and mistake these symptoms for manifestations of another mental or physical disorder or disease. For example, people with depression often experience changes in appetite and weight fluctuation, fatigue, decreased interest in sex, and motivation in life overall. Intense anxiety is associated with rapid heartbeat, palpitations, sweating, muscle tension, stomach discomfort, dizziness, shortness of breath, and numbness or tingling in certain parts of the body (hands, forehead, etc.).
If a person is ill with a medical disease such as diabetes or arthritis, there will often be psychological consequences, such as depression. Some even report being suicidal. In the same way, someone with psychological issues such as depression or anxiety will sometimes experience physical manifestations of these affective fluctuations, often in the form of medically unexplained symptoms. Common symptoms include headaches; abdominal, back, joint, rectal, or urinary pain; nausea; fever and/or night sweats; itching; diarrhea; dizziness; or balance problems. Many people with hypochondriasis accompanied by medically unexplained symptoms feel they are not understood by their physicians, and are frustrated by their doctors’ repeated failure to provide symptom relief.
Cause
The genetic contribution to hypochondriasis is probably moderate, with heritability estimates around 10–37%. Non-shared environmental factors (i.e., experiences that differ between twins in the same family) explain most of the variance in key components of the condition such as the fear of illness and disease conviction. In contrast, the contribution of shared environmental factors (i.e., experiences shared by twins in the same family) to hypochondriasis is approximately zero.
Although little is known about exactly which non-shared environmental factors typically contribute to causing hypochondriasis, certain factors such as exposure to illness-related information are widely believed to lead to short-term increases in health anxiety and to have contributed to hypochondriasis in individual cases. An excessive focus on minor health concerns and serious illness of the individual or a family member in childhood have also been implicated as potential causes of hypochondriasis. Underlying anxiety disorders, such as generalized anxiety disorder, also increase an individual's risk.
In the media and on the Internet, articles, TV shows, and advertisements regarding serious illnesses such as cancer and multiple sclerosis often portray these diseases as being random, obscure, and somewhat inevitable. In the short term, inaccurate portrayal of risk and the identification of non-specific symptoms as signs of serious illness may contribute to exacerbating fear of illness. Major disease outbreaks or predicted pandemics can have similar effects.
Anecdotal evidence suggests that some individuals become hypochondriacal after experiencing a major medical diagnosis or the death of a family member or friend. Similarly, when approaching the age of a parent's premature death from disease, many otherwise healthy, happy individuals fall prey to hypochondria. These individuals believe they have the same disease that caused their parent's death, sometimes triggering panic attacks with corresponding symptoms.
Diagnosis
The ICD-10 defines hypochondriasis as follows:
A. Either one of the following:
A persistent belief, of at least six months' duration, of the presence of a minimum of two serious physical diseases (of which at least one must be specifically named by the patient).
A persistent preoccupation with a presumed deformity or disfigurement (body dysmorphic disorder).
B. Preoccupation with the belief and the symptoms causes persistent distress or interference with personal functioning in daily living and leads the patient to seek medical treatment or investigations (or equivalent help from local healers).
C. Persistent refusal to accept medical advice that there is no adequate physical cause for the symptoms or physical abnormality, except for short periods of up to a few weeks at a time immediately after or during medical investigations.
D. Most commonly used exclusion criteria: not occurring only during any of the schizophrenia and related disorders (F20–F29, particularly F22) or any of the mood disorders (F30–F39).
The DSM-IV defines hypochondriasis according to the following criteria:
A. Preoccupation with fears of having, or the idea that one has, a serious disease based on the person's misinterpretation of bodily symptoms.
B. The preoccupation persists despite appropriate medical evaluation and reassurance.
C. The belief in Criterion A is not of delusional intensity (as in Delusional Disorder, Somatic Type) and is not restricted to a circumscribed concern about appearance (as in Body Dysmorphic Disorder).
D. The preoccupation causes clinically significant distress or impairment in social, occupational, or other important areas of functioning.
E. The duration of the disturbance is at least 6 months.
F. The preoccupation is not better accounted for by Generalized Anxiety Disorder, Obsessive-Compulsive Disorder, Panic Disorder, a Major Depressive Episode, Separation Anxiety, or another Somatoform Disorder.
In the fifth version of the DSM (DSM-5), most who met criteria for DSM-IV hypochondriasis instead meet criteria for a diagnosis of somatic symptom disorder (SSD) or illness anxiety disorder (IAD).
Classification
The classification of hypochondriasis in relation to other psychiatric disorders has long been a topic of scholarly debate and has differed widely between different diagnostic systems and influential publications.
In the case of the DSM, the first and second versions listed hypochondriasis as a neurosis, whereas the third and fourth versions listed hypochondriasis as a somatoform disorder. The current version of the DSM (DSM-5) lists somatic symptom disorder (SSD) under the heading of "somatic symptom and related disorders", and illness anxiety disorder (IAD) under both this heading and as an anxiety disorder.
The ICD-10, like the third and fourth versions of the DSM, lists hypochondriasis as a somatoform disorder. The ICD-11, however, lists hypochondriasis under the heading of "obsessive-compulsive or related disorders".
There are also numerous influential scientific publications that have argued for other classifications of hypochondriasis. Notably, since the early 1990s, it has become increasingly common to regard hypochondriasis as an anxiety disorder, and to refer to the condition as "health anxiety" or "health related obsessive-compulsive disorder."
Treatment
Approximately 20 randomized controlled trials and numerous observational studies indicate that cognitive behavioral therapy (CBT) is an effective treatment for hypochondriasis. Typically, about two-thirds of patients respond to treatment, and about 50% of patients achieve remission, i.e., no longer have hypochondriasis after treatment. The effect size, or magnitude of benefit, appears to be moderate to large. CBT for hypochondriasis and health anxiety may be offered in various formats, including as face-to-face individual or group therapy, via telephone, or as guided self-help with information conveyed via a self-help book or online treatment platform. Effects are typically sustained over time.
There is also evidence that antidepressant medications such as selective serotonin reuptake inhibitors can reduce symptoms. In some cases, hypochondriasis responds well to antipsychotics, particularly the newer atypical antipsychotic medications.
Etymology
Among the regions of the abdomen, the hypochondrium is the uppermost part. The word derives from the Greek term ὑποχόνδριος hypokhondrios, meaning "of the soft parts between the ribs and navel" from ὑπό hypo ("under") and χόνδρος khondros, or cartilage (of the sternum). Hypochondria in Late Latin meant "the abdomen".
The term hypochondriasis for a state of disease without real cause reflected the ancient belief that the viscera of the hypochondria were the seat of melancholy and sources of the vapor that caused morbid feelings. Until the early 18th century, the term referred to a "physical disease caused by imbalances in the region that was below your rib cage" (i.e., of the stomach or digestive system). For example, Robert Burton's The Anatomy of Melancholy (1621) blamed it "for everything from 'too much spittle' to 'rumbling in the guts'."
Immanuel Kant discussed hypochondria in his 1798 book, Anthropology from a Pragmatic Point of View, like this: The disease of the hypochondriac consists in this: that certain bodily sensations do not so much indicate a really existing disease in the body as rather merely excite apprehensions of its existence: and human nature is so constituted – a trait which the animal lacks – that it is able to strengthen or make permanent local impressions simply by paying attention to them, whereas an abstraction – whether produced on purpose or by other diverting occupations – lessens these impressions, or even effaces them altogether.
Immanuel Kant, Anthropology from a Pragmatic Point of View (1798); Journal of Speculative Philosophy, vol. XVI, ed. William Torrey Harris, pp. 395–396.
| Biology and health sciences | Mental disorders | Health |
156787 | https://en.wikipedia.org/wiki/Desalination | Desalination | Desalination is a process that removes mineral components from saline water. More generally, desalination is the removal of salts and minerals from a substance. One example is soil desalination. This is important for agriculture. It is possible to desalinate saltwater, especially sea water, to produce water for human consumption or irrigation. The by-product of the desalination process is brine. Many seagoing ships and submarines use desalination. Modern interest in desalination mostly focuses on cost-effective provision of fresh water for human use. Along with recycled wastewater, it is one of the few water resources independent of rainfall.
Due to its energy consumption, desalinating sea water is generally more costly than obtaining fresh water from surface water or groundwater, water recycling, or water conservation; however, these alternatives are not always available, and depletion of reserves is a critical problem worldwide. Desalination processes use either thermal methods (in the case of distillation) or membrane-based methods (e.g. in the case of reverse osmosis).
An estimate in 2018 found that "18,426 desalination plants are in operation in over 150 countries. They produce 87 million cubic meters of clean water each day and supply over 300 million people." The energy intensity has improved: It is now about 3 kWh/m3 (in 2018), down by a factor of 10 from 20–30 kWh/m3 in 1970. Nevertheless, desalination represented about 25% of the energy consumed by the water sector in 2016.
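These figures can be cross-checked with simple arithmetic. The sketch below uses only the 87 million m3/day, 300 million people, and 3 kWh/m3 values quoted above; the resulting per-capita supply and daily energy demand are illustrative estimates, not sourced figures.

```python
# Back-of-the-envelope check of the 2018 figures quoted above.
daily_output_m3 = 87e6      # cubic meters of desalinated water per day
people_supplied = 300e6     # people supplied
energy_intensity = 3.0      # kWh per cubic meter (2018 figure)

per_capita_litres = daily_output_m3 * 1000 / people_supplied
daily_energy_gwh = daily_output_m3 * energy_intensity / 1e6

print(f"{per_capita_litres:.0f} L per person per day")  # ~290 L
print(f"{daily_energy_gwh:.0f} GWh per day")            # ~261 GWh
```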
History
Ancient Greek philosopher Aristotle observed in his work Meteorology that "salt water, when it turns into vapour, becomes sweet and the vapour does not form salt water again when it condenses", and that a fine wax vessel would hold potable water after being submerged long enough in seawater, having acted as a membrane to filter the salt.
At the same time the desalination of seawater was recorded in China. Both the Classic of Mountains and Seas from the Warring States period and the Theory of the Same Year in the Eastern Han dynasty mentioned that people found that the bamboo mats used for steaming rice would form a thin outer layer after long use. The thin film thus formed had adsorption and ion-exchange functions, which could adsorb salt.
Numerous examples of experimentation in desalination appeared throughout Antiquity and the Middle Ages, but desalination became feasible on a large scale only in the modern era. A good example of this experimentation comes from Leonardo da Vinci (Florence, 1452), who realized that distilled water could be made cheaply in large quantities by adapting a still to a cookstove. During the Middle Ages elsewhere in Central Europe, work continued on distillation refinements, although not necessarily directed towards desalination.
The first major land-based desalination plant may have been installed under emergency conditions on an island off the coast of Tunisia in 1560. It is believed that a garrison of 700 Spanish soldiers was besieged by the Turkish army and that, during the siege, the captain in charge fabricated a still capable of producing 40 barrels of fresh water per day, though details of the device have not been reported.
Before the Industrial Revolution, desalination was primarily of concern to oceangoing ships, which otherwise needed to keep supplies of fresh water on board. Sir Richard Hawkins (1562–1622), who made extensive travels in the South Seas, reported that he had been able to supply his men with fresh water by means of shipboard distillation. Additionally, during the early 1600s, several prominent figures of the era such as Francis Bacon and Walter Raleigh published reports on desalination. These reports and others set the climate for the first patent dispute concerning desalination apparatus. The first two patents regarding water desalination were approved in 1675 and 1683 (patents No. 184 and No. 226, published by William Walcot and Robert Fitzgerald (and others), respectively). Nevertheless, neither of the two inventions entered service as a consequence of scale-up difficulties. No significant improvements to the basic seawater distillation process were made during the 150 years from the mid-1600s until 1800.
When the frigate Protector was sold to Denmark in the 1780s (as the ship Hussaren) its still was studied and recorded in great detail. In the United States, Thomas Jefferson catalogued heat-based methods going back to the 1500s, and formulated practical advice that was publicized to all U.S. ships on the reverse side of sailing clearance permits.
Beginning about 1800, things started changing as a consequence of the appearance of the steam engine and the so-called age of steam. Knowledge of the thermodynamics of steam processes and the need for a pure water source for its use in boilers generated a positive effect regarding distilling systems. Additionally, the spread of European colonialism induced a need for freshwater in remote parts of the world, thus creating the appropriate climate for water desalination.
In parallel with the development and improvement of systems using steam (multiple-effect evaporators), this type of device quickly demonstrated its desalination potential. In 1852, Alphonse René le Mire de Normandy was issued a British patent for a vertical-tube seawater distilling unit that, thanks to its simplicity of design and ease of construction, gained popularity for shipboard use. Land-based units did not appear in significant numbers until the latter half of the nineteenth century. In the 1860s, the US Army purchased three Normandy evaporators, each rated at 7,000 gallons per day, and installed them on the islands of Key West and Dry Tortugas. Another land-based plant was installed at Suakin during the 1880s that provided freshwater to the British troops there. It consisted of six-effect distillers with a capacity of 350 tons per day.
After World War II, many technologies were developed or improved, such as multi-effect flash desalination (MEF) and multi-stage flash desalination (MSF). Another notable technology is freeze–thaw desalination. Freeze–thaw desalination (cryo-desalination or FD) excludes dissolved minerals from saline water through crystallization.
The Office of Saline Water was created in the United States Department of the Interior in 1955 in accordance with the Saline Water Conversion Act of 1952. This act was motivated by a water shortage in California and inland western United States. The Department of the Interior allocated resources including research grants, expert personnel, patent data, and land for experiments to further advancements.
The results of these efforts included the construction of over 200 electrodialysis and distillation plants globally, reverse osmosis (RO) research, and international cooperation (for example, the First International Water Desalination Symposium and Exposition in 1965). The Office of Saline Water merged into the Office of Water Resources Research in 1974.
The first industrial desalination plant in the United States opened in Freeport, Texas in 1961 after a decade of regional drought.
By the late 1960s and the early 1970s, RO started to show promising results as a replacement for traditional thermal desalination units. Research took place at state universities in California, at the Dow Chemical Company, and at DuPont. Many studies focused on ways to optimize desalination systems. The first commercial RO plant, the Coalinga desalination plant, was inaugurated in California in 1965 for brackish water. Dr. Sidney Loeb, in conjunction with staff at UCLA, designed a large pilot plant to gather data on RO, which proved successful enough to provide freshwater to the residents of Coalinga. This was a milestone in desalination technology, as it proved the feasibility of RO and its advantages compared to existing technologies (efficiency, no phase change required, ambient-temperature operation, scalability, and ease of standardization). A few years later, in 1975, the first seawater reverse osmosis desalination plant came into operation.
As of 2000, more than 2,000 plants were in operation. The largest are in Saudi Arabia, Israel, and the UAE; the biggest plant, at Ras Al Khair in Saudi Arabia, has a capacity of 1,401,000 m3/day.
As of 2021, 22,000 plants were in operation. In 2024, the Catalan government installed a floating offshore plant near the port of Barcelona and purchased 12 mobile desalination units for the northern region of the Costa Brava to combat the severe drought.
In 2012, cost averaged $0.75 per cubic meter. By 2022, that had declined (before inflation) to $0.41. Desalinated supplies are growing at a 10%+ compound rate, doubling in abundance every seven years.
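The quoted doubling time follows directly from the growth rate; the short sketch below assumes a steady 10% annual compound growth and is purely illustrative.

```python
import math

growth_rate = 0.10  # assumed steady 10% annual growth in desalinated supply
doubling_time_years = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time: {doubling_time_years:.1f} years")  # ~7.3 years
```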
Applications
There are now about 21,000 desalination plants in operation around the globe. The biggest ones are in the United Arab Emirates, Saudi Arabia, and Israel. The world's largest desalination plant is located in Saudi Arabia (Ras Al-Khair Power and Desalination Plant) with a capacity of 1,401,000 cubic meters per day.
Desalination is currently expensive compared to most alternative sources of water, and only a very small fraction of total human use is satisfied by desalination. It is usually only economically practical for high-valued uses (such as household and industrial uses) in arid areas. However, there is growth in desalination for agricultural use and highly populated areas such as Singapore or California. The most extensive use is in the Persian Gulf.
While noting costs are falling, and generally positive about the technology for affluent areas in proximity to oceans, a 2005 study argued, "Desalinated water may be a solution for some water-stress regions, but not for places that are poor, deep in the interior of a continent, or at high elevation. Unfortunately, that includes some of the places with the biggest water problems.", and, "Indeed, one needs to lift the water by 2000 m, or transport it over more than 1600 km to get transport costs equal to the desalination costs."
Thus, it may be more economical to transport fresh water from somewhere else than to desalinate it. In places far from the sea, like New Delhi, or in high places, like Mexico City, transport costs could match desalination costs. Desalinated water is also expensive in places that are both somewhat far from the sea and somewhat high, such as Riyadh and Harare. By contrast in other locations transport costs are much less, such as Beijing, Bangkok, Zaragoza, Phoenix, and, of course, coastal cities like Tripoli. After desalination at Jubail, Saudi Arabia, water is pumped 320 km inland to Riyadh. For coastal cities, desalination is increasingly viewed as a competitive choice.
In 2023, Israel was using desalination to replenish the Sea of Galilee's water supply.
Not everyone is convinced that desalination is or will be economically viable or environmentally sustainable for the foreseeable future. Debbie Cook wrote in 2011 that desalination plants can be energy intensive and costly. Therefore, water-stressed regions might do better to focus on conservation or other water supply solutions than invest in desalination plants.
Technologies
Desalination is an artificial process by which saline water (generally sea water) is converted to fresh water. The most common desalination processes are distillation and reverse osmosis.
There are several methods, each with advantages and disadvantages. The methods can be divided into membrane-based (e.g., reverse osmosis) and thermal-based (e.g., multistage flash distillation) methods. The traditional process of desalination is distillation (i.e., boiling and re-condensation of seawater to leave salt and impurities behind).
There are currently two technologies with a large majority of the world's desalination capacity: multi-stage flash distillation and reverse osmosis.
Distillation
Solar distillation
Solar distillation mimics the natural water cycle, in which the sun heats sea water enough for evaporation to occur. After evaporation, the water vapor is condensed onto a cool surface. There are two types of solar desalination. The first type uses photovoltaic cells to convert solar energy to electrical energy to power desalination. The second type converts solar energy to heat, and is known as solar thermal powered desalination.
Natural evaporation
Water can evaporate through several other physical effects besides solar irradiation. These effects have been combined in a multidisciplinary desalination methodology, the IBTS Greenhouse. The IBTS is an industrial desalination (power) plant on one side and a greenhouse operating with the natural water cycle (scaled down 1:10) on the other. The various processes of evaporation and condensation are hosted in low-tech utilities, partly underground and partly in the architectural shape of the building itself. This integrated biotectural system is most suitable for large-scale desert greening, as it has a square-kilometre-scale footprint for water distillation and the same again for landscape transformation in desert greening, that is, the regeneration of natural freshwater cycles.
Vacuum distillation
In vacuum distillation, atmospheric pressure is reduced, thus lowering the temperature required to evaporate the water. Liquids boil when their vapor pressure equals the ambient pressure, and vapor pressure increases with temperature. Effectively, liquids boil at a lower temperature when the ambient pressure is below the usual atmospheric pressure. Thus, because of the reduced pressure, low-temperature "waste" heat from electrical power generation or industrial processes can be employed.
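As an illustration of how reduced pressure lowers the boiling point, the sketch below evaluates the Antoine equation for water using commonly tabulated constants (an approximation valid roughly between 1 and 100 °C); the pressures chosen are arbitrary examples, not values from the text.

```python
import math

# Antoine equation for water: log10(P[mmHg]) = A - B / (C + T[degC]).
# Constants are commonly tabulated values, valid roughly 1-100 degC.
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_celsius(pressure_mmhg: float) -> float:
    """Temperature at which water's vapor pressure equals the given pressure."""
    return B / (A - math.log10(pressure_mmhg)) - C

print(f"{boiling_point_celsius(760):.1f} degC")  # ~100 degC at atmospheric pressure
print(f"{boiling_point_celsius(100):.1f} degC")  # ~52 degC at about 0.13 atm
```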
Multi-stage flash distillation
Water is evaporated and separated from sea water through multi-stage flash distillation, which is a series of flash evaporations. Each subsequent flash process uses energy released from the condensation of the water vapor from the previous step.
Multiple-effect distillation
Multiple-effect distillation (MED) works through a series of steps called "effects". Incoming water is sprayed onto pipes which are then heated to generate steam. The steam is then used to heat the next batch of incoming sea water. To increase efficiency, the steam used to heat the sea water can be taken from nearby power plants. Although this method is the most thermodynamically efficient among methods powered by heat, a few limitations exist, such as a maximum operating temperature and a maximum number of effects.
Vapor-compression distillation
Vapor-compression evaporation involves using either a mechanical compressor or a jet stream to compress the vapor present above the liquid. The compressed vapor is then used to provide the heat needed for the evaporation of the rest of the sea water. Since this system only requires power, it is more cost effective if kept at a small scale.
Membrane distillation
Membrane distillation uses a temperature difference across a membrane to drive water vapor from a brine solution through the membrane and condense pure water on the colder side. The design of the membrane can have a significant effect on efficiency and durability. A study found that a membrane created via co-axial electrospinning of PVDF-HFP and silica aerogel was able to reject 99.99% of salt after 30 days of continuous use.
Osmosis
Reverse osmosis
The leading process for desalination in terms of installed capacity and yearly growth is reverse osmosis (RO). The RO membrane processes use semipermeable membranes and applied pressure (on the membrane feed side) to preferentially induce water permeation through the membrane while rejecting salts. Reverse osmosis plant membrane systems typically use less energy than thermal desalination processes. Energy cost in desalination processes varies considerably depending on water salinity, plant size and process type. At present the cost of seawater desalination, for example, is higher than traditional water sources, but it is expected that costs will continue to decrease with technology improvements that include, but are not limited to, improved efficiency, reduction in plant footprint, improvements to plant operation and optimization, more effective feed pretreatment, and lower cost energy sources.
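The applied pressure must exceed the feed water's osmotic pressure. A rough estimate of that pressure for seawater can be made with the van 't Hoff relation, as sketched below; treating seawater as 35 g/L of pure NaCl at 25 °C is a simplification, so the result is only indicative.

```python
# Rough seawater osmotic pressure via the van 't Hoff relation: pi = i * M * R * T.
MOLAR_MASS_NACL = 58.44   # g/mol
R = 0.08314               # L*bar/(mol*K)

salinity = 35.0           # g/L, assumed seawater salinity (treated as NaCl only)
temperature = 298.0       # K
ions_per_formula = 2      # NaCl dissociates into Na+ and Cl-

molarity = salinity / MOLAR_MASS_NACL
osmotic_pressure_bar = ions_per_formula * molarity * R * temperature
print(f"~{osmotic_pressure_bar:.0f} bar")  # roughly 30 bar; RO plants apply more than this
```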
Reverse osmosis uses a thin-film composite membrane, which comprises an ultra-thin, aromatic polyamide thin-film. This polyamide film gives the membrane its transport properties, whereas the remainder of the thin-film composite membrane provides mechanical support. The polyamide film is a dense, void-free polymer with a high surface area, allowing for its high water permeability. A recent study has found that the water permeability is primarily governed by the internal nanoscale mass distribution of the polyamide active layer.
The reverse osmosis process requires maintenance. Various factors interfere with efficiency: ionic contamination (calcium, magnesium, etc.); dissolved organic carbon (DOC); bacteria; viruses; colloids and insoluble particulates; biofouling and scaling. In extreme cases, the RO membranes are destroyed. To mitigate damage, various pretreatment stages are introduced. Anti-scaling inhibitors include acids and other agents such as the organic polymers polyacrylamide and polymaleic acid, phosphonates, and polyphosphates. Inhibitors for fouling are biocides (as oxidants against bacteria and viruses), such as chlorine, ozone, and sodium or calcium hypochlorite. At regular intervals, depending on membrane contamination and fluctuating seawater conditions, or when prompted by monitoring processes, the membranes need to be cleaned, a procedure known as emergency or shock flushing. Flushing is done with inhibitors in a fresh water solution, and the system must go offline. This procedure is environmentally risky, since contaminated water is diverted into the ocean without treatment. Sensitive marine habitats can be irreversibly damaged.
Off-grid solar-powered desalination units use solar energy to fill a buffer tank on a hill with seawater. The reverse osmosis process receives its pressurized seawater feed in non-sunlight hours by gravity, resulting in sustainable drinking water production without the need for fossil fuels, an electricity grid or batteries. Nano-tubes are also used for the same function (i.e., Reverse Osmosis).
Forward osmosis
Forward osmosis uses a semi-permeable membrane to effect separation of water from dissolved solutes. The driving force for this separation is an osmotic pressure gradient, such as a "draw" solution of high concentration.
Freeze–thaw
Freeze–thaw desalination (or freezing desalination) uses freezing to remove fresh water from salt water. Salt water is sprayed during freezing conditions into a pad where an ice-pile builds up. When seasonal conditions warm, naturally desalinated melt water is recovered. This technique relies on extended periods of natural sub-freezing conditions.
A different freeze–thaw method, not weather dependent and invented by Alexander Zarchin, freezes seawater in a vacuum. Under vacuum conditions the ice, desalinated, is melted and diverted for collection and the salt is collected.
Electrodialysis
Electrodialysis uses electric potential to move salts through pairs of charged membranes, which trap salt in alternating channels. Several variants of electrodialysis exist, such as conventional electrodialysis and electrodialysis reversal.
Electrodialysis can simultaneously remove salt and carbonic acid from seawater. Preliminary estimates suggest that the cost of such carbon removal can be paid for in large part if not entirely from the sale of the desalinated water produced as a byproduct.
Microbial desalination
Microbial desalination cells are biological electrochemical systems that use electro-active bacteria to power desalination of water in situ, exploiting the natural anode and cathode gradient of the electro-active bacteria and thus creating an internal supercapacitor.
Wave-powered desalination
Wave powered desalination systems generally convert mechanical wave motion directly to hydraulic power for reverse osmosis. Such systems aim to maximize efficiency and reduce costs by avoiding conversion to electricity, minimizing excess pressurization above the osmotic pressure, and innovating on hydraulic and wave power components.
One such approach is desalination using submerged buoys, a wave-power approach pursued by CETO and Oneka. CETO's wave-powered desalination plants began operating on Garden Island in Western Australia in 2013 and in Perth in 2015, and Oneka has installations in Chile, Florida, California, and the Caribbean.
Wind-powered desalination
Wind energy can also be coupled to desalination. Similar to wave power, a direct conversion of mechanical energy to hydraulic power can reduce components and losses in powering reverse osmosis. Wind power has also been considered for coupling with thermal desalination technologies.
Other techniques
In April 2024, researchers from the Australian National University published experimental results of a novel desalination technique. This technique, thermodiffusive desalination, passes saline water through a channel with a temperature gradient. Species migrate under this temperature gradient in a process known as thermodiffusion. The researchers then separated the water into fractions. After multiple passes through the channel, they were able to achieve a NaCl concentration drop of 25,000 ppm with a recovery rate of 10% of the original water volume.
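A salt mass balance makes these figures concrete. The sketch below assumes a 35,000 ppm seawater feed (an assumption; only the 25,000 ppm drop and 10% recovery come from the study) and computes where the rejected salt ends up.

```python
feed_ppm = 35_000                 # assumed seawater feed salinity (not from the study)
recovery = 0.10                   # fraction recovered as product (from the text)
product_ppm = feed_ppm - 25_000   # 25,000 ppm concentration drop reported in the study

# Salt balance: feed = recovery * product + (1 - recovery) * reject
reject_ppm = (feed_ppm - recovery * product_ppm) / (1 - recovery)
print(f"Product: {product_ppm} ppm, reject: ~{reject_ppm:.0f} ppm")  # reject ~37,800 ppm
```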
Design aspects
Energy consumption
The desalination process's energy consumption depends on the water's salinity. Brackish water desalination requires less energy than seawater desalination.
The energy intensity of seawater desalination has improved: It is now about 3 kWh/m3 (in 2018), down by a factor of 10 from 20-30 kWh/m3 in 1970. This is similar to the energy consumption of other freshwater supplies transported over large distances, but much higher than local fresh water supplies that use 0.2 kWh/m3 or less.
A minimum energy consumption for seawater desalination of around 1 kWh/m3 has been determined, excluding prefiltering and intake/outfall pumping. Under 2 kWh/m3 has been achieved with reverse osmosis membrane technology, leaving limited scope for further energy reductions as the reverse osmosis energy consumption in the 1970s was 16 kWh/m3.
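The figure of roughly 1 kWh/m3 is consistent with a simple thermodynamic estimate: the minimum work needed to push one cubic meter of pure water across a membrane against seawater's osmotic pressure. The sketch below assumes an osmotic pressure of about 27 bar, so the result is approximate.

```python
# Minimum reversible work to recover 1 m3 of pure water from seawater,
# assuming an osmotic pressure of ~27 bar (2.7 MPa).
osmotic_pressure_pa = 27e5
volume_m3 = 1.0

min_work_kwh = osmotic_pressure_pa * volume_m3 / 3.6e6  # joules -> kWh
print(f"~{min_work_kwh:.2f} kWh/m3")  # ~0.75 kWh/m3 at vanishing recovery

# At practical recovery ratios the brine concentrates and the average osmotic
# pressure rises, which pushes the theoretical minimum toward ~1 kWh/m3.
```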
Supplying all US domestic water by desalination would increase domestic energy consumption by around 10%, about the amount of energy used by domestic refrigerators. Domestic consumption is a relatively small fraction of the total water usage.
Note: "Electrical equivalent" refers to the amount of electrical energy that could be generated using a given quantity of thermal energy and an appropriate turbine generator. These calculations do not include the energy required to construct or refurbish items consumed.
Given the energy-intensive nature of desalination and the associated economic and environmental costs, desalination is generally considered a last resort after water conservation. But this is changing as prices continue to fall.
Cogeneration
Cogeneration is generating useful heat energy and electricity from a single process. Cogeneration can provide usable heat for desalination in an integrated, or "dual-purpose", facility where a power plant provides the energy for desalination. Alternatively, the facility's energy production may be dedicated to the production of potable water (a stand-alone facility), or excess energy may be produced and incorporated into the energy grid. Cogeneration takes various forms, and theoretically any form of energy production could be used. However, the majority of current and planned cogeneration desalination plants use either fossil fuels or nuclear power as their source of energy. Most plants are located in the Middle East or North Africa, which use their petroleum resources to offset limited water resources. The advantage of dual-purpose facilities is they can be more efficient in energy consumption, thus making desalination more viable.
The current trend in dual-purpose facilities is hybrid configurations, in which the permeate from reverse osmosis desalination is mixed with distillate from thermal desalination. Basically, two or more desalination processes are combined along with power production. Such facilities have been implemented in Saudi Arabia at Jeddah and Yanbu.
A typical supercarrier in the US military is capable of using nuclear power to desalinate of water per day.
Alternatives to desalination
Increased water conservation and efficiency remain the most cost-effective approaches in areas with a large potential to improve the efficiency of water use practices. Wastewater reclamation provides multiple benefits over desalination of saline water, although it typically uses desalination membranes. Urban runoff and storm water capture also provide benefits in treating, restoring and recharging groundwater.
A proposed alternative to desalination in the American Southwest is the commercial importation of bulk water from water-rich areas either by oil tankers converted to water carriers, or pipelines. The idea is politically unpopular in Canada, where governments imposed trade barriers to bulk water exports as a result of a North American Free Trade Agreement (NAFTA) claim.
The California Department of Water Resources and the California State Water Resources Control Board submitted a report to the state legislature recommending that urban water suppliers achieve an indoor water use efficiency standard of per capita per day by 2023, declining to per day by 2025, and by 2030 and beyond.
Costs
Factors that determine the costs for desalination include capacity and type of facility, location, feed water, labor, energy, financing, and concentrate disposal. Costs of desalinating sea water (infrastructure, energy, and maintenance) are generally higher than fresh water from rivers or groundwater, water recycling, and water conservation, but alternatives are only sometimes available. Desalination costs in 2013 ranged from US$0.45 to US$1.00/m3. More than half of the cost comes directly from energy costs, and since energy prices are very volatile, actual costs can vary substantially.
The cost of untreated fresh water in the developing world can reach US$5/cubic metre.
Since 1975, desalination technology has seen significant advancements, decreasing the average cost of producing one cubic meter of freshwater from seawater from $1.10 in 2000 to approximately $0.50 today. Improved desalination efficiency is a primary factor contributing to this reduction. Energy consumption remains a significant cost component, accounting for up to half the total cost of the desalination process.
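The claim that energy can account for up to half the cost can be illustrated with the figures above and an assumed electricity price; the $0.08/kWh used below is an assumption, not a sourced value.

```python
energy_intensity = 3.0      # kWh per m3 (seawater reverse osmosis, from the figures above)
electricity_price = 0.08    # USD per kWh -- assumed; varies widely by region
total_cost_per_m3 = 0.50    # USD per m3, approximate figure quoted above

energy_cost = energy_intensity * electricity_price
share = energy_cost / total_cost_per_m3
print(f"Energy: ${energy_cost:.2f}/m3, about {share:.0%} of total")  # ~$0.24/m3, ~48%
```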
Desalination can substantially increase the energy intensity of water supply, particularly in regions with limited energy resources. For instance, in the island nation of Cyprus, desalination accounts for approximately 5% of the country's total power consumption.
The global desalination market was valued at $20 billion in 2023. With growing populations in arid coastal regions, this market is projected to double by 2032. In 2023, global desalination capacity reached 99 million cubic meters per day, a significant increase from 27 million cubic meters per day in 2003.
Desalination stills control pressure, temperature and brine concentrations to optimize efficiency. Nuclear-powered desalination might be economical on a large scale.
In 2014, the Israeli facilities of Hadera, Palmahim, Ashkelon, and Sorek were desalinizing water for less than US$0.40 per cubic meter. As of 2006, Singapore was desalinating water for US$0.49 per cubic meter.
Environmental concerns
Intake
In the United States, cooling water intake structures are regulated by the Environmental Protection Agency (EPA). These structures can have the same impacts on the environment as desalination facility intakes. According to EPA, water intake structures cause adverse environmental impact by sucking fish and shellfish or their eggs into an industrial system. There, the organisms may be killed or injured by heat, physical stress, or chemicals. Larger organisms may be killed or injured when they become trapped against screens at the front of an intake structure. Alternative intake types that mitigate these impacts include beach wells, but they require more energy and higher costs.
The Kwinana Desalination Plant opened in the Australian city of Perth, in 2007. Water there and at Queensland's Gold Coast Desalination Plant and Sydney's Kurnell Desalination Plant is withdrawn at , which is slow enough to let fish escape. The plant provides nearly of clean water per day.
Outflow
Desalination processes produce large quantities of brine, possibly at above-ambient temperature, containing residues of pretreatment and cleaning chemicals, their reaction byproducts, and heavy metals due to corrosion (especially in thermal-based plants). Chemical pretreatment and cleaning are a necessity in most desalination plants, which typically includes prevention of biofouling, scaling, foaming and corrosion in thermal plants, and of biofouling, suspended solids and scale deposits in membrane plants.
To limit the environmental impact of returning the brine to the ocean, it can be diluted with another stream of water entering the ocean, such as the outfall of a wastewater treatment or power plant. With medium to large power plant and desalination plants, the power plant's cooling water flow is likely to be several times larger than that of the desalination plant, reducing the salinity of the combination. Another method to dilute the brine is to mix it via a diffuser in a mixing zone. For example, once a pipeline containing the brine reaches the sea floor, it can split into many branches, each releasing brine gradually through small holes along its length. Mixing can be combined with power plant or wastewater plant dilution. Furthermore, zero liquid discharge systems can be adopted to treat brine before disposal.
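A simple mixing mass balance shows how co-discharge with cooling water reduces the salinity of the combined stream; all flows and salinities below are hypothetical round numbers, not values from the text.

```python
# Mixing mass balance for brine diluted with power-plant cooling water before discharge.
brine_flow = 1.0          # arbitrary flow units
brine_salinity = 70.0     # g/L, roughly double seawater (hypothetical)
cooling_flow = 4.0        # assumed several times the brine flow
cooling_salinity = 35.0   # g/L, ambient seawater (hypothetical)

mixed = (brine_flow * brine_salinity + cooling_flow * cooling_salinity) / (
    brine_flow + cooling_flow)
print(f"Discharge salinity: {mixed:.0f} g/L")  # 42 g/L instead of 70 g/L undiluted
```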
Another possibility is making the desalination plant movable, thus preventing the brine from building up in a single location as it continues to be produced. Some such movable (ship-connected) desalination plants have been constructed.
Brine is denser than seawater and therefore sinks to the ocean bottom, where it can damage the ecosystem. Brine plumes have been observed to diminish over time to a diluted concentration at which there was little to no effect on the surrounding environment. However, studies have shown that such dilution can be misleading because of the depth at which it occurred. If the dilution was observed during the summer season, a seasonal thermocline may have prevented the concentrated brine from sinking to the sea floor; in that case the brine may disturb the waters above rather than the sea-floor ecosystem. Brine dispersal from desalination plants has been observed to travel several kilometers, meaning that it has the potential to harm ecosystems far from the plants. Careful reintroduction with appropriate measures and environmental studies can minimize this problem.
Energy use
The energy demand for desalination in the Middle East, driven by severe water scarcity, is expected to double by 2030. Currently, this process primarily uses fossil fuels, comprising over 95% of its energy source. In 2023, desalination consumed nearly half of the residential sector's energy in the region.
Other issues
Due to the nature of the process, the plants need to be placed on approximately 25 acres of land on or near the shoreline. In the case of a plant built inland, pipes have to be laid into the ground to allow for easy intake and outfall. However, once the pipes are laid into the ground, they may leak into and contaminate nearby aquifers. Aside from environmental risks, certain types of desalination plants can also generate considerable noise.
Health aspects
Iodine deficiency
Desalination removes iodine from water and could increase the risk of iodine deficiency disorders. Israeli researchers claimed a possible link between seawater desalination and iodine deficiency, finding iodine deficits among adults exposed to iodine-poor water concurrently with an increasing proportion of their area's drinking water from seawater reverse osmosis (SWRO). They later found probable iodine deficiency disorders in a population reliant on desalinated seawater.
A possible link between heavy desalinated water use and national iodine deficiency was suggested by Israeli researchers. They found a high burden of iodine deficiency in the general population of Israel: 62% of school-age children and 85% of pregnant women fall below the WHO's adequacy range. They also pointed to the national reliance on iodine-depleted desalinated water, the absence of a universal salt iodization program, and reports of increased use of thyroid medication in Israel as possible reasons for the population's low iodine intake. In the year that the survey was conducted, the water produced from desalination plants constituted about 50% of the fresh water supplied for all needs and about 80% of the water supplied for domestic and industrial needs in Israel.
Experimental techniques
Other desalination techniques include:
Waste heat
Thermally driven desalination technologies are frequently suggested for use with low-temperature waste heat sources, as such low temperatures are of little use as process heat in many industrial processes but are well matched to the lower temperatures needed for desalination. In fact, such pairing with waste heat can even improve the electrical generation process:
Diesel generators commonly provide electricity in remote areas. About 40–50% of the energy output is low-grade heat that leaves the engine via the exhaust. Connecting a thermal desalination technology such as membrane distillation system to the diesel engine exhaust repurposes this low-grade heat for desalination. The system actively cools the diesel generator, improving its efficiency and increasing its electricity output. This results in an energy-neutral desalination solution. An example plant was commissioned by Dutch company Aquaver in March 2014 for Gulhi, Maldives.
Low-temperature thermal
Originally stemming from ocean thermal energy conversion research, low-temperature thermal desalination (LTTD) takes advantage of water boiling at low pressure, even at ambient temperature. The system uses pumps to create a low-pressure, low-temperature environment in which water boils at a temperature gradient of between two volumes of water. Cool ocean water is supplied from depths of up to . This water is pumped through coils to condense the water vapor. The resulting condensate is purified water. LTTD may take advantage of the temperature gradient available at power plants, where large quantities of warm wastewater are discharged from the plant, reducing the energy input needed to create a temperature gradient.
Experiments were conducted in the US and Japan to test the approach. In Japan, a spray-flash evaporation system was tested by Saga University. In Hawaii, the National Energy Laboratory tested an open-cycle OTEC plant with fresh water and power production using a temperature difference of between surface water and water at a depth of around . LTTD was studied by India's National Institute of Ocean Technology (NIOT) in 2004. Their first LTTD plant opened in 2005 at Kavaratti in the Lakshadweep islands. The plant's capacity is /day, at a capital cost of INR 50 million (€922,000). The plant uses deep water at a temperature of . In 2007, NIOT opened an experimental, floating LTTD plant off the coast of Chennai, with a capacity of /day. A smaller plant was established in 2009 at the North Chennai Thermal Power Station to prove the LTTD application where power plant cooling water is available.
Thermoionic process
In October 2009, Saltworks Technologies announced a process that uses solar or other thermal heat to drive an ionic current that removes all sodium and chlorine ions from the water using ion-exchange membranes.
Evaporation and condensation for crops
The Seawater greenhouse uses natural evaporation and condensation processes inside a greenhouse powered by solar energy to grow crops in arid coastal land.
Ion concentration polarisation (ICP)
In 2022, using a technique with multiple stages of ion concentration polarisation followed by a single stage of electrodialysis, researchers from MIT managed to create a filterless portable desalination unit capable of removing both dissolved salts and suspended solids. Designed for use by non-experts in remote areas or natural disasters, as well as on military operations, the prototype is the size of a suitcase, measuring 42 × 33.5 × 19 cm and weighing 9.25 kg. The process is fully automated, notifies the user when the water is safe to drink, and can be controlled by a single button or smartphone app. As it does not require a high-pressure pump, the process is highly energy efficient, consuming only 20 watt-hours per liter of drinking water produced, making it capable of being powered by common portable solar panels. Using a filterless design at low pressures, or replaceable filters, significantly reduces maintenance requirements, while the device itself is self-cleaning. However, the device is limited to producing 0.33 liters of drinking water per minute. There are also concerns that fouling will affect long-term reliability, especially in water with high turbidity. The researchers are working to increase the efficiency and production rate with the intent to commercialise the product in the future; however, a significant limitation is the reliance on expensive materials in the current design.
Other approaches
Adsorption-based desalination (AD) relies on the moisture absorption properties of certain materials such as Silica Gel.
Forward osmosis
One process was commercialized by Modern Water PLC using forward osmosis, with a number of plants reported to be in operation.
Hydrogel based desalination
The method relies on the fact that when a hydrogel is put into contact with an aqueous salt solution, it swells, absorbing a solution with an ion composition different from the original one. This solution can easily be squeezed out of the gel by means of a sieve or microfiltration membrane. Compressing the gel in a closed system leads to a change in salt concentration, whereas compressing it in an open system, while the gel exchanges ions with the bulk, leads to a change in the number of ions. The sequence of compression and swelling under open- and closed-system conditions mimics the reversed Carnot cycle of a refrigeration machine. The only difference is that, instead of heat, this cycle transfers salt ions from a bulk of low salinity to a bulk of high salinity. Like the Carnot cycle, this cycle is fully reversible, so it can in principle work with ideal thermodynamic efficiency. Because the method does not use osmotic membranes, it can compete with the reverse osmosis method. In addition, unlike reverse osmosis, the approach is not sensitive to the quality of the feed water or its seasonal changes, and allows the production of water of any desired concentration.
Small-scale solar
The United States, France and the United Arab Emirates are working to develop practical solar desalination. AquaDania's WaterStillar has been installed at Dahab, Egypt, and in Playa del Carmen, Mexico. In this approach, a solar thermal collector measuring two square metres can distill from 40 to 60 litres per day from any local water source – five times more than conventional stills. It eliminates the need for plastic PET bottles or energy-consuming water transport. In Central California, a startup company WaterFX is developing a solar-powered method of desalination that can enable the use of local water, including runoff water that can be treated and used again. Salty groundwater in the region would be treated to become freshwater, and in areas near the ocean, seawater could be treated.
Passarell
The Passarell process uses reduced atmospheric pressure rather than heat to drive evaporative desalination. The pure water vapor generated by distillation is then compressed and condensed using an advanced compressor. The compression process improves distillation efficiency by creating the reduced pressure in the evaporation chamber. The compressor centrifuges the pure water vapor after it is drawn through a demister (removing residual impurities) causing it to compress against tubes in the collection chamber. The compression of the vapor increases its temperature. The heat is transferred to the input water falling in the tubes, vaporizing the water in the tubes. Water vapor condenses on the outside of the tubes as product water. By combining several physical processes, Passarell enables most of the system's energy to be recycled through its evaporation, demisting, vapor compression, condensation, and water movement processes.
Geothermal
Geothermal energy can drive desalination. In most locations, geothermal desalination is preferable to using scarce groundwater or surface water, both environmentally and economically.
Nanotechnology
Nanotube membranes with higher permeability than the current generation of membranes may eventually reduce the footprint of RO desalination plants. It has also been suggested that the use of such membranes will reduce the energy needed for desalination.
Hermetic, sulphonated nano-composite membranes have been shown to be capable of removing various contaminants to the parts-per-billion level, and have little or no susceptibility to high salt concentration levels.
Biomimesis
Biomimetic membranes are another approach.
Electrochemical
In 2008, Siemens Water Technologies announced technology that applied electric fields to desalinate one cubic meter of water while using only a purported 1.5 kWh of energy. If accurate, this process would consume one-half the energy of other processes. As of 2012 a demonstration plant was operating in Singapore. Researchers at the University of Texas at Austin and the University of Marburg are developing more efficient methods of electrochemically mediated seawater desalination.
Electrokinetic shocks
A process employing electrokinetic shock waves can be used to accomplish membraneless desalination at ambient temperature and pressure. In this process, anions and cations in salt water are exchanged for carbonate anions and calcium cations, respectively using electrokinetic shockwaves. Calcium and carbonate ions react to form calcium carbonate, which precipitates, leaving fresh water. The theoretical energy efficiency of this method is on par with electrodialysis and reverse osmosis.
Temperature swing solvent extraction
Temperature Swing Solvent Extraction (TSSE) uses a solvent instead of a membrane or high temperatures.
Solvent extraction is a common technique in chemical engineering. It can be activated by low-grade heat (less than ), which may not require active heating. In one study, TSSE removed up to 98.4 percent of the salt in brine. A solvent whose solubility varies with temperature is added to saltwater. At room temperature the solvent draws water molecules away from the salt. The water-laden solvent is then heated, causing the solvent to release the now salt-free water.
It can desalinate extremely salty brine up to seven times as salty as the ocean. For comparison, the current methods can only handle brine twice as salty.
Wave energy
A small-scale offshore system uses wave energy to desalinate 30–50 m3/day. The system operates with no external power, and is constructed of recycled plastic bottles.
Plants
Trade Arabia reports that Saudi Arabia produces 7.9 million cubic meters of desalinated water daily, or 22% of the world total, as of year-end 2021.
Perth began operating a reverse osmosis seawater desalination plant in 2006. The Perth desalination plant is powered partially by renewable energy from the Emu Downs Wind Farm.
A desalination plant now operates in Sydney, and the Wonthaggi desalination plant was under construction in Wonthaggi, Victoria. A wind farm at Bungendore in New South Wales was purpose-built to generate enough renewable energy to offset the Sydney plant's energy use, mitigating concerns about harmful greenhouse gas emissions.
A January 17, 2008, article in The Wall Street Journal stated, "In November, Connecticut-based Poseidon Resources Corp. won a key regulatory approval to build the $300 million water-desalination plant in Carlsbad, north of San Diego. The facility would produce 190,000 cubic metres of drinking water per day, enough to supply about 100,000 homes." As of June 2012, the cost for the desalinated water had risen to $2,329 per acre-foot. Each $1,000 per acre-foot works out to $3.06 for 1,000 gallons, or $0.81 per cubic meter.
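The conversion quoted above can be checked directly; the sketch below uses the standard US definitions of an acre-foot, and the small difference from the article's $3.06 figure is rounding.

```python
GALLONS_PER_ACRE_FOOT = 325_851.4   # US gallons in one acre-foot
M3_PER_ACRE_FOOT = 1_233.48         # cubic meters in one acre-foot

price_per_acre_foot = 1_000.0       # USD
per_1000_gallons = price_per_acre_foot / (GALLONS_PER_ACRE_FOOT / 1_000)
per_m3 = price_per_acre_foot / M3_PER_ACRE_FOOT
print(f"${per_1000_gallons:.2f} per 1,000 gal, ${per_m3:.2f} per m3")  # ~$3.07, ~$0.81
```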
As new technological innovations continue to reduce the capital cost of desalination, more countries are building desalination plants as a small element in addressing their water scarcity problems.
Israel desalinizes water for a cost of 53 cents per cubic meter
Singapore desalinizes water for 49 cents per cubic meter and also treats sewage with reverse osmosis for industrial and potable use (NEWater).
China and India, the world's two most populous countries, are turning to desalination to provide a small part of their water needs
In 2007 Pakistan announced plans to use desalination
All Australian capital cities (except Canberra, Darwin, Northern Territory and Hobart) are either in the process of building desalination plants, or are already using them. In late 2011, Melbourne will begin using Australia's largest desalination plant, the Wonthaggi desalination plant, to raise low reservoir levels.
In 2007 Bermuda signed a contract to purchase a desalination plant
Before 2015, the largest desalination plant in the United States was at Tampa Bay, Florida, which began desalinizing 25 million gallons (95000 m3) of water per day in December 2007. In the United States, the cost of desalination is $3.06 for 1,000 gallons, or 81 cents per cubic meter. In the United States, California, Arizona, Texas, and Florida use desalination for a very small part of their water supply. Since 2015, the Claude "Bud" Lewis Carlsbad Desalination Plant has been producing 50 million gallons of drinking water daily.
After being desalinized at Jubail, Saudi Arabia, water is pumped inland through a pipeline to the capital city of Riyadh.
As of 2008, "World-wide, 13,080 desalination plants produce more than 12 billion gallons of water a day, according to the International Desalination Association." An estimate in 2009 found that the worldwide desalinated water supply will triple between 2008 and 2020.
One of the world's largest desalination hubs is the Jebel Ali Power Generation and Water Production Complex in the United Arab Emirates. It is a site featuring multiple plants using different desalination technologies and is capable of producing 2.2 million cubic meters of water per day.
A typical aircraft carrier in the U.S. military uses nuclear power to desalinize of water per day.
In nature
Evaporation of water over the oceans in the water cycle is a natural desalination process.
The formation of sea ice produces ice with little salt, a much lower salt content than that of seawater.
Seabirds distill seawater using countercurrent exchange in a gland with a rete mirabile. The gland secretes highly concentrated brine stored near the nostrils above the beak. The bird then "sneezes" the brine out. As freshwater is not usually available in their environments, some seabirds, such as pelicans, petrels, albatrosses, gulls and terns, possess this gland, which allows them to drink the salty water from their environments while they are far from land.
Mangrove trees grow in seawater; they secrete salt by trapping it in parts of the root, which are then eaten by animals (usually crabs). Additional salt is removed by storing it in leaves that fall off. Some types of mangroves have glands on their leaves, which work in a similar way to the seabird desalination gland. Salt is extracted to the leaf exterior as small crystals, which then fall off the leaf.
Willow trees and reeds absorb salt and other contaminants, effectively desalinating the water. This is used in artificial constructed wetlands for treating sewage.
Society and culture
Despite the issues associated with desalination processes, public support for its development can be very high. One survey of a Southern California community found 71.9% of respondents in support of desalination plant development in their community. In many cases, high freshwater scarcity corresponds to higher public support for desalination development, whereas areas with low water scarcity tend to have less public support for its development.
| Technology | Food, water and health | null |
156891 | https://en.wikipedia.org/wiki/Ford%20Model%20T | Ford Model T | The Ford Model T is an automobile that was produced by the Ford Motor Company from October 1, 1908, to May 26, 1927. It is generally regarded as the first mass-affordable automobile, which made car travel available to middle-class Americans. The relatively low price was partly the result of Ford's efficient fabrication, including assembly line production instead of individual handcrafting. The savings from mass production allowed the price to decline from $780 in 1910 to $290 in 1924. It was mainly designed by three engineers, Joseph A. Galamb (the main engineer), Eugene Farkas, and Childe Harold Wills. The Model T was colloquially known as the "Tin Lizzie".
The Ford Model T was named the most influential car of the 20th century in the 1999 Car of the Century competition, ahead of the BMC Mini, Citroën DS, and Volkswagen Beetle. Ford's Model T was successful not only because it provided inexpensive transportation on a massive scale, but also because the car signified innovation for the rising middle class and became a powerful symbol of the United States' age of modernization. With over 15 million sold, it was the most sold car in history before being surpassed by the Volkswagen Beetle in 1972.
Introduction
Early automobiles, which were produced from the 1880s, were mostly scarce, expensive, and often unreliable. Being the first reliable, easily maintained, mass-market motorized transportation made the Model T into a great success: Within a few days after release, 15,000 orders were placed.
The first production Model T was built on August 12, 1908, and left the factory on September 27, 1908, at the Ford Piquette Avenue Plant in Detroit, Michigan. On May 26, 1927, Henry Ford watched the 15 millionth Model T Ford roll off the assembly line at his factory in Highland Park, Michigan.
Henry Ford conceived a series of cars between the founding of the company in 1903 and the introduction of the Model T. Ford named his first car the Model A and proceeded through the alphabet up through the Model T – twenty models in all, not all of which went into production. The production model immediately before the Model T was the Model S, an upgraded version of the company's largest success to that point, the Model N. The follow-up to the Model T was another Ford Model A, rather than the "Model U". The company publicity said this was because the new car was such a departure from the old that Ford wanted to start all over again with the letter A.
The Model T was Ford's first automobile mass-produced on moving assembly lines with completely interchangeable parts, marketed to the middle class. Henry Ford said of the vehicle:
I will build a motor car for the great multitude. It will be large enough for the family, but small enough for the individual to run and care for. It will be constructed of the best materials, by the best men to be hired, after the simplest designs that modern engineering can devise. But it will be so low in price that no man making a good salary will be unable to own one – and enjoy with his family the blessing of hours of pleasure in God's great open spaces.
Although credit for the development of the assembly line belongs to Ransom E. Olds, with the first mass-produced automobile, the Oldsmobile Curved Dash, having begun in 1901, the tremendous advances in the efficiency of the system over the life of the Model T can be credited almost entirely to Ford and his engineers.
Characteristics
The Model T was designed by Childe Harold Wills, and Hungarian immigrants Joseph A. Galamb (main engineer) and Eugene Farkas. Henry Love, C. J. Smith, Gus Degner and Peter E. Martin were also part of the team, as were Galamb's fellow Hungarian immigrants Gyula Hartenberger and Károly Balogh. Production of the Model T began in the third quarter of 1908. Collectors today sometimes classify Model Ts by build years and refer to these as "model years", thus labeling the first Model Ts as 1909 models. This is a retroactive classification scheme; the concept of model years as understood today did not exist at the time. Even though design revisions occurred during the car's two decades of production, the company gave no particular name to any of the revised designs; all of them were called simply "Model T".
Engine
The Model T has a front-mounted inline four-cylinder engine, producing , for a top speed of . According to Ford Motor Company, the Model T had fuel economy of . The engine was designed to run on gasoline, though it could also run on kerosene or ethanol; however, the decreasing cost of gasoline and the later introduction of Prohibition made ethanol an impractical fuel for most users. The engines of the first 2,447 units were cooled with water pumps; the engines of unit 2,448 and onward, with a few exceptions prior to around unit 2,500, were cooled by thermosiphon action.
The ignition system used in the Model T was an unusual one, with a low-voltage magneto incorporated in the flywheel, supplying alternating current to trembler coils to drive the spark plugs. This was closer to that used for stationary gas engines than the expensive high-voltage ignition magnetos that were used on some other cars. This ignition also made the Model T more flexible as to the quality or type of fuel it used. The system did not need a starting battery, since proper hand-cranking would generate enough current for starting. Electric lighting powered by the magneto was adopted in 1915, replacing acetylene gas flame lamps and oil lamps, but electric starting was not offered until 1919.
The Model T engine was produced for replacement needs as well as stationary and marine applications until 1941, well after production of the Model T ended.
The Fordson Model F tractor engine, designed about a decade later, was very similar to, but larger than, the Model T engine.
Transmission and drive train
The Model T is a rear-wheel drive vehicle. Its transmission is a planetary gear type known (at the time) as "three speed". In today's terms it is considered a two-speed, because one of the three speeds is reverse.
The Model T's transmission is controlled with three floor-mounted pedals, a revolutionary feature for its time, and a lever mounted to the road side of the driver's seat. The throttle is controlled with a lever on the steering wheel. The left-hand pedal is used to engage the transmission. With the floor lever in either the mid position or fully forward and the pedal pressed and held forward, the car enters low gear. When held in an intermediate position, the car is in neutral. If the left pedal is released, the Model T enters high gear, but only when the lever is fully forward – in any other position, the pedal only moves up as far as the central neutral position. This allows the car to be held in neutral while the driver cranks the engine by hand. The car can thus cruise without the driver having to press any of the pedals.
In the first 800 units, reverse is engaged with a lever; all units after that use the central pedal, which is used to engage reverse gear when the car is in neutral. The right-hand pedal operates the transmission brake – there are no brakes on the wheels. The floor lever also controls the parking brake, which is activated by pulling the lever all the way back. This doubles as an emergency brake.
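As a rough illustration of the control scheme described above, the following Python sketch models gear selection from the floor lever and pedal positions. The discrete position names and the function itself are illustrative assumptions, not Ford terminology, and the throttle lever and transmission-brake pedal are ignored.

```python
# Simplified model of the Model T's pedal-and-lever control scheme described above.
def model_t_state(lever: str, left_pedal: str, reverse_pedal_pressed: bool) -> str:
    """lever: 'forward', 'mid', or 'back'; left_pedal: 'down', 'mid', or 'up'."""
    if lever == "back":                       # lever pulled all the way back: parking brake
        return "parking brake (neutral)"
    if left_pedal == "down":                  # pedal held forward engages low gear
        return "low gear"
    if left_pedal == "mid":                   # intermediate pedal position is neutral
        return "reverse" if reverse_pedal_pressed else "neutral"
    # Pedal released: high gear only with the lever fully forward;
    # in any other lever position the pedal stops at the neutral position.
    if lever == "forward":
        return "high gear"
    return "reverse" if reverse_pedal_pressed else "neutral"

print(model_t_state("forward", "up", False))   # high gear (cruising, no pedals pressed)
print(model_t_state("mid", "up", False))       # neutral, e.g. while hand-cranking the engine
print(model_t_state("mid", "mid", True))       # reverse, engaged from neutral
```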
Although it was uncommon, the drive bands could fall out of adjustment, allowing the car to creep, particularly when cold, adding another hazard to attempting to start the car: a person cranking the engine could be forced backward while still holding the crank as the car crept forward, although it was nominally in neutral. As the car utilizes a wet clutch, this condition could also occur in cold weather, when the thickened oil prevents the clutch discs from slipping freely. Power reaches the differential through a single universal joint attached to a torque tube which drives the rear axle; some models (typically trucks, but available for cars, as well) could be equipped with an optional two-speed rear Ruckstell axle, shifted by a floor-mounted lever which provides an underdrive gear for easier hill climbing.
Chassis / frame
The heavy-duty Model TT truck chassis came with a special worm gear rear differential with lower gearing than the normal car and truck, giving more pulling power but a lower top speed (the frame is also stronger; the cab and engine are the same). A Model TT is easily identifiable by the cylindrical housing for the worm-drive over the axle differential. All gears are vanadium steel running in an oil bath.
Transmission bands and linings
Two main types of band lining material were used:
Cotton – Cotton woven linings were the original type fitted and specified by Ford. Generally, the cotton lining is "kinder" to the drum surface, with damage to the drum caused only by the retaining rivets scoring the drum surface. Although this in itself did not pose a problem, a dragging band resulting from improper adjustment caused overheating of the transmission and engine, diminished power, and – in the case of cotton linings – rapid destruction of the band lining.
Wood – Wooden linings were originally offered as a "longer life" accessory part during the life of the Model T. They were a single piece of steam-bent wood and metal wire, fitted to the normal Model T transmission band. These bands give a very different feel to the pedals, with much more of a "bite" feel. The sensation is of a definite "grip" of the drum, which seems to noticeably increase the feel, in particular of the brake drum.
Aftermarket transmissions and drives
During the Model T's production run, particularly after 1916, more than 30 manufacturers offered auxiliary transmissions or drives to substitute for, or enhance, the Model T's drivetrain gears. Some offered overdrive for greater speed and efficiency, while others offered underdrives for more torque (often incorrectly described as "power") to enable hauling or pulling greater loads. Among the most noted were the Ruckstell two-speed rear axle, and transmissions by Muncie, Warford, and Jumbo.
Aftermarket transmissions generally fit one of four categories:
Replacement transmission – usually a sliding gear/selective transmission, intended as a direct replacement for Ford's planetary-gear transmission.
Front-mounted auxiliary transmission – designed to fit between the engine and Ford's transmission, to add additional gear ratios.
Rear-mounted auxiliary transmission – mounted at the rear axle housing, and attached between it and the driveshaft, to add additional gear ratios.
Multi-speed axle – designed to fit inside the differential's housing, to add additional gear ratios.
Murray Fahnestock, a Ford expert in the era of the Model T, particularly advised the use of auxiliary transmissions for the enclosed Model T's, such as the Ford Sedan and Coupelet, for three reasons: their greater weight put more strain on the drivetrain and engine, which auxiliary transmissions could smooth out; their bodies acted as sounding boards, echoing engine noise and vibration at higher engine speeds, which could be lessened with intermediate gears; and owners of the enclosed cars spent more to buy them, and thus likely had more money to enhance them.
He also noted that auxiliary transmissions were valuable for Ford Ton-Trucks in commercial use, allowing for driving speeds to vary with their widely variable loads – particularly when returning empty – possibly saving as much as 50% of returning drive time.
Suspension and wheels
Model T suspension employed a transversely mounted semi-elliptical spring for each of the front and rear beam axles, which allowed a great deal of wheel movement to cope with the dirt roads of the time.
The front axle was drop forged as a single piece of vanadium steel. Ford twisted many axles through eight full rotations (2880 degrees) and sent them to dealers to be put on display to demonstrate its superiority.
The Model T did not have a modern service brake. The right foot pedal applied a band around a drum in the transmission, thus stopping the rear wheels from turning. The previously mentioned parking brake lever operated band brakes acting on the inside of the rear brake drums, which were an integral part of the rear wheel hubs. Optional brakes that acted on the outside of the brake drums were available from aftermarket suppliers.
Wheels were wooden artillery wheels, with steel welded-spoke wheels available in 1926 and 1927.
Tires were pneumatic clincher type, in diameter, wide in the rear, in the front. Clinchers needed much higher pressure than today's tires, typically , to prevent them from leaving the rim at speed. Flat tires were a common problem.
Balloon tires became available in 1925. They were all around. Balloon tires were closer in design to today's tires, with steel wires reinforcing the tire bead, making lower pressure possible – typically – giving a softer ride. The steering gear ratio was changed from 4:1 to 5:1 with the introduction of balloon tires. The nomenclature for tire size also changed from measuring the outer diameter to measuring the rim diameter, so (rim diameter) × (tire width) wheels have about the same outer diameter as clincher tires. All tires in this time period used an inner tube to hold the pressurized air; tubeless tires were not generally in use until much later.
Wheelbase is and standard track width was – track could be obtained on special order, "for Southern roads," identical to the pre-Civil War track gauge for many railroads in the former Confederacy. The standard 56-inch track, being very near the standard railroad track gauge, meant that Model Ts could be, and frequently were, fitted with flanged wheels and used as motorized railway vehicles or "speeders". The availability of a version meant the same could be done on the few remaining Southern railways – these being the only nonstandard lines remaining, except for a few narrow-gauge lines of various sizes. Although a Model T could be adapted to run on track as narrow as gauge (the Wiscasset, Waterville and Farmington RR in Maine has one), this was a more complex alteration.
Colors
By 1918, half of all the cars in the U.S. were Model Ts. In his autobiography, Ford reported that in 1909 he told his management team, "Any customer can have a car painted any color that he wants so long as it is black."
However, in the first years of production from 1908 to 1913, the Model T was not available in black, but rather only in gray, green, blue, and red. Green was available for the touring cars, town cars, coupes, and Landaulets. Gray was available for the town cars only and red only for the touring cars. By 1912, all cars were being painted midnight blue with black fenders. Only in 1914 was the "any color so long as it is black" policy finally implemented.
It is often stated that Ford suggested the use of black from 1914 to 1925 due to the low cost, durability, and faster drying time of black paint in that era. There is no evidence that black dried any faster than the other dark varnishes used at the time, but carbon black pigment was indeed one of the cheapest (if not the cheapest) available, and the dark color of gilsonite, a form of bitumen that made the cheap metal paints of the time durable, limited the final color options to dark shades of maroon, blue, green, or black. During that period, Ford used two similar types of so-called Japan black paint, one as a base coat applied directly to the metal and another as a final finish.
Paint choices in the American automotive industry, as well as in others (including locomotives, furniture, bicycles, and the rapidly expanding field of electrical appliances), were shaped by the development of the chemical industry. These included the disruption of dye sources during World War I and the advent, by the mid-1920s, of new nitrocellulose lacquers that were faster-drying and more scratch-resistant and obviated the need for multiple coats. Understanding the choice of paints for the Model T era and the years immediately following requires an understanding of the contemporaneous chemical industry.
During the lifetime production of the Model T, over 30 types of black paint were used on various parts of the car. These were formulated to satisfy the different means of applying the paint to the various parts, and had distinct drying times, depending on the part, paint, and method of drying.
Body
Although Ford classified the Model T with a single letter designation throughout its entire life and made no distinction by model years, enough significant changes to the body were made over the production life that the car may be classified into several style generations. The most immediately visible and identifiable changes were in the hood and cowl areas, although many other modifications were made to the vehicle.
1909–1914 – Characterized by a nearly straight, five-sided hood, with a flat top containing a center hinge and two side sloping sections containing the folding hinges. The firewall is flat from the windshield down with no distinct cowl. For these years, acetylene gas flame headlights were used because the flame is resistant to wind and rain. Thick concave mirrors combined with magnifying lenses projected the acetylene flame light. The fuel tank is placed under the front seat.
1915–1916 – The hood design is nearly the same five-sided design with the only obvious change being the addition of louvers to the vertical sides. A significant change to the cowl area occurred with the windshield relocated significantly behind the firewall and joined with a compound-contoured cowl panel. In these years electric headlights replaced carbide headlights.
1917–1923 – The hood design was changed to a tapered design with a curved top. The folding hinges were now located at the joint between the flat sides and the curved top. This is sometimes referred to as the "low hood" to distinguish it from the later hoods. The back edge of the hood now met the front edge of the cowl panel so that no part of the flat firewall was visible outside of the hood. This design was used the longest and during the highest production years, accounting for about half of the total number of Model Ts built.
1923–1925 – This change was made during the 1923 calendar year, so models built earlier in the year have the older design, while later vehicles have the newer design. The taper of the hood was increased and the rear section at the firewall is about an inch taller and several inches wider than the previous design. While this is a relatively minor change, the parts between the third and fourth generations are not interchangeable.
1926–1927 – This design change made the greatest difference in the appearance of the car. The hood was again enlarged, with the cowl panel no longer a compound curve and blended much more with the line of the hood. The distance between the firewall and the windshield was also increased significantly. This style is sometimes referred to as the "high hood".
The styling on the last "generation" was a preview for the following Model A, but the two models are visually quite different, as the body on the A is much wider and has curved doors as opposed to the flat doors on the T.
Diverse applications
When the Model T was designed and introduced, the infrastructure of the world was quite different from today's. Pavement was a rarity except for sidewalks and a few big-city streets. (The meaning of the term "pavement" as opposed to "sidewalk" comes from that era, when streets and roads were generally dirt and sidewalks were a paved way to walk along them.) Agriculture was the occupation of many people. Power tools were scarce outside factories, as were power sources for them; electrification, like pavement, was found usually only in larger towns. Rural electrification and motorized mechanization were embryonic in some regions and nonexistent in most. Henry Ford oversaw the requirements and design of the Model T based on contemporary realities. Consequently, the Model T was (intentionally) almost as much a tractor and portable engine as it was an automobile. It has always been well regarded for its all-terrain abilities and ruggedness. It could travel a rocky, muddy farm lane, cross a shallow stream, climb a steep hill, and be parked on the other side to have one of its wheels removed and a pulley fastened to the hub for a flat belt to drive a bucksaw, thresher, silo blower, conveyor for filling corn cribs or haylofts, baler, water pump, electrical generator, and many other applications. One unique application of the Model T was shown in the October 1922 issue of Fordson Farmer magazine. It showed a minister who had transformed his Model T into a mobile church, complete with small organ.
During this era, entire automobiles (including thousands of Model Ts) were hacked apart by their owners and reconfigured into custom machinery permanently dedicated to a purpose, such as homemade tractors and ice saws. Dozens of aftermarket companies sold prefab kits to facilitate the T's conversion from car to tractor. The Model T had been around for a decade before the Fordson tractor became available (1917–18), and many Ts were converted for field use. (For example, Harry Ferguson, later famous for his hitches and tractors, worked on Eros Model T tractor conversions before he worked with Fordsons and others.) During the next decade, Model T tractor conversion kits were harder to sell, as the Fordson and then the Farmall (1924), as well as other light and affordable tractors, served the farm market. But during the Depression (1930s), Model T tractor conversion kits had a resurgence, because by then used Model Ts and junkyard parts for them were plentiful and cheap.
Like many popular car engines of the era, the Model T engine was also used on home-built aircraft (such as the Pietenpol Sky Scout) and motorboats.
During World War I, the Model T was heavily used by the Allies in different roles and configurations, such as staff cars, light cargo trucks, light vans, light patrol cars, liaison vehicles and even as rail tractors. The ambulance version proved to be well-suited for use in the combat areas. The ambulances could carry three stretcher patients or four seated patients, and two others could sit with the driver. Besides those made in the United States, ambulance bodies were also made by a firm of Boulogne, near Paris. The Romanian Army also made use of converted Model T ambulances. These ambulances, named "Regina Maria" ambulances, were capable of carrying four stretcher patients. Conversion work was done by the Leonida Workshops of Bucharest. An armored-car variant (called the "FT-B") was developed in Poland due to high demand during the Polish-Soviet War of 1920.
Many Model Ts were converted into vehicles that could travel across heavy snows with kits on the rear wheels (sometimes with an extra pair of rear-mounted wheels and two sets of continuous track to mount on the now-tandemed rear wheels, essentially making it a half-track) and skis replacing the front wheels. They were popular for rural mail delivery for a time. The common name for these conversions of cars and small trucks was "snowflyers". These vehicles were extremely popular in the northern reaches of Canada, where factories were set up to produce them.
A number of companies built Model T–based railcars. In The Great Railway Bazaar, Paul Theroux mentions a rail journey in India on such a railcar. The New Zealand Railways Department's RM class included a few.
The American LaFrance company modified more than 900 Model Ts for use in firefighting, adding tanks, hoses, tools and a bell. Model T fire engines were in service in North America, Europe, and Australia. A 1919 Model T equipped to fight chemical fires has been restored and is on display at the North Charleston Fire Museum in South Carolina.
Production
Mass production
The knowledge and skills needed by a factory worker were reduced to 84 areas. When introduced, the T used the building methods typical at the time, assembly by hand, and production was small. The Ford Piquette Avenue Plant could not keep up with demand for the Model T, and only 11 cars were built there during the first full month of production. More and more machines were used to reduce the complexity within the 84 defined areas. In 1910, after assembling nearly 12,000 Model Ts, Henry Ford moved the company to the new Highland Park complex. During this time the Model T production system (including the supply chain) transitioned into an iconic example of assembly-line production. In subsequent decades it would also come to be viewed as the classic example of the rigid, first-generation version of assembly line production, as opposed to flexible mass production of higher quality products.
As a result, Ford's cars came off the line in three-minute intervals, much faster than previous methods, reducing production time from hours before to 93 minutes by 1914, while using less manpower. In 1914, Ford produced more cars than all other automakers combined. The Model T was a great commercial success, and by the time Ford made its 10 millionth car, half of all cars in the world were Fords. It was so successful that Ford did not purchase any advertising between 1917 and 1923; the Model T had become so famous that people considered it the norm. More than 15 million Model Ts were manufactured in all, reaching a rate of 9,000 to 10,000 cars a day in 1925, or 2 million annually, more than any other model of its day, at a price of just $260. Total Model T production was finally surpassed by the Volkswagen Beetle on February 17, 1972, while the Ford F-Series (itself directly descended from the Model T roadster pickup) has surpassed the Model T as Ford's all-time best-selling model.
Henry Ford's ideological approach to Model T design was one of getting it right and then keeping it the same; he believed the Model T was all the car a person would, or could, ever need. As other companies offered comfort and styling advantages, at competitive prices, the Model T lost market share and became barely profitable. Design changes were not as few as the public perceived, but the idea of an unchanging model was kept intact. Eventually, on May 26, 1927, Ford Motor Company ceased US production and began the changeovers required to produce the Model A. Some of the other Model T factories in the world continued for a short while, with the final Model T produced at the Cork, Ireland plant in December 1927.
Model T engines continued to be produced until August 4, 1941. Almost 170,000 were built after car production stopped, as replacement engines were required to service the many existing vehicles. Racers and enthusiasts, forerunners of modern hot rodders, used the Model Ts' blocks to build popular and cheap racing engines, including Cragar, Navarro, and, famously, the Frontenacs ("Fronty Fords") of the Chevrolet brothers, among many others.
The Model T employed some advanced technology, for example, its use of vanadium steel alloy. Its durability was phenomenal, and some Model Ts and their parts are in running order over a century later. Although Henry Ford resisted some kinds of change, he always championed the advancement of materials engineering, and often mechanical engineering and industrial engineering.
In 2002, Ford built a final batch of six Model Ts as part of their 2003 centenary celebrations. These cars were assembled from remaining new components and other parts produced from the original drawings. The last of the six was used for publicity purposes in the UK.
Although Ford no longer manufactures parts for the Model T, many parts are still manufactured through private companies as replicas to service the thousands of Model Ts still in operation today.
On May 26, 1927, Henry Ford and his son Edsel drove the 15-millionth Model T out of the factory. This marked the famous automobile's official last day of production at the main factory.
Price and production
The moving assembly line system, which started on October 7, 1913, allowed Ford to reduce the price of his cars. As he continued to fine-tune the system, Ford was able to keep reducing costs significantly. As volume increased, he was also able to lower prices, since some of the fixed costs were spread over a larger number of vehicles even as large supply chain investments increased assets per vehicle. Other factors reduced the price as well, such as material costs and design changes. As Ford had market dominance in North America during the 1910s, other competitors reduced their prices to stay competitive, while offering features that were not available on the Model T, such as a wide choice of colors, body styles, and interior appearance and choices; competitors also benefited from the reduced costs of raw materials and from infrastructure benefits to supply chain and ancillary manufacturing businesses.
In 1909, the cost of the Runabout started at . By 1925 it had been lowered to .
The figures below are US production numbers compiled by R. E. Houston, Ford Production Department, August 3, 1927. The figures between 1909 and 1920 are for Ford's fiscal year. From 1909 to 1913, the fiscal year ran from October 1 to September 30 of the following calendar year, with the year number being the year in which it ended. For the 1914 fiscal year, the year was October 1, 1913, through July 31, 1914. Starting in August 1914, and through the end of the Model T era, the fiscal year was August 1 through July 31. Beginning with January 1920, the figures are for the calendar year.
The above tally includes a total of 14,689,525 vehicles. Ford said the last Model T was the 15 millionth vehicle produced.
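The shifting fiscal-year scheme described above can be made concrete with a small helper that maps a production date to the reporting year used in the tally. This is an illustrative Python sketch of the rules as summarized here; the treatment of late-1919 dates, where the fiscal and calendar schemes meet, is a simplifying assumption.

```python
from datetime import date

def ford_reporting_year(d: date) -> int:
    """Map a production date to the reporting year used in the tally above."""
    # From January 1920 onward, figures are for the calendar year.
    if d >= date(1920, 1, 1):
        return d.year
    # Fiscal years 1909-1913 ran October 1 to September 30,
    # numbered by the year in which they ended.
    if d < date(1913, 10, 1):
        return d.year + 1 if d.month >= 10 else d.year
    # Fiscal 1914 was a short year: October 1, 1913, through July 31, 1914.
    if d <= date(1914, 7, 31):
        return 1914
    # From August 1914 onward, fiscal years ran August 1 through July 31.
    return d.year + 1 if d.month >= 8 else d.year

# A car built in November 1912 falls in fiscal 1913;
# one built in September 1915 falls in fiscal 1916.
assert ford_reporting_year(date(1912, 11, 15)) == 1913
assert ford_reporting_year(date(1915, 9, 1)) == 1916
```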
Recycling
Henry Ford used wood scraps from the production of Model Ts to make charcoal briquettes. Originally named Ford Charcoal, the name was changed to Kingsford Charcoal after the Iron Mountain Ford Plant closed in 1951 and the Kingsford Chemical Company was formed and continued the wood distillation process. E. G. Kingsford, Ford's cousin by marriage, brokered the selection of the new sawmill and wood distillation plant site. Lumber for production of the Model T came from the same location: the Iron Mountain Ford plant, built in 1920, which incorporated a sawmill where lumber from Ford-purchased land in the Upper Peninsula of Michigan was cut and dried. Scrap wood was distilled at the Iron Mountain plant for its wood chemicals, including methanol (wood alcohol), with the end by-product being lump charcoal. This lump charcoal was modified and pressed into briquettes and mass-marketed by Ford.
First global car
The Ford Model T was the first automobile built by multiple countries simultaneously, since they were being produced in Walkerville, Canada, and in Trafford Park, Greater Manchester, England, starting in 1911. After World War I ended in 1918, they were assembled in Germany, Argentina, France, Spain, Denmark, Norway, Belgium, Brazil, Mexico, Australia and Japan. Furthermore, exports from the American factories reached 303,000 in 1925. The heavy losses of horses during World War I made the Model T attractive as a new power source for European farmers. They used the Model T to pull plows, tow wagons, and power farm machinery. It enabled them to transport their products to markets more efficiently.
The Aeroford was an English automobile manufactured in Bayswater, London, from 1920 to 1925. It was a Model T with a distinct hood and grille to make it appear to be a totally different design, in what was later called badge engineering. The Aeroford sold from £288 in 1920, dropping to £168–214 by 1925. It was available as a two-seater, four-seater, or coupé.
Advertising and marketing
Ford created a massive publicity machine in Detroit to ensure every newspaper carried stories and advertisements about the new product. Ford's network of local dealers made the car ubiquitous in virtually every city in North America. A large part of the success of Ford's Model T stems from the innovative strategy which introduced a large network of sales hubs making it easy to purchase the car. As independent dealers, the franchisees grew rich and publicized not just the Ford but the very concept of automobiling; local motor clubs sprang up to help new drivers and to explore the countryside. Ford was always eager to sell to farmers, who looked on the vehicle as a commercial device to help their business. Sales skyrocketed – several years posted around 100 percent gains on the previous year.
24 Hours of Le Mans
Parisian Ford dealer Charles Montier and his brother-in-law Albert Ouriou entered a heavily modified version of the Model T (the "Montier Special") in the first three 24 Hours of Le Mans. They finished 14th in the inaugural 1923 race.
Car clubs
Today, four main clubs exist to support the preservation and restoration of these cars: the Model T Ford Club International, the Model T Ford Club of America and the combined clubs of Australia. With many chapters of clubs around the world, the Model T Ford Club of Victoria has a membership with a considerable number of uniquely Australian cars. (Australia produced its own car bodies, and therefore many differences occurred between the Australian bodied tourers and the US/Canadian cars.) In the UK, the Model T Ford Register of Great Britain celebrated its 50th anniversary in 2010. Many steel Model T parts are still manufactured today, and even fiberglass replicas of their distinctive bodies are produced, which are popular for T-bucket style hot rods (as immortalized in the Jan and Dean surf music song "Bucket T", which was later recorded by The Who). In 1949, more than twenty years after the end of production, 200,000 Model Ts were registered in the United States. In 2008, it was estimated that about 50,000 to 60,000 Ford Model Ts remain roadworthy.
Gallery
| Technology | Specific automobiles | null |
156964 | https://en.wikipedia.org/wiki/MicroRNA | MicroRNA | Micro ribonucleic acid (microRNA, miRNA, μRNA) are small, single-stranded, non-coding RNA molecules containing 21–23 nucleotides. Found in plants, animals, and even some viruses, miRNAs are involved in RNA silencing and post-transcriptional regulation of gene expression. miRNAs base-pair to complementary sequences in messenger RNA (mRNA) molecules, then silence said mRNA molecules by one or more of the following processes:
Cleaving the mRNA strand into two pieces.
Destabilizing the mRNA by shortening its poly(A) tail.
Reducing translation of the mRNA into proteins.
In cells of humans and other animals, miRNAs primarily act by destabilizing the mRNA.
miRNAs resemble the small interfering RNAs (siRNAs) of the RNA interference (RNAi) pathway, except miRNAs derive from regions of RNA transcripts that fold back on themselves to form short hairpins, whereas siRNAs derive from longer regions of double-stranded RNA. The human genome may encode over 1,900 miRNAs. However, only about 500 human miRNAs represent bona fide miRNAs in the manually curated miRNA gene database MirGeneDB.
miRNAs are abundant in many mammalian cell types. They appear to target about 60% of the genes of humans and other mammals. Many miRNAs are evolutionarily conserved, which implies that they have important biological functions. For example, 90 families of miRNAs have been conserved since at least the common ancestor of mammals and fish, and most of these conserved miRNAs have important functions, as shown by studies in which genes for one or more members of a family have been knocked out in mice.
In 2024, American scientists Victor Ambros and Gary Ruvkun were awarded the Nobel Prize in Physiology or Medicine for their work on the discovery of miRNA and its role in post-transcriptional gene regulation.
History
The first miRNA was discovered in the early 1990s. However, miRNAs were not recognized as a distinct class of biological regulators until the early 2000s. Research revealed different sets of miRNAs expressed in different cell types and tissues and multiple roles for miRNAs in plant and animal development and in many other biological processes. Aberrant miRNA expression is implicated in disease states. MiRNA-based therapies are under investigation.
The first miRNA was discovered in 1993 by a group led by Victor Ambros and including Lee and Feinbaum. However, additional insight into its mode of action required simultaneously published work by Gary Ruvkun's team, including Wightman and Ha. These groups published back-to-back papers on the lin-4 gene, which was known to control the timing of C. elegans larval development by repressing the lin-14 gene. When Lee et al. isolated the lin-4 miRNA, they found that instead of producing an mRNA encoding a protein, it produced short non-coding RNAs, one of which was a ~22-nucleotide RNA that contained sequences partially complementary to multiple sequences in the 3' UTR of the lin-14 mRNA. This complementarity was proposed to inhibit the translation of the lin-14 mRNA into the LIN-14 protein. At the time, the lin-4 small RNA was thought to be a nematode idiosyncrasy.
In 2000, a second small RNA was characterized: let-7 RNA, which represses lin-41 to promote a later developmental transition in C. elegans. The let-7 RNA was found to be conserved in many species, leading to the suggestion that let-7 RNA and additional "small temporal RNAs" might regulate the timing of development in diverse animals, including humans.
A year later, the lin-4 and let-7 RNAs were found to be part of a large class of small RNAs present in C. elegans, Drosophila and human cells. The many RNAs of this class resembled the lin-4 and let-7 RNAs, except their expression patterns were usually inconsistent with a role in regulating the timing of development. This suggested that most might function in other types of regulatory pathways. At this point, researchers started using the term "microRNA" to refer to this class of small regulatory RNAs.
The first human disease associated with deregulation of miRNAs was chronic lymphocytic leukemia. In this disorder, the miRNAs have a dual role working as both tumor suppressors and oncogenes.
Nomenclature
Under a standard nomenclature system, names are assigned to experimentally confirmed miRNAs before publication. The prefix "miR" is followed by a dash and a number, the latter often indicating order of naming. For example, miR-124 was named and likely discovered prior to miR-456. A capitalized "miR-" refers to the mature form of the miRNA, while the uncapitalized "mir-" refers to the pre-miRNA and the pri-miRNA. The genes encoding miRNAs are also named using the same three-letter prefix according to the conventions of the organism's gene nomenclature. For example, the official miRNA gene names in some organisms are mir-1 in C. elegans and Drosophila, Mir1 in Rattus norvegicus, and MIR25 in human.
miRNAs with nearly identical sequences except for one or two nucleotides are annotated with an additional lower case letter. For example, miR-124a is closely related to miR-124b.
Pre-miRNAs, pri-miRNAs and genes that lead to 100% identical mature miRNAs but that are located at different places in the genome are indicated with an additional dash-number suffix. For example, the pre-miRNAs mir-194-1 and mir-194-2 lead to an identical mature miRNA (miR-194) but are from genes located in different genome regions.
Species of origin is designated with a three-letter prefix, e.g., hsa-miR-124 is a human (Homo sapiens) miRNA and oar-miR-124 is a sheep (Ovis aries) miRNA. Other common prefixes include "v" for viral (miRNA encoded by a viral genome) and "d" for Drosophila miRNA (a fruit fly commonly studied in genetic research).
When two mature microRNAs originate from opposite arms of the same pre-miRNA and are found in roughly similar amounts, they are denoted with a -3p or -5p suffix. (In the past, this distinction was also made with "s" (sense) and "as" (antisense)). However, the mature microRNA found from one arm of the hairpin is usually much more abundant than that found from the other arm, in which case, an asterisk following the name indicates the mature species found at low levels from the opposite arm of a hairpin. For example, miR-124 and miR-124* share a pre-miRNA hairpin, but much more miR-124 is found in the cell.
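The naming conventions above lend themselves to a simple pattern-based check. The sketch below is a rough, illustrative parser covering only the lowercase miR/mir convention with an optional species prefix, paralog letter, locus suffix, and arm or asterisk designation; the regular expression and field names are assumptions rather than an official grammar, and organism-specific gene names such as Mir1 or MIR25 are not handled.

```python
import re

# Illustrative pattern for common miRNA names following the conventions above.
MIRNA_NAME = re.compile(
    r"^(?:(?P<species>[a-z]{3,4})-)?"       # optional species prefix, e.g. hsa, oar
    r"(?P<form>miR|mir)-"                   # miR = mature form, mir = precursor/gene
    r"(?P<number>\d+)"                      # numeric identifier (roughly order of naming)
    r"(?P<paralog>[a-z])?"                  # letter for near-identical sequences (124a, 124b)
    r"(?:-(?P<locus>\d+))?"                 # dash-number for identical miRNAs from distinct loci
    r"(?:-(?P<arm>[35]p)|(?P<minor>\*))?$"  # -5p/-3p arm suffix, or * for the minor species
)

for name in ["hsa-miR-124", "oar-miR-124", "mir-194-2", "miR-124a", "miR-124*", "miR-21-5p"]:
    m = MIRNA_NAME.match(name)
    print(name, m.groupdict() if m else "no match")
```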
Targets
Plant miRNAs usually have near-perfect pairing with their mRNA targets, which induces gene repression through cleavage of the target transcripts. In contrast, animal miRNAs are able to recognize their target mRNAs by using as few as 6–8 nucleotides (the seed region) at the 5' end of the miRNA, which is not enough pairing to induce cleavage of the target mRNAs. Combinatorial regulation is a feature of miRNA regulation in animals. A given miRNA may have hundreds of different mRNA targets, and a given target might be regulated by multiple miRNAs.
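To make the idea of seed pairing concrete, the following sketch scans an mRNA 3' UTR fragment for perfect matches to a 6-nucleotide seed (miRNA positions 2–7), one common definition of an animal seed site. The sequences and the exact seed window are illustrative assumptions; real target-prediction tools weigh additional features such as site context and conservation.

```python
# Toy scan for animal-style "seed" matches: positions 2-7 of the miRNA
# (numbered from its 5' end) pairing perfectly with a site in the 3' UTR.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna_5to3: str, utr_5to3: str, seed=(2, 7)) -> list[int]:
    """Return 0-based start positions in the UTR whose sequence is the
    reverse complement of the miRNA seed (positions seed[0]..seed[1])."""
    seed_seq = mirna_5to3[seed[0] - 1 : seed[1]]                  # nucleotides 2-7
    target = "".join(COMPLEMENT[nt] for nt in reversed(seed_seq)) # site read 5'->3'
    return [i for i in range(len(utr_5to3) - len(target) + 1)
            if utr_5to3[i : i + len(target)] == target]

# Made-up 22-nt miRNA and a short UTR fragment containing one seed match.
mir = "UGGAAUGUAAAGAAGUAUGUAU"
utr = "GGAACAUUCCAAAG"
print(seed_sites(mir, utr))   # [4]
```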
Estimates of the average number of unique messenger RNAs that are targets for repression by a typical miRNA vary, depending on the estimation method, but multiple approaches show that mammalian miRNAs can have many unique targets. For example, an analysis of the miRNAs highly conserved in vertebrates shows that each has, on average, roughly 400 conserved targets. Likewise, experiments show that a single miRNA species can reduce the stability of hundreds of unique messenger RNAs. Other experiments show that a single miRNA species may repress the production of hundreds of proteins, but that this repression often is relatively mild (much less than 2-fold).
Biogenesis
As many as 40% of miRNA genes may lie in the introns or even exons of other genes. These are usually, though not exclusively, found in a sense orientation, and thus usually are regulated together with their host genes.
The DNA template is not the final word on mature miRNA production: 6% of human miRNAs show RNA editing (IsomiRs), the site-specific modification of RNA sequences to yield products different from those encoded by their DNA. This increases the diversity and scope of miRNA action beyond that implicated from the genome alone.
Transcription
miRNA genes are usually transcribed by RNA polymerase II (Pol II). The polymerase often binds to a promoter found near the DNA sequence encoding what will become the hairpin loop of the pre-miRNA. The resulting transcript is capped with a specially modified nucleotide at the 5' end, polyadenylated with multiple adenosines (a poly(A) tail), and spliced. Animal miRNAs are initially transcribed as part of one arm of an ~80 nucleotide RNA stem-loop that in turn forms part of a several hundred nucleotide-long miRNA precursor termed a pri-miRNA. When a stem-loop precursor is found in the 3' UTR, a transcript may serve as both a pri-miRNA and an mRNA. RNA polymerase III (Pol III) transcribes some miRNAs, especially those with upstream Alu sequences, transfer RNAs (tRNAs), and mammalian wide interspersed repeat (MWIR) promoter units.
Nuclear processing
A single pri-miRNA may contain from one to six miRNA precursors. These hairpin loop structures are composed of about 70 nucleotides each. Each hairpin is flanked by sequences necessary for efficient processing.
The double-stranded RNA (dsRNA) structure of the hairpins in a pri-miRNA is recognized by a nuclear protein known as DiGeorge Syndrome Critical Region 8 (DGCR8 or "Pasha" in invertebrates), named for its association with DiGeorge Syndrome. DGCR8 associates with the enzyme Drosha, a protein that cuts RNA, to form the Microprocessor complex. In this complex, DGCR8 orients the catalytic RNase III domain of Drosha to liberate hairpins from pri-miRNAs by cleaving RNA about eleven nucleotides from the hairpin base (one helical dsRNA turn into the stem). The resulting product has a two-nucleotide overhang at its 3' end, with 3' hydroxyl and 5' phosphate groups, and is termed a pre-miRNA (precursor-miRNA). Sequence motifs downstream of the pre-miRNA that are important for efficient processing have been identified.
Pre-miRNAs that are spliced directly out of introns, bypassing the Microprocessor complex, are known as "mirtrons." Mirtrons have been found in Drosophila, C. elegans, and mammals.
As many as 16% of pre-miRNAs may be altered through nuclear RNA editing. Most commonly, enzymes known as adenosine deaminases acting on RNA (ADARs) catalyze adenosine to inosine (A to I) transitions. RNA editing can halt nuclear processing (for example, of pri-miR-142, leading to degradation by the ribonuclease Tudor-SN) and alter downstream processes including cytoplasmic miRNA processing and target specificity (e.g., by changing the seed region of miR-376 in the central nervous system).
Nuclear export
Pre-miRNA hairpins are exported from the nucleus in a process involving the nucleocytoplasmic shuttler Exportin-5. This protein, a member of the karyopherin family, recognizes a two-nucleotide overhang left by the RNase III enzyme Drosha at the 3' end of the pre-miRNA hairpin. Exportin-5-mediated transport to the cytoplasm is energy-dependent, using guanosine triphosphate (GTP) bound to the Ran protein.
Cytoplasmic processing
In the cytoplasm, the pre-miRNA hairpin is cleaved by the RNase III enzyme Dicer. This endoribonuclease interacts with 5' and 3' ends of the hairpin and cuts away the loop joining the 3' and 5' arms, yielding an imperfect miRNA:miRNA* duplex about 22 nucleotides in length. Overall hairpin length and loop size influence the efficiency of Dicer processing. The imperfect nature of the miRNA:miRNA* pairing also affects cleavage. Some of the G-rich pre-miRNAs can potentially adopt the G-quadruplex structure as an alternative to the canonical stem-loop structure. For example, human pre-miRNA 92b adopts a G-quadruplex structure which is resistant to the Dicer mediated cleavage in the cytoplasm. Although either strand of the duplex may potentially act as a functional miRNA, only one strand is usually incorporated into the RNA-induced silencing complex (RISC) where the miRNA and its mRNA target interact.
While the majority of miRNAs are located within the cell, some miRNAs, commonly known as circulating miRNAs or extracellular miRNAs, have also been found in extracellular environment, including various biological fluids and cell culture media.
Biogenesis in plants
miRNA biogenesis in plants differs from animal biogenesis mainly in the steps of nuclear processing and export. Instead of being cleaved by two different enzymes, once inside and once outside the nucleus, both cleavages of the plant miRNA are performed by a Dicer homolog, called Dicer-like1 (DL1). DL1 is expressed only in the nucleus of plant cells, which indicates that both reactions take place inside the nucleus. Before plant miRNA:miRNA* duplexes are transported out of the nucleus, their 3' overhangs are methylated by an RNA methyltransferase protein called Hua-Enhancer1 (HEN1). The duplex is then transported out of the nucleus to the cytoplasm by a protein called Hasty (HST), an Exportin 5 homolog, where it disassembles and the mature miRNA is incorporated into the RISC.
RNA-induced silencing complex
The mature miRNA is part of an active RNA-induced silencing complex (RISC) containing Dicer and many associated proteins. RISC is also known as a microRNA ribonucleoprotein complex (miRNP); a RISC with an incorporated miRNA is sometimes referred to as a "miRISC."
Dicer processing of the pre-miRNA is thought to be coupled with unwinding of the duplex. Generally, only one strand is incorporated into the miRISC, selected on the basis of its thermodynamic instability and weaker base-pairing on the 5' end relative to the other strand. The position of the stem-loop may also influence strand choice. The other strand, called the passenger strand due to its lower levels in the steady state, is denoted with an asterisk (*) and is normally degraded. In some cases, both strands of the duplex are viable and become functional miRNA that target different mRNA populations.
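A crude way to picture this asymmetry rule is to score base-pairing strength at the two ends of the duplex and keep the strand whose 5' terminus sits at the weaker-paired end. The sketch below is purely illustrative: the pairing "strengths" are arbitrary stand-ins for real thermodynamic parameters, and the sequences are made up.

```python
# Crude illustration of guide-strand selection by 5'-end pairing stability.
PAIR_STRENGTH = {frozenset("GC"): 3, frozenset("AU"): 2, frozenset("GU"): 1}

def pick_guide(strand_a: str, strand_b_aligned: str, n: int = 4) -> str:
    """strand_a is written 5'->3'; strand_b_aligned is the partner written 3'->5'
    so that position i of each string forms a base pair."""
    pairs = [frozenset(p) for p in zip(strand_a, strand_b_aligned)]
    stability = lambda chunk: sum(PAIR_STRENGTH.get(p, 0) for p in chunk)
    left, right = stability(pairs[:n]), stability(pairs[-n:])
    # The left end holds strand_a's 5' terminus; the right end holds strand_b's.
    # The strand whose 5' end sits at the less stable end is kept as the guide.
    return strand_a if left <= right else strand_b_aligned[::-1]

# Hypothetical duplex: A/U-rich at strand_a's 5' end, G/C-rich at the other end.
a = "UAUAACGGCUAGCUAGGCCGCG"
b = "AUAUUGCCGAUCGAUCCGGCGC"      # written 3'->5', pairing position-by-position with a
print(pick_guide(a, b))           # strand a is predicted to become the guide
```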
Members of the Argonaute (Ago) protein family are central to RISC function. Argonautes are needed for miRNA-induced silencing and contain two conserved RNA binding domains: a PAZ domain that can bind the single stranded 3' end of the mature miRNA and a PIWI domain that structurally resembles ribonuclease-H and functions to interact with the 5' end of the guide strand. They bind the mature miRNA and orient it for interaction with a target mRNA. Some argonautes, for example human Ago2, cleave target transcripts directly; argonautes may also recruit additional proteins to achieve translational repression. The human genome encodes eight argonaute proteins divided by sequence similarities into two families: AGO (with four members present in all mammalian cells and called E1F2C/hAgo in humans), and PIWI (found in the germline and hematopoietic stem cells).
Additional RISC components include TRBP [human immunodeficiency virus (HIV) transactivating response RNA (TAR) binding protein], PACT (protein activator of the interferon-induced protein kinase), the SMN complex, fragile X mental retardation protein (FMRP), Tudor staphylococcal nuclease-domain-containing protein (Tudor-SN), the putative DNA helicase MOV10, and the RNA recognition motif containing protein TNRC6B.
Mode of silencing and regulatory loops
Gene silencing may occur either via mRNA degradation or by preventing mRNA from being translated. For example, miR16 contains a sequence complementary to the AU-rich element found in the 3'UTR of many unstable mRNAs, such as TNF alpha or GM-CSF. It has been demonstrated that, given complete complementarity between the miRNA and target mRNA sequence, Ago2 can cleave the mRNA and lead to direct mRNA degradation. In the absence of complementarity, silencing is achieved by preventing translation. The relation of a miRNA and its target mRNA can be based on simple negative regulation of the target mRNA, but a common scenario is the use of a "coherent feed-forward loop", a "mutual negative feedback loop" (also termed a double negative loop), or a "positive feedback/feed-forward loop". Some miRNAs work as buffers of random gene expression changes arising from stochastic events in transcription, translation and protein stability. Such regulation is typically achieved by virtue of negative feedback loops or incoherent feed-forward loops uncoupling protein output from mRNA transcription.
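The buffering role of an incoherent feed-forward loop can be illustrated with a toy model in which an activator drives both a target mRNA and a miRNA that promotes that mRNA's decay. All rate constants below are arbitrary assumptions chosen only to show the qualitative effect: doubling the activator raises steady-state protein output by far less than twofold.

```python
# Toy incoherent feed-forward loop: an activator drives both a target mRNA and
# a miRNA that represses it, buffering protein output against transcriptional changes.
def simulate(activator, steps=5000, dt=0.01,
             k_m=1.0, k_r=1.0, k_p=1.0,
             d_m=0.5, d_r=0.5, d_p=0.5, k_rep=2.0):
    m = r = p = 0.0                                     # mRNA, miRNA, protein levels
    for _ in range(steps):                              # simple Euler integration
        dm = k_m * activator - (d_m + k_rep * r) * m    # miRNA accelerates mRNA decay
        dr = k_r * activator - d_r * r
        dp = k_p * m - d_p * p
        m, r, p = m + dm * dt, r + dr * dt, p + dp * dt
    return m, r, p

# Doubling the activator less than doubles the steady-state protein level,
# because the miRNA arm rises in parallel and dampens the response.
print(simulate(activator=1.0)[2])   # ~0.44
print(simulate(activator=2.0)[2])   # ~0.47
```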
Turnover
Turnover of mature miRNA is needed for rapid changes in miRNA expression profiles. During miRNA maturation in the cytoplasm, uptake by the Argonaute protein is thought to stabilize the guide strand, while the opposite (* or "passenger") strand is preferentially destroyed. In what has been called a "Use it or lose it" strategy, Argonaute may preferentially retain miRNAs with many targets over miRNAs with few or no targets, leading to degradation of the non-targeting molecules.
Decay of mature miRNAs in Caenorhabditis elegans is mediated by the 5'-to-3' exoribonuclease XRN2, also known as Rat1p. In plants, SDN (small RNA degrading nuclease) family members degrade miRNAs in the opposite (3'-to-5') direction. Similar enzymes are encoded in animal genomes, but their roles have not been described.
Several miRNA modifications affect miRNA stability. As indicated by work in the model organism Arabidopsis thaliana (thale cress), mature plant miRNAs appear to be stabilized by the addition of methyl moieties at the 3' end. The 2'-O-conjugated methyl groups block the addition of uracil (U) residues by uridyltransferase enzymes, a modification that may be associated with miRNA degradation. However, uridylation may also protect some miRNAs; the consequences of this modification are incompletely understood. Uridylation of some animal miRNAs has been reported. Both plant and animal miRNAs may be altered by addition of adenine (A) residues to the 3' end of the miRNA. An extra A added to the end of mammalian miR-122, a liver-enriched miRNA important in hepatitis C, stabilizes the molecule, and plant miRNAs ending with an adenine residue have slower decay rates.
Cellular functions
The function of miRNAs appears to be in gene regulation. For that purpose, a miRNA is complementary to a part of one or more messenger RNAs (mRNAs). Animal miRNAs are usually complementary to a site in the 3' UTR whereas plant miRNAs are usually complementary to coding regions of mRNAs. Perfect or near perfect base pairing with the target RNA promotes cleavage of the RNA. This is the primary mode of plant miRNAs. In animals the match-ups are imperfect.
For partially complementary microRNAs to recognise their targets, nucleotides 2–7 of the miRNA (its 'seed region') must be perfectly complementary. Animal miRNAs inhibit protein translation of the target mRNA (this is present but less common in plants). Partially complementary microRNAs can also speed up deadenylation, causing mRNAs to be degraded sooner. While degradation of miRNA-targeted mRNA is well documented, whether or not translational repression is accomplished through mRNA degradation, translational inhibition, or a combination of the two is hotly debated. Recent work on miR-430 in zebrafish, as well as on bantam-miRNA and miR-9 in Drosophila cultured cells, shows that translational repression is caused by the disruption of translation initiation, independent of mRNA deadenylation.
miRNAs occasionally also cause histone modification and DNA methylation of promoter sites, which affects the expression of target genes.
Nine mechanisms of miRNA action are described and assembled in a unified mathematical model:
Cap-40S initiation inhibition;
60S Ribosomal unit joining inhibition;
Elongation inhibition;
Ribosome drop-off (premature termination);
Co-translational nascent protein degradation;
Sequestration in P-bodies;
mRNA decay (destabilisation);
mRNA cleavage;
Transcriptional inhibition through microRNA-mediated chromatin reorganization followed by gene silencing.
It is often impossible to discern these mechanisms using experimental data about stationary reaction rates. Nevertheless, they are differentiated in dynamics and have different kinetic signatures.
Unlike plant microRNAs, the animal microRNAs target diverse genes. However, genes involved in functions common to all cells, such as gene expression, have relatively fewer microRNA target sites and seem to be under selection to avoid targeting by microRNAs. There is a strong correlation between ITPR gene regulations and mir-92 and mir-19.
dsRNA can also activate gene expression, a mechanism that has been termed "small RNA-induced gene activation" or RNAa. dsRNAs targeting gene promoters can induce potent transcriptional activation of associated genes. This was demonstrated in human cells using synthetic dsRNAs termed small activating RNAs (saRNAs), but has also been demonstrated for endogenous microRNA.
Interactions between microRNAs and complementary sequences on genes and even pseudogenes that share sequence homology are thought to be a back channel of communication regulating expression levels between paralogous genes (genes having a similar structure indicating divergence from a common ancestral gene). Given the name "competing endogenous RNAs" (ceRNAs), these microRNAs bind to "microRNA response elements" on genes and pseudogenes and may provide another explanation for the persistence of non-coding DNA.
miRNAs are also found as extracellular circulating miRNAs. Circulating miRNAs are released into body fluids including blood and cerebrospinal fluid and have the potential to be available as biomarkers in a number of diseases. Some research shows that the mRNA cargo of exosomes may have a role in implantation; it can hinder or support the adhesion between trophoblast and endometrium by down-regulating or up-regulating the expression of genes involved in adhesion/invasion.
Moreover, miRNAs such as miR-183/96/182 seem to play a key role in circadian rhythm.
Evolution
miRNAs are well conserved in both plants and animals, and are thought to be a vital and evolutionarily ancient component of gene regulation. While core components of the microRNA pathway are conserved between plants and animals, miRNA repertoires in the two kingdoms appear to have emerged independently with different primary modes of action.
microRNAs are useful phylogenetic markers because of their apparently low rate of evolution. microRNAs' origin as a regulatory mechanism developed from previous RNAi machinery that was initially used as a defense against exogenous genetic material such as viruses. Their origin may have permitted the development of morphological innovation, and by making gene expression more specific and 'fine-tunable', permitted the genesis of complex organs and perhaps, ultimately, complex life. Rapid bursts of morphological innovation are generally associated with a high rate of microRNA accumulation.
New microRNAs are created in multiple ways. Novel microRNAs can originate from the random formation of hairpins in "non-coding" sections of DNA (i.e. introns or intergene regions), but also by the duplication and modification of existing microRNAs. microRNAs can also form from inverted duplications of protein-coding sequences, which allows for the creation of a foldback hairpin structure. The rate of evolution (i.e. nucleotide substitution) in recently originated microRNAs is comparable to that elsewhere in the non-coding DNA, implying evolution by neutral drift; however, older microRNAs have a much lower rate of change (often less than one substitution per hundred million years), suggesting that once a microRNA gains a function, it undergoes purifying selection. Individual regions within an miRNA gene face different evolutionary pressures, where regions that are vital for processing and function have higher levels of conservation. At this point, a microRNA is rarely lost from an animal's genome, although newer microRNAs (thus presumably non-functional) are frequently lost. In Arabidopsis thaliana, the net flux of miRNA genes has been predicted to be between 1.2 and 3.3 genes per million years. This makes them a valuable phylogenetic marker, and they are being looked upon as a possible solution to outstanding phylogenetic problems such as the relationships of arthropods. On the other hand, in multiple cases microRNAs correlate poorly with phylogeny, and it is possible that their phylogenetic concordance largely reflects a limited sampling of microRNAs.
microRNAs feature in the genomes of most eukaryotic organisms, from the brown algae to the animals. However, the difference in how these microRNAs function and the way they are processed suggests that microRNAs arose independently in plants and animals.
Focusing on the animals, the genome of Mnemiopsis leidyi appears to lack recognizable microRNAs, as well as the nuclear proteins Drosha and Pasha, which are critical to canonical microRNA biogenesis. It is the only animal thus far reported to be missing Drosha. MicroRNAs play a vital role in the regulation of gene expression in all non-ctenophore animals investigated thus far except for Trichoplax adhaerens, the first known member of the phylum Placozoa.
Across all species, in excess of 5000 different miRNAs had been identified by March 2010. Whilst short RNA sequences (50 – hundreds of base pairs) of a broadly comparable function occur in bacteria, bacteria lack true microRNAs.
Experimental detection and manipulation
As researchers focused on miRNA expression in physiological and pathological processes, various technical variables related to microRNA isolation emerged. The stability of stored miRNA samples has been questioned. microRNAs degrade much more easily than mRNAs, partly due to their length, but also because of ubiquitously present RNases. This makes it necessary to cool samples on ice and to use RNase-free equipment.
microRNA expression can be quantified in a two-step polymerase chain reaction process of modified RT-PCR followed by quantitative PCR. Variations of this method achieve absolute or relative quantification. miRNAs can also be hybridized to microarrays, slides or chips with probes to hundreds or thousands of miRNA targets, so that relative levels of miRNAs can be determined in different samples. microRNAs can be both discovered and profiled by high-throughput sequencing methods (microRNA sequencing). The activity of an miRNA can be experimentally inhibited using a locked nucleic acid (LNA) oligo, a Morpholino oligo or a 2'-O-methyl RNA oligo. A specific miRNA can be silenced by a complementary antagomir. microRNA maturation can be inhibited at several points by steric-blocking oligos. The miRNA target site of an mRNA transcript can also be blocked by a steric-blocking oligo. For the "in situ" detection of miRNA, LNA or Morpholino probes can be used. The locked conformation of LNA results in enhanced hybridization properties and increases sensitivity and selectivity, making it ideal for detection of short miRNA.
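One common way to turn such two-step RT-qPCR data into relative expression values is the 2^(−ΔΔCt) method; the sketch below assumes that method and uses invented Ct values, normalising the miRNA of interest to a reference small RNA:

```python
# 2^(-delta-delta-Ct) relative quantification (hypothetical Ct values for illustration).
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression of a miRNA in a sample versus a control condition."""
    delta_sample = ct_target_sample - ct_ref_sample      # delta-Ct in the sample
    delta_control = ct_target_control - ct_ref_control   # delta-Ct in the control
    ddct = delta_sample - delta_control                   # delta-delta-Ct
    return 2 ** (-ddct)

# The miRNA amplifies two cycles earlier (relative to the reference) in the sample,
# corresponding to roughly a four-fold higher level.
print(fold_change(ct_target_sample=24.0, ct_ref_sample=18.0,
                  ct_target_control=26.0, ct_ref_control=18.0))  # -> 4.0
```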
High-throughput quantification of miRNAs is error prone, because of the larger variance (compared with mRNAs) that comes with methodological problems. mRNA expression is therefore often analyzed in parallel to check for miRNA effects on target levels. Databases can be used to pair mRNA and miRNA data and to predict miRNA targets based on their base sequence. While this is usually done after miRNAs of interest have been detected (e.g. because of high expression levels), analysis tools that integrate mRNA and miRNA expression information have been proposed.
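As a rough illustration of such integration (all expression values, gene names and the target list are invented), one can ask whether the predicted targets of a miRNA are anti-correlated with it across samples:

```python
# Hypothetical pairing of miRNA and mRNA expression (statistics.correlation needs Python 3.10+).
from statistics import correlation

mirna_levels = [1.0, 2.0, 4.0, 8.0, 6.0]           # one miRNA measured in five samples
mrna_levels = {
    "GENE_A": [9.0, 7.5, 4.0, 1.0, 2.0],           # predicted target that drops as the miRNA rises
    "GENE_B": [3.0, 3.1, 2.9, 3.2, 3.0],           # predicted target that does not respond
}
predicted_targets = ["GENE_A", "GENE_B"]           # in practice taken from a sequence-based target database

for gene in predicted_targets:
    r = correlation(mirna_levels, mrna_levels[gene])   # Pearson correlation
    verdict = "consistent with repression" if r < -0.5 else "no clear relationship"
    print(f"{gene}: r = {r:.2f} ({verdict})")
```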
Human and animal diseases
Just as miRNA is involved in the normal functioning of eukaryotic cells, so has dysregulation of miRNA been associated with disease. A manually curated, publicly available database, miR2Disease, documents known relationships between miRNA dysregulation and human disease.
Inherited diseases
A mutation in the seed region of miR-96 causes hereditary progressive hearing loss.
A mutation in the seed region of miR-184 causes hereditary keratoconus with anterior polar cataract.
Deletion of the miR-17~92 cluster causes skeletal and growth defects.
Cancer
The first human disease known to be associated with miRNA deregulation was chronic lymphocytic leukemia. Many other miRNAs also have links with cancer and accordingly are sometimes referred to as "oncomirs". In malignant B cells miRNAs participate in pathways fundamental to B cell development like B-cell receptor (BCR) signalling, B-cell migration/adhesion, cell-cell interactions in immune niches and the production and class-switching of immunoglobulins. MiRNAs influence B cell maturation, generation of pre-, marginal zone, follicular, B1, plasma and memory B cells.
Another role for miRNA in cancers is to use their expression level for prognosis. In NSCLC samples, low miR-324a levels may serve as an indicator of poor survival. Either high miR-185 or low miR-133b levels may correlate with metastasis and poor survival in colorectal cancer.
Furthermore, specific miRNAs may be associated with certain histological subtypes of colorectal cancer. For instance, expression levels of miR-205 and miR-373 have been shown to be increased in mucinous colorectal cancers and mucin-producing ulcerative colitis-associated colon cancers, but not in sporadic colonic adenocarcinomas that lack mucinous components. In vitro studies suggested that miR-205 and miR-373 may functionally induce different features of mucinous-associated neoplastic progression in intestinal epithelial cells.
Hepatocellular carcinoma cell proliferation may arise from miR-21 interaction with MAP2K3, a tumor suppressor gene. Optimal treatment for cancer involves accurately identifying patients for risk-stratified therapy. Those with a rapid response to initial treatment may benefit from truncated treatment regimens, showing the value of accurate disease response measures. Cell-free circulating miRNAs (cimiRNAs) are highly stable in blood, are overexpressed in cancer and are quantifiable within the diagnostic laboratory. In classical Hodgkin lymphoma, plasma miR-21, miR-494, and miR-1973 are promising disease response biomarkers. Circulating miRNAs have the potential to assist clinical decision making and aid interpretation of positron emission tomography combined with computerized tomography. Such assays can be performed at each consultation to assess disease response and detect relapse.
MicroRNAs have the potential to be used as tools or targets for treatment of different cancers. The specific microRNA, miR-506 has been found to work as a tumor antagonist in several studies. A significant number of cervical cancer samples were found to have downregulated expression of miR-506. Additionally, miR-506 works to promote apoptosis of cervical cancer cells, through its direct target hedgehog pathway transcription factor, Gli3.
DNA repair and cancer
Many miRNAs can directly target and inhibit cell cycle genes to control cell proliferation. A new strategy for tumor treatment is to inhibit tumor cell proliferation by repairing the defective miRNA pathway in tumors.
Cancer is caused by the accumulation of mutations from either DNA damage or uncorrected errors in DNA replication. Defects in DNA repair cause the accumulation of mutations, which can lead to cancer. Several genes involved in DNA repair are regulated by microRNAs.
Germline mutations in DNA repair genes cause only 2–5% of colon cancer cases. However, altered expression of microRNAs, causing DNA repair deficiencies, is frequently associated with cancers and may be an important causal factor. Among 68 sporadic colon cancers with reduced expression of the DNA mismatch repair protein MLH1, most were found to be deficient due to epigenetic methylation of the CpG island of the MLH1 gene. However, up to 15% of MLH1 deficiencies in sporadic colon cancers appeared to be due to over-expression of the microRNA miR-155, which represses MLH1 expression.
In 29–66% of glioblastomas, DNA repair is deficient due to epigenetic methylation of the MGMT gene, which reduces protein expression of MGMT. However, for 28% of glioblastomas, the MGMT protein is deficient, but the MGMT promoter is not methylated. In glioblastomas without methylated MGMT promoters, the level of microRNA miR-181d is inversely correlated with protein expression of MGMT and the direct target of miR-181d is the MGMT mRNA 3'UTR (the three prime untranslated region of MGMT mRNA). Thus, in 28% of glioblastomas, increased expression of miR-181d and reduced expression of DNA repair enzyme MGMT may be a causal factor.
HMGA proteins (HMGA1a, HMGA1b and HMGA2) are implicated in cancer, and expression of these proteins is regulated by microRNAs. HMGA expression is almost undetectable in differentiated adult tissues, but is elevated in many cancers. HMGA proteins are polypeptides of ~100 amino acid residues characterized by a modular sequence organization. These proteins have three highly positively charged regions, termed AT hooks, that bind the minor groove of AT-rich DNA stretches in specific regions of DNA. Human neoplasias, including thyroid, prostatic, cervical, colorectal, pancreatic and ovarian carcinomas, show a strong increase of HMGA1a and HMGA1b proteins. Transgenic mice with HMGA1 targeted to lymphoid cells develop aggressive lymphoma, showing that high HMGA1 expression is associated with cancers and that HMGA1 can act as an oncogene. HMGA2 protein specifically targets the promoter of ERCC1, thus reducing expression of this DNA repair gene. ERCC1 protein expression was deficient in 100% of 47 evaluated colon cancers (though the extent to which HMGA2 was involved is not known).
Single-nucleotide polymorphisms (SNPs) can alter the binding of miRNAs to 3'UTRs, for example in the case of hsa-mir181a and hsa-mir181b binding to the CDON tumor suppressor gene.
Heart disease
The global role of miRNA function in the heart has been addressed by conditionally inhibiting miRNA maturation in the murine heart. This revealed that miRNAs play an essential role during its development. miRNA expression profiling studies demonstrate that expression levels of specific miRNAs change in diseased human hearts, pointing to their involvement in cardiomyopathies. Furthermore, animal studies on specific miRNAs identified distinct roles for miRNAs both during heart development and under pathological conditions, including the regulation of key factors important for cardiogenesis, the hypertrophic growth response and cardiac conductance. Another role for miRNA in cardiovascular diseases is to use their expression levels for diagnosis, prognosis or risk stratification. miRNAs in animal models have also been linked to cholesterol metabolism and regulation.
miRNA-712
Murine microRNA-712 is a potential biomarker (i.e. predictor) for atherosclerosis, a cardiovascular disease of the arterial wall associated with lipid retention and inflammation. Non-laminar blood flow also correlates with development of atherosclerosis as mechanosensors of endothelial cells respond to the shear force of disturbed flow (d-flow). A number of pro-atherogenic genes including matrix metalloproteinases (MMPs) are upregulated by d-flow, mediating pro-inflammatory and pro-angiogenic signals. These findings were observed in ligated carotid arteries of mice to mimic the effects of d-flow. Within 24 hours, pre-existing immature miR-712 formed mature miR-712, suggesting that miR-712 is flow-sensitive. Coinciding with these results, miR-712 is also upregulated in endothelial cells exposed to naturally occurring d-flow in the greater curvature of the aortic arch.
Origin
The pre-miRNA sequence of miR-712 is generated from the murine ribosomal RN45s gene at the internal transcribed spacer region 2 (ITS2). XRN1 is an exonuclease that degrades the ITS2 region during processing of RN45s. Reduction of XRN1 under d-flow conditions therefore leads to the accumulation of miR-712.
Mechanism
MiR-712 targets tissue inhibitor of metalloproteinases 3 (TIMP3). TIMPs normally regulate activity of matrix metalloproteinases (MMPs) which degrade the extracellular matrix (ECM). Arterial ECM is mainly composed of collagen and elastin fibers, providing the structural support and recoil properties of arteries. These fibers play a critical role in regulation of vascular inflammation and permeability, which are important in the development of atherosclerosis. Expressed by endothelial cells, TIMP3 is the only ECM-bound TIMP. A decrease in TIMP3 expression results in an increase of ECM degradation in the presence of d-flow. Consistent with these findings, inhibition of pre-miR712 increases expression of TIMP3 in cells, even when exposed to turbulent flow.
TIMP3 also decreases the expression of TNFα (a pro-inflammatory regulator) during turbulent flow. Activity of TNFα in turbulent flow was measured by the expression of TNFα-converting enzyme (TACE) in blood. TNFα decreased if miR-712 was inhibited or TIMP3 overexpressed, suggesting that miR-712 and TIMP3 regulate TACE activity in turbulent flow conditions.
Anti-miR-712 effectively suppresses d-flow-induced miR-712 expression and increases TIMP3 expression. Anti-miR-712 also inhibits vascular hyperpermeability, thereby significantly reducing atherosclerosis lesion development and immune cell infiltration.
Human homolog microRNA-205
The human homolog of miR-712 was found on the RN45s homolog gene, which maintains miRNAs similar to those of mice. MiR-205 of humans shares a similar sequence with miR-712 of mice and is conserved across most vertebrates. MiR-205 and miR-712 also share more than 50% of their cell signaling targets, including TIMP3.
When tested, d-flow decreased the expression of XRN1 in human endothelial cells as it did in mouse endothelial cells, indicating a potentially common role of XRN1 in humans.
Kidney disease
Targeted deletion of Dicer in the FoxD1-derived renal progenitor cells in a murine model resulted in a complex renal phenotype including expansion of nephron progenitors, fewer renin cells, smooth muscle arterioles, progressive mesangial loss and glomerular aneurysms. High throughput whole transcriptome profiling of the FoxD1-Dicer knockout mouse model revealed ectopic upregulation of pro-apoptotic gene, Bcl2L11 (Bim) and dysregulation of the p53 pathway with increase in p53 effector genes including Bax, Trp53inp1, Jun, Cdkn1a, Mmp2, and Arid3a. p53 protein levels remained unchanged, suggesting that FoxD1 stromal miRNAs directly repress p53-effector genes. Using a lineage tracing approach followed by Fluorescent-activated cell sorting, miRNA profiling of the FoxD1-derived cells not only comprehensively defined the transcriptional landscape of miRNAs that are critical for vascular development, but also identified key miRNAs that are likely to modulate the renal phenotype in its absence. These miRNAs include miRs-10a, 18a, 19b, 24, 30c, 92a, 106a, 130a, 152, 181a, 214, 222, 302a, 370, and 381 that regulate Bcl2L11 (Bim) and miRs-15b, 18a, 21, 30c, 92a, 106a, 125b-5p, 145, 214, 222, 296-5p and 302a that regulate p53-effector genes. Consistent with the profiling results, ectopic apoptosis was observed in the cellular derivatives of the FoxD1 derived progenitor lineage and reiterates the importance of renal stromal miRNAs in cellular homeostasis.
Nervous system
MiRNAs are crucial for the healthy development and function of the nervous system. Previous studies demonstrate that miRNAs can regulate neuronal differentiation and maturation at various stages. MiRNAs also play important roles in synaptic development (such as dendritogenesis or spine morphogenesis) and synaptic plasticity (contributing to learning and memory). Elimination of miRNA formation in mice by experimental silencing of Dicer has led to pathological outcomes, such as reduced neuronal size, motor abnormalities (when silenced in striatal neurons), and neurodegeneration (when silenced in forebrain neurons). Altered miRNA expression has been found in neurodegenerative diseases (such as Alzheimer's disease, Parkinson's disease, and Huntington's disease) as well as many psychiatric disorders (including epilepsy, schizophrenia, major depression, bipolar disorder, and anxiety disorders).
Stroke
According to the Centers for Disease Control and Prevention, stroke is one of the leading causes of death and long-term disability in the United States. 87% of cases are ischemic strokes, which result from a blockage in an artery of the brain that carries oxygen-rich blood. The obstruction of blood flow means the brain cannot receive necessary nutrients, such as oxygen and glucose, or remove wastes, such as carbon dioxide. miRNAs play a role in post-transcriptional gene silencing by targeting genes involved in the pathogenesis of cerebral ischemia, such as those in the inflammatory, angiogenesis, and apoptotic pathways.
Alcoholism
The vital role of miRNAs in gene expression is significant to addiction, specifically alcoholism. Chronic alcohol abuse results in persistent changes in brain function mediated in part by alterations in gene expression. Global regulation of many downstream genes by miRNAs appears significant for the reorganization of synaptic connections and the long-term neural adaptations underlying the behavioral transition from alcohol consumption to withdrawal and/or dependence. Up to 35 different miRNAs have been found to be altered in the alcoholic post-mortem brain, all of which target genes involved in the regulation of the cell cycle, apoptosis, cell adhesion, nervous system development and cell signaling. Altered miRNA levels were also found in the medial prefrontal cortex of alcohol-dependent mice, suggesting a role for miRNAs in orchestrating translational imbalances and the production of differentially expressed proteins in an area of the brain where complex cognitive behavior and decision making likely originate.
miRNAs can be either upregulated or downregulated in response to chronic alcohol use. miR-206 expression increased in the prefrontal cortex of alcohol-dependent rats, targeting brain-derived neurotrophic factor (BDNF) and ultimately reducing its expression. BDNF plays a critical role in the formation and maturation of new neurons and synapses, suggesting a possible implication in synapse growth and synaptic plasticity in alcohol abusers. miR-155, important in regulating alcohol-induced neuroinflammation responses, was found to be upregulated, suggesting the role of microglia and inflammatory cytokines in alcohol pathophysiology. Downregulation of miR-382 was found in the nucleus accumbens, a structure in the basal forebrain significant in regulating feelings of reward that power motivational habits. miR-382 targets the dopamine receptor D1 (DRD1), and its downregulation results in the upregulation of DRD1 and delta FosB, a transcription factor that activates a series of transcription events in the nucleus accumbens that ultimately result in addictive behaviors. Conversely, overexpressing miR-382 resulted in attenuated drinking and the inhibition of DRD1 and delta FosB upregulation in rat models of alcoholism, demonstrating the possibility of using miRNA-targeted pharmaceuticals in treatments.
Obesity
miRNAs play crucial roles in the regulation of stem cell progenitors differentiating into adipocytes. The role of pluripotent stem cells in adipogenesis was examined in the immortalized human bone marrow-derived stromal cell line hMSC-Tert20. Decreased expression of miR-155, miR-221, and miR-222 has been found during the adipogenic programming of both immortalized and primary hMSCs, suggesting that they act as negative regulators of differentiation. Conversely, ectopic expression of miR-155, miR-221, and miR-222 significantly inhibited adipogenesis and repressed induction of the master regulators PPARγ and CCAAT/enhancer-binding protein alpha (CEBPA). This paves the way for possible genetic obesity treatments.
Another class of miRNAs that regulate insulin resistance, obesity, and diabetes is the let-7 family. Let-7 accumulates in human tissues during the course of aging. When let-7 was ectopically overexpressed to mimic accelerated aging, mice became insulin-resistant and thus more prone to high-fat-diet-induced obesity and diabetes. In contrast, when let-7 was inhibited by injections of let-7-specific antagomirs, mice became more insulin-sensitive and remarkably resistant to high-fat-diet-induced obesity and diabetes. Not only could let-7 inhibition prevent obesity and diabetes, it could also reverse and cure the condition. These experimental findings suggest that let-7 inhibition could represent a new therapy for obesity and type 2 diabetes.
Hemostasis
miRNAs also play crucial roles in the regulation of complex enzymatic cascades, including the hemostatic blood coagulation system. Large-scale studies of functional miRNA targeting have recently uncovered rational therapeutic targets in the hemostatic system. miRNAs have also been directly linked to calcium homeostasis in the endoplasmic reticulum, which is critical for cell differentiation in early development.
Plants
miRNAs are considered to be key regulators of many developmental, homeostatic, and immune processes in plants. Their roles in plant development include shoot apical meristem development, leaf growth, flower formation, seed production, and root expansion. In addition, they play a complex role in responses to various abiotic stresses, including heat stress, low-temperature stress, drought stress, light stress, and gamma radiation exposure.
Viruses
Viral microRNAs play an important role in the regulation of gene expression of viral and/or host genes to benefit the virus. Hence, miRNAs play a key role in host–virus interactions and pathogenesis of viral diseases. The expression of transcription activators by human herpesvirus-6 DNA is believed to be regulated by viral miRNA.
Target prediction
miRNAs can bind to target messenger RNA (mRNA) transcripts of protein-coding genes and negatively control their translation or cause mRNA degradation. It is of key importance to identify the miRNA targets accurately. A comparison of the predictive performance of eighteen in silico algorithms is available. Large scale studies of functional miRNA targeting suggest that many functional miRNAs can be missed by target prediction algorithms.
| Biology and health sciences | Molecular biology | Biology |
156970 | https://en.wikipedia.org/wiki/Cytoskeleton | Cytoskeleton | The cytoskeleton is a complex, dynamic network of interlinking protein filaments present in the cytoplasm of all cells, including those of bacteria and archaea. In eukaryotes, it extends from the cell nucleus to the cell membrane and is composed of similar proteins in the various organisms. It is composed of three main components: microfilaments, intermediate filaments, and microtubules, and these are all capable of rapid growth or disassembly depending on the cell's requirements.
A multitude of functions can be performed by the cytoskeleton. Its primary function is to give the cell its shape and mechanical resistance to deformation, and through association with extracellular connective tissue and other cells it stabilizes entire tissues. The cytoskeleton can also contract, thereby deforming the cell and the cell's environment and allowing cells to migrate. Moreover, it is involved in many cell signaling pathways and in the uptake of extracellular material (endocytosis), the segregation of chromosomes during cellular division, the cytokinesis stage of cell division, as scaffolding to organize the contents of the cell in space and in intracellular transport (for example, the movement of vesicles and organelles within the cell) and can be a template for the construction of a cell wall. Furthermore, it can form specialized structures, such as flagella, cilia, lamellipodia and podosomes. The structure, function and dynamic behavior of the cytoskeleton can be very different, depending on organism and cell type. Even within one cell, the cytoskeleton can change through association with other proteins and the previous history of the network.
A large-scale example of an action performed by the cytoskeleton is muscle contraction, carried out by groups of highly specialized cells working together. A main cytoskeletal component involved in muscle contraction is the microfilament. Microfilaments are composed of actin, the most abundant cellular protein. During contraction of a muscle, within each muscle cell, myosin molecular motors collectively exert forces on parallel actin filaments. Muscle contraction starts with nerve impulses, which cause increased amounts of calcium to be released from the sarcoplasmic reticulum. The increase of calcium in the cytosol allows muscle contraction to begin with the help of two proteins, tropomyosin and troponin. Tropomyosin inhibits the interaction between actin and myosin, while troponin senses the increase in calcium and releases the inhibition. This action contracts the muscle cell, and through the synchronous process in many muscle cells, the entire muscle.
History
In 1903, Nikolai K. Koltsov proposed that the shape of cells was determined by a network of tubules that he termed the cytoskeleton. The concept of a protein mosaic that dynamically coordinated cytoplasmic biochemistry was proposed by Rudolph Peters in 1929 while the term (cytosquelette, in French) was first introduced by French embryologist Paul Wintrebert in 1931.
When the cytoskeleton was first introduced, it was thought to be an uninteresting gel-like substance that helped organelles stay in place. Much research took place to try to understand the purpose of the cytoskeleton and its components.
Initially, it was thought that the cytoskeleton was exclusive to eukaryotes, but in 1992 it was discovered to be present in prokaryotes as well. This discovery came after the realization that bacteria possess proteins that are homologous to tubulin and actin, the main components of the eukaryotic cytoskeleton.
Eukaryotic cytoskeleton
Eukaryotic cells contain three main kinds of cytoskeletal filaments: microfilaments, microtubules, and intermediate filaments. In neurons the intermediate filaments are known as neurofilaments. Each type is formed by the polymerization of a distinct type of protein subunit and has its own characteristic shape and intracellular distribution. Microfilaments are polymers of the protein actin and are 7 nm in diameter. Microtubules are composed of tubulin and are 25 nm in diameter. Intermediate filaments are composed of various proteins, depending on the type of cell in which they are found; they are normally 8-12 nm in diameter. The cytoskeleton provides the cell with structure and shape, and by excluding macromolecules from some of the cytosol, it adds to the level of macromolecular crowding in this compartment. Cytoskeletal elements interact extensively and intimately with cellular membranes.
Research into neurodegenerative disorders such as Parkinson's disease, Alzheimer's disease, Huntington's disease, and amyotrophic lateral sclerosis (ALS) indicates that the cytoskeleton is affected in these diseases. Parkinson's disease is marked by the degradation of neurons, resulting in tremors, rigidity, and other non-motor symptoms. Research has shown that microtubule assembly and stability in the cytoskeleton is compromised, causing the neurons to degrade over time. In Alzheimer's disease, tau proteins, which stabilize microtubules, malfunction in the progression of the illness, causing pathology of the cytoskeleton. Excess glutamine in the huntingtin protein, which is involved in linking vesicles onto the cytoskeleton, is also proposed to be a factor in the development of Huntington's disease. Amyotrophic lateral sclerosis results in a loss of movement caused by the degradation of motor neurons, and also involves defects of the cytoskeleton.
Stuart Hameroff and Roger Penrose suggest a role of microtubule vibrations in neurons in the origin of consciousness.
Accessory proteins including motor proteins regulate and link the filaments to other cell compounds and each other and are essential for controlled assembly of cytoskeletal filaments in particular locations.
A number of small-molecule cytoskeletal drugs have been discovered that interact with actin and microtubules. These compounds have proven useful in studying the cytoskeleton, and several have clinical applications.
Microfilaments
Microfilaments, also known as actin filaments, are composed of linear polymers of G-actin proteins, and generate force when the growing (plus) end of the filament pushes against a barrier, such as the cell membrane. They also act as tracks for the movement of myosin molecules that affix to the microfilament and "walk" along them. In general, the major protein component of microfilaments is actin. G-actin monomers combine to form a polymer, which continues to form the microfilament (actin filament). These subunits then assemble into two chains that intertwine into what are called F-actin chains. Myosin motoring along F-actin filaments generates contractile forces in so-called actomyosin fibers, both in muscle as well as most non-muscle cell types. Actin structures are controlled by the Rho family of small GTP-binding proteins such as Rho itself for contractile acto-myosin filaments ("stress fibers"), Rac for lamellipodia and Cdc42 for filopodia.
Functions include:
Muscle contraction
Cell movement
Intracellular transport/trafficking
Maintenance of eukaryotic cell shape
Cytokinesis
Cytoplasmic streaming
Intermediate filaments
Intermediate filaments are a part of the cytoskeleton of many eukaryotic cells. These filaments, averaging 10 nanometers in diameter, are more stable (strongly bound) than microfilaments and are heterogeneous constituents of the cytoskeleton. Like actin filaments, they function in the maintenance of cell shape by bearing tension (microtubules, by contrast, resist compression but can also bear tension during mitosis and during the positioning of the centrosome). Intermediate filaments organize the internal tridimensional structure of the cell, anchoring organelles and serving as structural components of the nuclear lamina. They also participate in some cell-cell and cell-matrix junctions. Nuclear lamina exist in all animals and all tissues. Some animals like the fruit fly do not have any cytoplasmic intermediate filaments. In those animals that express cytoplasmic intermediate filaments, these are tissue specific. Keratin intermediate filaments in epithelial cells provide protection for different mechanical stresses the skin may endure. They also provide protection for organs against metabolic, oxidative, and chemical stresses. Strengthening of epithelial cells with these intermediate filaments may prevent onset of apoptosis, or cell death, by reducing the probability of stress.
Intermediate filaments are most commonly known as the support system or "scaffolding" for the cell and nucleus while also playing a role in some cell functions. In combination with proteins and desmosomes, the intermediate filaments form cell-cell connections and anchor the cell-matrix junctions that are used in messaging between cells as well as vital functions of the cell. These connections allow the cell to communicate through the desmosomes of multiple cells to adjust structures of the tissue based on signals from the cell's environment. Mutations in the IF proteins have been shown to cause serious medical issues such as premature aging, desmin mutations compromising organs, Alexander disease, and muscular dystrophy.
Different intermediate filaments are:
made of vimentins. Vimentin intermediate filaments are in general present in mesenchymal cells.
made of keratin. Keratin is present in general in epithelial cells.
neurofilaments of neural cells.
made of lamin, giving structural support to the nuclear envelope.
made of desmin, play an important role in structural and mechanical support of muscle cells.
Microtubules
Microtubules are hollow cylinders about 23 nm in diameter (lumen diameter of approximately 15 nm), most commonly comprising 13 protofilaments that, in turn, are polymers of alpha and beta tubulin. They have a very dynamic behavior, binding GTP for polymerization. They are commonly organized by the centrosome.
In nine triplet sets (star-shaped), they form the centrioles, and in nine doublets oriented about two additional microtubules (wheel-shaped), they form cilia and flagella. The latter formation is commonly referred to as a "9+2" arrangement, wherein each doublet is connected to another by the protein dynein. As both flagella and cilia are structural components of the cell, and are maintained by microtubules, they can be considered part of the cytoskeleton. There are two types of cilia: motile and non-motile cilia. Cilia are short and more numerous than flagella. The motile cilia have a rhythmic waving or beating motion, whereas the non-motile cilia receive sensory information for the cell, processing signals from other cells or the fluids surrounding it. Additionally, the microtubules control the beating (movement) of the cilia and flagella. Also, the dynein arms attached to the microtubules function as the molecular motors. The motion of the cilia and flagella is created by the microtubules sliding past one another, which requires ATP.
They play key roles in:
intracellular transport (associated with dyneins and kinesins, they transport organelles like mitochondria or vesicles).
the axoneme of cilia and flagella.
the mitotic spindle.
synthesis of the cell wall in plants.
In addition to the roles described above, Stuart Hameroff and Roger Penrose have proposed that microtubules function in consciousness.
Comparison
Septins
Septins are a group of the highly conserved GTP binding proteins found in eukaryotes. Different septins form protein complexes with each other. These can assemble to filaments and rings. Therefore, septins can be considered part of the cytoskeleton. The function of septins in cells include serving as a localized attachment site for other proteins, and preventing the diffusion of certain molecules from one cell compartment to another. In yeast cells, they build scaffolding to provide structural support during cell division and compartmentalize parts of the cell. Recent research in human cells suggests that septins build cages around bacterial pathogens, immobilizing the harmful microbes and preventing them from invading other cells.
Spectrin
Spectrin is a cytoskeletal protein that lines the intracellular side of the plasma membrane in eukaryotic cells. Spectrin forms pentagonal or hexagonal arrangements, forming a scaffolding and playing an important role in maintenance of plasma membrane integrity and cytoskeletal structure.
Yeast cytoskeleton
In budding yeast (an important model organism), actin forms cortical patches, actin cables, and a cytokinetic ring and the cap. Cortical patches are discrete actin bodies on the membrane and are vital for endocytosis, especially the recycling of glucan synthase which is important for cell wall synthesis. Actin cables are bundles of actin filaments and are involved in the transport of vesicles towards the cap (which contains a number of different proteins to polarize cell growth) and in the positioning of mitochondria. The cytokinetic ring forms and constricts around the site of cell division.
Prokaryotic cytoskeleton
Prior to the work of Jones et al., 2001, the cell wall was believed to be the deciding factor for many bacterial cell shapes, including rods and spirals. When studied, many misshapen bacteria were found to have mutations linked to development of a cell envelope. The cytoskeleton was once thought to be a feature only of eukaryotic cells, but homologues to all the major proteins of the eukaryotic cytoskeleton have been found in prokaryotes. Harold Erickson notes that before 1992, only eukaryotes were believed to have cytoskeleton components. However, research in the early '90s suggested that bacteria and archaea had homologues of actin and tubulin, and that these were the basis of eukaryotic microtubules and microfilaments. Although the evolutionary relationships are so distant that they are not obvious from protein sequence comparisons alone, the similarity of their three-dimensional structures and similar functions in maintaining cell shape and polarity provides strong evidence that the eukaryotic and prokaryotic cytoskeletons are truly homologous. Three laboratories independently discovered that FtsZ, a protein already known as a key player in bacterial cytokinesis, had the "tubulin signature sequence" present in all α-, β-, and γ-tubulins. However, some structures in the bacterial cytoskeleton may not have been identified as of yet.
FtsZ
FtsZ was the first protein of the prokaryotic cytoskeleton to be identified. Like tubulin, FtsZ forms filaments in the presence of guanosine triphosphate (GTP), but these filaments do not group into tubules. During cell division, FtsZ is the first protein to move to the division site, and is essential for recruiting other proteins that synthesize the new cell wall between the dividing cells.
MreB and ParM
Prokaryotic actin-like proteins, such as MreB, are involved in the maintenance of cell shape. All non-spherical bacteria have genes encoding actin-like proteins, and these proteins form a helical network beneath the cell membrane that guides the proteins involved in cell wall biosynthesis.
Some plasmids encode a separate system that involves an actin-like protein ParM. Filaments of ParM exhibit dynamic instability, and may partition plasmid DNA into the dividing daughter cells by a mechanism analogous to that used by microtubules during eukaryotic mitosis.
Crescentin
The bacterium Caulobacter crescentus contains a third protein, crescentin, that is related to the intermediate filaments of eukaryotic cells. Crescentin is also involved in maintaining cell shape, such as helical and vibrioid forms of bacteria, but the mechanism by which it does this is currently unclear. Additionally, curvature could be described by the displacement of crescentic filaments, after the disruption of peptidoglycan synthesis.
The cytoskeleton and cell mechanics
The cytoskeleton is a highly anisotropic and dynamic network, constantly remodeling itself in response to the changing cellular microenvironment. The network influences cell mechanics and dynamics by differentially polymerizing and depolymerizing its constituent filaments (primarily actin and myosin, but microtubules and intermediate filaments also play a role). This generates forces, which play an important role in informing the cell of its microenvironment. Specifically, forces such as tension, stiffness, and shear forces have all been shown to influence cell fate, differentiation, migration, and motility. Through a process called “mechanotransduction,” the cell remodels its cytoskeleton to sense and respond to these forces.
Mechanotransduction relies heavily on focal adhesions, which essentially connect the intracellular cytoskeleton with the extracellular matrix (ECM). Through focal adhesions, the cell is able to integrate extracellular forces into intracellular ones as the proteins present at focal adhesions undergo conformational changes to initiate signaling cascades. Proteins such as focal adhesion kinase (FAK) and Src have been shown to transduce force signals in response to cellular activities such as proliferation and differentiation, and are hypothesized to be key sensors in the mechanotransduction pathway. As a result of mechanotransduction, the cytoskeleton changes its composition and/or orientation to accommodate the force stimulus and ensure the cell responds accordingly.
The cytoskeleton changes the mechanics of the cell in response to detected forces. For example, increasing tension within the plasma membrane makes it more likely that ion channels will open, which increases ion conductance and makes cellular change through ion influx or efflux much more likely. Moreover, the mechanical properties of cells determine how far and where, directionally, a force will propagate throughout the cell and how it will change cell dynamics. A membrane protein that is not closely coupled to the cytoskeleton, for instance, will not produce a significant effect on the cortical actin network if it is subjected to a specifically directed force. However, membrane proteins that are more closely associated with the cytoskeleton will induce a more significant response. In this way, the anisotropy of the cytoskeleton serves to more keenly direct cell responses to intra- or extracellular signals.
Long-range order
The specific pathways and mechanisms by which the cytoskeleton senses and responds to forces are still under investigation. However, the long-range order generated by the cytoskeleton is known to contribute to mechanotransduction. Cells, which are around 10–50 μm in diameter, are several thousand times larger than the molecules found within the cytoplasm that are essential to coordinate cellular activities. Because cells are so large in comparison to essential biomolecules, it is difficult, in the absence of an organizing network, for different parts of the cytoplasm to communicate. Moreover, biomolecules must polymerize to lengths comparable to the length of the cell, but resulting polymers can be highly disorganized and unable to effectively transmit signals from one part of the cytoplasm to another. Thus, it is necessary to have the cytoskeleton to organize the polymers and ensure that they can effectively communicate across the entirety of the cell.
Common features and differences between prokaryotes and eukaryotes
By definition, the cytoskeleton is composed of proteins that can form longitudinal arrays (fibres) in all organisms. These filament-forming proteins have been classified into four classes: tubulin-like, actin-like, Walker A cytoskeletal ATPases (WACA proteins), and intermediate filaments.
Tubulin-like proteins are tubulin in eukaryotes and FtsZ, TubZ, RepX in prokaryotes. Actin-like proteins are actin in eukaryotes and MreB, FtsA in prokaryotes. An example of the WACA proteins, which are mostly found in prokaryotes, is MinD. Examples of intermediate filaments, which have almost exclusively been found in animals (i.e. eukaryotes), are the lamins, keratins, vimentin, neurofilaments, and desmin.
Although tubulin-like proteins share some amino acid sequence similarity, their equivalence in protein-fold and the similarity in the GTP binding site is more striking. The same holds true for the actin-like proteins and their structure and ATP binding domain.
Cytoskeletal proteins are usually correlated with cell shape, DNA segregation and cell division in prokaryotes and eukaryotes. Which proteins fulfill which task is very different. For example, DNA segregation in all eukaryotes happens through use of tubulin, but in prokaryotes either WACA proteins, actin-like or tubulin-like proteins can be used. Cell division is mediated in eukaryotes by actin, but in prokaryotes usually by tubulin-like (often FtsZ-ring) proteins and sometimes (Thermoproteota) ESCRT-III, which in eukaryotes still has a role in the last step of division.
Cytoplasmic streaming
Cytoplasmic streaming, also known as cyclosis, is the active movement of a cell's contents along the components of the cytoskeleton. While mainly seen in plants, all cell types use this process for transportation of waste, nutrients, and organelles to other parts of the cell. Plant and algal cells are generally larger than many other cells, so cytoplasmic streaming is important in these types of cells. This is because the cell's extra volume requires cytoplasmic streaming in order to move organelles throughout the entire cell. Organelles move along microfilaments in the cytoskeleton, driven by myosin motors binding and pushing along actin filament bundles.
| Biology and health sciences | Organelles and other cell parts | null |
157009 | https://en.wikipedia.org/wiki/Intersection%20%28road%29 | Intersection (road) | An intersection or an at-grade junction is a junction where two or more roads converge, diverge, meet or cross at the same height, as opposed to an interchange, which uses bridges or tunnels to separate different roads. Major intersections are often delineated by gores and may be classified by road segments, traffic controls and lane design.
This article primarily reflects practice in jurisdictions where vehicles are driven on the right. If not otherwise specified, "right" and "left" can be reversed to reflect jurisdictions where vehicles are driven on the left.
Types
Road segments
One way to classify intersections is by the number of road segments (arms) that are involved.
A three-way intersection is a junction between three road segments (arms): a T junction when two arms form one road, or a Y junction, the latter also known as a fork if approached from the stem of the Y.
A four-way intersection, or crossroads, usually involves a crossing over of two streets or roads. In areas where there are blocks and in some other cases, the crossing streets or roads are perpendicular to each other. However, two roads may cross at a different angle. In a few cases, the junction of two road segments may be offset from each other when reaching an intersection, even though both ends may be considered the same street.
Six-way intersections usually involve a crossing of three streets at one junction; for example, a crossing of two perpendicular streets and a diagonal street is a rather common type of 6-way intersection.
Five, seven or more approaches to a single intersection, such as at Seven Dials, London, are not common.
Traffic controls
Another way of classifying intersections is by traffic control technology:
Uncontrolled intersections, without signs or signals (or sometimes with a warning sign). Priority (right-of-way) rules may vary by country: on a 4-way intersection traffic from the right often has priority; on a 3-way intersection either traffic from the right has priority again, or traffic on the continuing road. For traffic coming from the same or opposite direction, that which goes straight has priority over that which turns off. (A minimal sketch of the priority-to-the-right rule follows this list.)
Yield-controlled intersections may or may not have specific "YIELD" signs (known as "GIVE WAY" signs in some countries).
Stop-controlled intersections have one or more "STOP" signs. Two-way stops are common, while some countries also employ four-way stops.
Signal-controlled intersections depend on traffic lights, usually electric, which indicate which traffic is allowed to proceed at any particular time.
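The following is a minimal sketch of the priority-to-the-right rule mentioned for uncontrolled intersections (an illustration only, not a legal specification; it ignores the straight-before-turning rule and assumes right-hand traffic):

```python
# Hypothetical illustration of "priority to the right" at an uncontrolled 4-way
# intersection with right-hand traffic. Approaches are the compass direction a
# vehicle arrives FROM.
RIGHT_OF = {"N": "W", "W": "S", "S": "E", "E": "N"}  # who is on a given approach's right

def must_yield(my_approach: str, other_approaches: set[str]) -> bool:
    """A vehicle must yield if another vehicle approaches from its right-hand side."""
    return RIGHT_OF[my_approach] in other_approaches

print(must_yield("N", {"W"}))  # True  - the vehicle from the west is on the right
print(must_yield("W", {"N"}))  # False - it has priority over the vehicle from the north
```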
Lane design
A traffic circle is a type of intersection at which traffic streams are directed around a circle. Types of traffic circles include roundabouts, "mini-roundabouts", "rotaries", "STOP"-controlled circles, and signal-controlled circles. Some people consider roundabouts to be a distinct type of intersection from traffic circles (with the distinction based on certain differences in size and engineering).
A box junction can be added to an intersection, generally prohibiting entry to the intersection unless the exit is clear.
Some (unconventional or alternative) intersections employ indirect left turns to increase capacity and reduce delays. The Michigan left combines a right turn and a U-turn. Jughandle lefts diverge to the right, then curve to the left, converting a left turn to a crossing maneuver, similar to throughabouts. These techniques are generally used in conjunction with signal-controlled intersections, although they may also be used at stop-controlled intersections.
Other designs include advanced stop lines, parallel-flow and continuous-flow intersections, hook turns, quadrants, seagull intersections, slip lanes, staggered junctions (junctions consisting of two opposing T-junctions where one road intersects two sideroads located diagonally opposite each other; in American English referred to as doglegs), superstreets, Texas Ts, Texas U-turns and turnarounds.
Other variants include the roundabout and its derivatives, such as turbo roundabouts and bowties, and distributing circles like traffic circles and right-in/right-out (RIRO) intersections.
Turns
At intersections, turns are usually allowed, but are often regulated to avoid interference with other traffic. Certain turns may be not allowed or may be limited by regulatory signs or signals, particularly those that cross oncoming traffic. Alternative designs often attempt to reduce or eliminate such potential conflicts.
Turn lanes
At intersections with large proportions of turning traffic, turn lanes (also known as turn bays) may be provided.
Turn lanes allow vehicles to cross oncoming traffic (i.e., a left turn in right-side driving countries, or a right turn in left-side driving countries), or to exit a road without crossing traffic (i.e., a right turn in right-side driving countries, or a left turn in left-side driving countries). Absence of a turn lane does not normally indicate a prohibition of turns in that direction. Instead, traffic control signs are used to prohibit specific turns.
Turn lanes can increase the capacity of an intersection or improve safety. Turn lanes can have a dramatic effect on the safety of a junction. In rural areas, crash frequency can be reduced by up to 48% if left turn lanes are provided on both main-road approaches at stop-controlled intersections. At signalized intersections, crashes can be reduced by 33%. Results are slightly lower in urban areas.
Turn lanes are marked with an arrow bending into the direction of the turn which is to be made from that lane. Multi-headed arrows indicate that vehicle drivers may travel in any one of the directions pointed to by an arrow.
Turn signals
Traffic signals facing vehicles in turn lanes often have arrow-shaped indications. North America uses various indication patterns. Green arrows indicate protected turn phases, when vehicles may turn unhindered by oncoming traffic. Red arrows may be displayed to prohibit turns in that direction. Red arrows may be displayed along with a circular green indication to show that turns in the direction of the arrow are prohibited, but other movements are allowed. In some jurisdictions, a red arrow prohibits a turn on red. In Europe, if different lanes have differing phases, red, yellow and green traffic lights corresponding to each lane have blacked-out areas in the middle in the shape of arrows indicating the direction(s) drivers in that lane may travel in. This makes it easier for drivers to be aware which traffic light they need to pay attention to. A green arrow may also be provided; when it is on, drivers heading in the direction of the arrow may proceed, but must yield to all other vehicles. This is similar to the right turn on red in the US.
Disadvantages to turn lanes include increased pavement area, with associated increases in construction and maintenance costs, as well as increased amounts of stormwater runoff. They also increase the distance over which pedestrians crossing the street are exposed to vehicle traffic. If a turn lane has a separate signal phase, it often increases the delay experienced by oncoming through traffic. Without a separate phase, left crossing traffic does not get the full safety benefit of the turn lane.
Lane management
Alternative intersection configurations, formerly called unconventional intersections, can manage turning traffic to increase safety and intersection throughput. These include the Michigan left/superstreet (RCUT/MUT) and the continuous-flow intersection (CFI/DLT), which improve traffic flow, as well as interchange types such as the diverging diamond interchange (DDI/DCD), promoted as part of the Federal Highway Administration's Every Day Counts initiative, which started in 2012.
Vulnerable road users
Vulnerable road users include pedestrians, cyclists, motorcyclists, and individuals using motorized scooters and similar devices. Compared to people who are in motor vehicles (like cars and trucks), they are much more likely to suffer catastrophic or fatal injuries at an intersection.
Pedestrians
Intersections generally must manage pedestrian as well as vehicle traffic. Pedestrian aids include crosswalks, pedestrian-directed traffic signals ("walk light") and over/underpasses. Traffic signals can be time consuming to navigate, especially if programmed to prioritise vehicle flow over pedestrians, while over and underpasses which rely on stairs are inaccessible to those who can not climb them. Walk lights may be accompanied by audio signals to aid the visually impaired. Medians can offer pedestrian islands, allowing pedestrians to divide their crossings into a separate segment for each traffic direction, possibly with a separate signal for each.
Some intersections display red lights in all directions for a period of time. Known as a pedestrian scramble, this type of all-way vehicle stop allows pedestrians to cross safely in any direction, including diagonally. An all-green phase for non-motorists is well known from the crossing at Shibuya Station, Tokyo.
In 2020, NHTSA reported that more than 50% of pedestrian deaths in the United States (3,262 in total) were attributed to failure to yield the right of way, which typically occurs at intersections.
Cyclists and motorcyclists
Poor visibility at junctions can lead to drivers colliding with cyclists and motorcyclists. Some junctions use advanced stop lines which allow cyclists to filter to the front of a traffic queue which makes them more visible to drivers.
Safety
A European study found that in Germany and Denmark, the most important crash scenario involving vulnerable road users was:
motor vehicle turning right/left while cyclist going straight;
motor vehicle turning right/left while pedestrian crossing the intersection approach.
These findings are supported by data elsewhere. According to the U.S. National Highway Traffic Safety Administration, roughly half of all U.S. car crashes occurred at intersections or were intersection related in 2019.
At grade railways
In the case of railways or rail tracks the term at grade applies to a rail line that is not on an embankment nor in an open cut. As such, it crosses streets and roads without going under or over them. This requires level crossings. At-grade railways may run along the median of a highway. The opposite is grade-separated. There may be overpasses or underpasses.
| Technology | Road infrastructure | null |
157055 | https://en.wikipedia.org/wiki/Law%20of%20large%20numbers | Law of large numbers | In probability theory, the law of large numbers (LLN) is a mathematical law that states that the average of the results obtained from a large number of independent random samples converges to the true value, if it exists. More formally, the LLN states that given a sample of independent and identically distributed values, the sample mean converges to the true mean.
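In standard textbook notation (assumed here, not quoted from this article), for independent and identically distributed random variables X1, X2, … with common mean μ, the statement reads:

```latex
% Standard formulation of the LLN (textbook notation, used here as an assumption).
\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i
% Weak law (convergence in probability):
\lim_{n\to\infty} \Pr\!\left( \left| \bar{X}_n - \mu \right| > \varepsilon \right) = 0
\quad \text{for every } \varepsilon > 0 .
% Strong law (almost sure convergence):
\Pr\!\left( \lim_{n\to\infty} \bar{X}_n = \mu \right) = 1 .
```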
The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. Importantly, the law applies (as the name indicates) only when a large number of observations are considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be "balanced" by the others (see the gambler's fallacy).
The LLN only applies to the average of the results obtained from repeated trials and claims that this average converges to the expected value; it does not claim that the sum of n results gets close to the expected value times n as n increases.
Throughout its history, many mathematicians have refined this law. Today, the LLN is used in many fields including statistics, probability theory, economics, and insurance.
Examples
For example, a single roll of a six-sided die produces one of the numbers 1, 2, 3, 4, 5, or 6, each with equal probability. Therefore, the expected value of the roll is
$$\frac{1+2+3+4+5+6}{6} = 3.5.$$
According to the law of large numbers, if a large number of six-sided dice are rolled, the average of their values (sometimes called the sample mean) will approach 3.5, with the precision increasing as more dice are rolled.
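A short simulation makes this concrete. The sketch below (Python with NumPy; the sample sizes and seed are arbitrary choices for illustration, not from the article) rolls increasingly many dice and prints the running sample mean, which drifts toward 3.5.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

for n in (10, 100, 10_000, 1_000_000):
    rolls = rng.integers(1, 7, size=n)   # fair six-sided die: values 1..6
    print(f"n = {n:>9}: sample mean = {rolls.mean():.4f}")
# The printed means approach the expected value 3.5 as n grows.
```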
It follows from the law of large numbers that the empirical probability of success in a series of Bernoulli trials will converge to the theoretical probability. For a Bernoulli random variable, the expected value is the theoretical probability of success, and the average of n such variables (assuming they are independent and identically distributed (i.i.d.)) is precisely the relative frequency.
For example, a fair coin toss is a Bernoulli trial. When a fair coin is flipped once, the theoretical probability that the outcome will be heads is equal to 1/2. Therefore, according to the law of large numbers, the proportion of heads in a "large" number of coin flips "should be" roughly 1/2. In particular, the proportion of heads after n flips will almost surely converge to 1/2 as n approaches infinity.
Although the proportion of heads (and tails) approaches 1/2, almost surely the absolute difference between the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference is a small number approaches zero as the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will approach zero. Intuitively, the expected absolute difference grows, but at a slower rate than the number of flips.
Another good example of the LLN is the Monte Carlo method. These methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The larger the number of repetitions, the better the approximation tends to be. The reason that this method is important is mainly that, sometimes, it is difficult or impossible to use other approaches.
Limitation
The average of the results obtained from a large number of trials may fail to converge in some cases. For instance, the average of n results taken from the Cauchy distribution or some Pareto distributions (α<1) will not converge as n becomes larger; the reason is heavy tails. The Cauchy distribution and the Pareto distribution represent two cases: the Cauchy distribution does not have an expectation, whereas the expectation of the Pareto distribution (α<1) is infinite. One way to generate the Cauchy-distributed example is where the random numbers equal the tangent of an angle uniformly distributed between −90° and +90°. The median is zero, but the expected value does not exist, and indeed the average of n such variables has the same distribution as one such variable. It does not converge in probability toward zero (or any other value) as n goes to infinity.
And if the trials embed a selection bias, typical in human economic/rational behaviour, the law of large numbers does not help in solving the bias. Even if the number of trials is increased the selection bias remains.
History
The Italian mathematician Gerolamo Cardano (1501–1576) stated without proof that the accuracies of empirical statistics tend to improve with the number of trials. This was then formalized as a law of large numbers. A special form of the LLN (for a binary random variable) was first proved by Jacob Bernoulli. It took him over 20 years to develop a sufficiently rigorous mathematical proof, which was published in his Ars Conjectandi (The Art of Conjecturing) in 1713. He named this his "Golden Theorem", but it became generally known as "Bernoulli's theorem". This should not be confused with Bernoulli's principle, named after Jacob Bernoulli's nephew Daniel Bernoulli. In 1837, S. D. Poisson further described it under the name "la loi des grands nombres" ("the law of large numbers"). Thereafter, it was known under both names, but the "law of large numbers" is most frequently used.
After Bernoulli and Poisson published their efforts, other mathematicians also contributed to refinement of the law, including Chebyshev, Markov, Borel, Cantelli, Kolmogorov and Khinchin. Markov showed that the law can apply to a random variable that does not have a finite variance under some other weaker assumption, and Khinchin showed in 1929 that if the series consists of independent identically distributed random variables, it suffices that the expected value exists for the weak law of large numbers to be true. These further studies have given rise to two prominent forms of the LLN. One is called the "weak" law and the other the "strong" law, in reference to two different modes of convergence of the cumulative sample means to the expected value; in particular, as explained below, the strong form implies the weak.
Forms
There are two different versions of the law of large numbers that are described below. They are called the strong law of large numbers and the weak law of large numbers. Stated for the case where X1, X2, ... is an infinite sequence of independent and identically distributed (i.i.d.) Lebesgue integrable random variables with expected value E(X1) = E(X2) = ... = μ, both versions of the law state that the sample average
$$\bar{X}_n = \frac{1}{n}(X_1 + \cdots + X_n)$$
converges to the expected value:
$$\bar{X}_n \to \mu \quad \text{as } n \to \infty.$$
(Lebesgue integrability of Xj means that the expected value E(Xj) exists according to Lebesgue integration and is finite. It does not mean that the associated probability measure is absolutely continuous with respect to Lebesgue measure.)
Introductory probability texts often additionally assume identical finite variance $\operatorname{Var}(X_i) = \sigma^2$ for all $i$ and no correlation between the random variables. In that case, the variance of the average of n random variables is
$$\operatorname{Var}(\bar{X}_n) = \frac{\sigma^2}{n},$$
which can be used to shorten and simplify the proofs. This assumption of finite variance is not necessary. Large or infinite variance will make the convergence slower, but the LLN holds anyway.
Mutual independence of the random variables can be replaced by pairwise independence or exchangeability in both versions of the law.
The difference between the strong and the weak version is concerned with the mode of convergence being asserted. For interpretation of these modes, see Convergence of random variables.
Weak law
The weak law of large numbers (also called Khinchin's law) states that, given a collection of independent and identically distributed (i.i.d.) samples from a random variable with finite mean, the sample mean converges in probability to the expected value:
$$\bar{X}_n \xrightarrow{P} \mu \quad \text{as } n \to \infty.$$
That is, for any positive number ε,
$$\lim_{n\to\infty} \Pr\bigl(|\bar{X}_n - \mu| < \varepsilon\bigr) = 1.$$
Interpreting this result, the weak law states that for any nonzero margin specified (ε), no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations will be close to the expected value; that is, within the margin.
As mentioned earlier, the weak law applies in the case of i.i.d. random variables, but it also applies in some other cases. For example, the variance may be different for each random variable in the series, keeping the expected value constant. If the variances are bounded, then the law applies, as shown by Chebyshev as early as 1867. (If the expected values change during the series, then we can simply apply the law to the average deviation from the respective expected values. The law then states that this converges in probability to zero.) In fact, Chebyshev's proof works so long as the variance of the average of the first n values goes to zero as n goes to infinity. As an example, assume that each random variable in the series follows a Gaussian distribution (normal distribution) with mean zero, but with variance equal to $2n/\log(n+1)$, which is not bounded. At each stage, the average will be normally distributed (as the average of a set of normally distributed variables). The variance of the sum is equal to the sum of the variances, which is asymptotic to $n^2/\log n$. The variance of the average is therefore asymptotic to $1/\log n$ and goes to zero.
There are also examples of the weak law applying even though the expected value does not exist.
Strong law
The strong law of large numbers (also called Kolmogorov's law) states that the sample average converges almost surely to the expected value:
$$\bar{X}_n \xrightarrow{a.s.} \mu \quad \text{as } n \to \infty.$$
That is,
$$\Pr\Bigl(\lim_{n\to\infty} \bar{X}_n = \mu\Bigr) = 1.$$
What this means is that, as the number of trials n goes to infinity, the probability that the average of the observations converges to the expected value, is equal to one. The modern proof of the strong law is more complex than that of the weak law, and relies on passing to an appropriate subsequence.
The strong law of large numbers can itself be seen as a special case of the pointwise ergodic theorem. This view justifies the intuitive interpretation of the expected value (for Lebesgue integration only) of a random variable when sampled repeatedly as the "long-term average".
This version is called the strong law because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability). However, the weak law is known to hold in certain conditions where the strong law does not hold, and then the convergence is only weak (in probability). See the differences between the weak law and the strong law below.
The strong law applies to independent identically distributed random variables having an expected value (like the weak law). This was proved by Kolmogorov in 1930. It can also apply in other cases. Kolmogorov also showed, in 1933, that if the variables are independent and identically distributed, then for the average to converge almost surely on something (this can be considered another statement of the strong law), it is necessary that they have an expected value (and then of course the average will converge almost surely on that).
If the summands are independent but not identically distributed, then
$$\bar{X}_n - \operatorname{E}[\bar{X}_n] \xrightarrow{a.s.} 0,$$
provided that each Xk has a finite second moment and
$$\sum_{k=1}^{\infty} \frac{\operatorname{Var}(X_k)}{k^2} < \infty.$$
This statement is known as Kolmogorov's strong law.
Differences between the weak law and the strong law
The weak law states that, for a specified large n, the average $\bar{X}_n$ is likely to be near μ. Thus, it leaves open the possibility that $|\bar{X}_n - \mu| > \varepsilon$ happens an infinite number of times, although at infrequent intervals (not necessarily for every n).
The strong law shows that this almost surely will not occur. It does not, however, provide a deterministic N = N(ε) beyond which the inequality $|\bar{X}_n - \mu| < \varepsilon$ is guaranteed to hold for every sample path, since the convergence is not necessarily uniform on the set where it holds.
There are also settings in which the weak law holds but the strong law does not, so that the convergence is only in probability.
Uniform laws of large numbers
There are extensions of the law of large numbers to collections of estimators, where the convergence is uniform over the collection; thus the name uniform law of large numbers.
Suppose f(x,θ) is some function defined for θ ∈ Θ, and continuous in θ. Then for any fixed θ, the sequence {f(X1,θ), f(X2,θ), ...} will be a sequence of independent and identically distributed random variables, such that the sample mean of this sequence converges in probability to E[f(X,θ)]. This is the pointwise (in θ) convergence.
A particular example of a uniform law of large numbers states the conditions under which the convergence happens uniformly in θ. If
Θ is compact,
f(x,θ) is continuous at each θ ∈ Θ for almost all x, and is a measurable function of x at each θ, and
there exists a dominating function d(x) such that E[d(X)] < ∞ and $\|f(x,\theta)\| \le d(x)$ for all θ ∈ Θ,
then E[f(X,θ)] is continuous in θ, and
$$\sup_{\theta\in\Theta} \left\| \frac{1}{n}\sum_{i=1}^{n} f(X_i,\theta) - \operatorname{E}[f(X,\theta)] \right\| \xrightarrow{P} 0.$$
This result is useful to derive consistency of a large class of estimators (see Extremum estimator).
Borel's law of large numbers
Borel's law of large numbers, named after Émile Borel, states that if an experiment is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified event is expected to occur approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be. More precisely, if E denotes the event in question, p its probability of occurrence, and Nn(E) the number of times E occurs in the first n trials, then with probability one,
$$\frac{N_n(E)}{n} \to p \quad \text{as } n \to \infty.$$
This theorem makes rigorous the intuitive notion of probability as the expected long-run relative frequency of an event's occurrence. It is a special case of any of several more general laws of large numbers in probability theory.
Chebyshev's inequality. Let X be a random variable with finite expected value μ and finite non-zero variance σ². Then for any real number k > 0,
$$\Pr\bigl(|X - \mu| \ge k\sigma\bigr) \le \frac{1}{k^2}.$$
Proof of the weak law
Given X1, X2, ... an infinite sequence of i.i.d. random variables with finite expected value $\operatorname{E}(X_1) = \operatorname{E}(X_2) = \cdots = \mu < \infty$, we are interested in the convergence of the sample average
$$\bar{X}_n = \frac{1}{n}(X_1 + \cdots + X_n).$$
The weak law of large numbers states that
$$\bar{X}_n \xrightarrow{P} \mu \quad \text{as } n \to \infty.$$
Proof using Chebyshev's inequality assuming finite variance
This proof uses the assumption of finite variance $\operatorname{Var}(X_i) = \sigma^2$ for all i. The independence of the random variables implies no correlation between them, and we have that
$$\operatorname{Var}(\bar{X}_n) = \frac{1}{n^2}\operatorname{Var}(X_1 + \cdots + X_n) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.$$
The common mean μ of the sequence is the mean of the sample average:
$$\operatorname{E}(\bar{X}_n) = \mu.$$
Using Chebyshev's inequality on $\bar{X}_n$ results in
$$\Pr\bigl(|\bar{X}_n - \mu| \ge \varepsilon\bigr) \le \frac{\sigma^2}{n\varepsilon^2}.$$
This may be used to obtain the following:
$$\Pr\bigl(|\bar{X}_n - \mu| < \varepsilon\bigr) = 1 - \Pr\bigl(|\bar{X}_n - \mu| \ge \varepsilon\bigr) \ge 1 - \frac{\sigma^2}{n\varepsilon^2}.$$
As n approaches infinity, the right-hand side approaches 1. And by the definition of convergence in probability, we have obtained
$$\bar{X}_n \xrightarrow{P} \mu \quad \text{as } n \to \infty.$$
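The bound 1 − σ²/(nε²) from this proof can be compared with simulated frequencies. The sketch below uses an exponential distribution, tolerance, and seed chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
mu, sigma2, eps = 1.0, 1.0, 0.1      # Exp(1): mean 1, variance 1
trials = 2_000

for n in (100, 1_000, 10_000):
    means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)
    observed = np.mean(np.abs(means - mu) < eps)
    bound = max(0.0, 1 - sigma2 / (n * eps**2))
    print(f"n = {n:>6}: P(|mean - mu| < eps) approx {observed:.3f}  (Chebyshev bound >= {bound:.3f})")
# The observed probability sits above the (often loose) Chebyshev bound
# and tends to 1 as n grows, as the weak law asserts.
```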
Proof using convergence of characteristic functions
By Taylor's theorem for complex functions, the characteristic function of any random variable, X, with finite mean μ, can be written as
$$\varphi_X(t) = 1 + it\mu + o(t), \quad t \to 0.$$
All X1, X2, ... have the same characteristic function, so we will simply denote this φX.
Among the basic properties of characteristic functions there are
$$\varphi_{\frac{1}{n}X}(t) = \varphi_X\!\left(\tfrac{t}{n}\right) \quad\text{and}\quad \varphi_{X+Y}(t) = \varphi_X(t)\,\varphi_Y(t) \quad \text{if } X \text{ and } Y \text{ are independent.}$$
These rules can be used to calculate the characteristic function of $\bar{X}_n$ in terms of φX:
$$\varphi_{\bar{X}_n}(t) = \left[\varphi_X\!\left(\tfrac{t}{n}\right)\right]^{n} = \left[1 + \frac{i\mu t}{n} + o\!\left(\tfrac{t}{n}\right)\right]^{n} \;\to\; e^{it\mu} \quad \text{as } n \to \infty.$$
The limit $e^{it\mu}$ is the characteristic function of the constant random variable μ, and hence by the Lévy continuity theorem, $\bar{X}_n$ converges in distribution to μ:
$$\bar{X}_n \xrightarrow{D} \mu \quad \text{as } n \to \infty.$$
μ is a constant, which implies that convergence in distribution to μ and convergence in probability to μ are equivalent (see Convergence of random variables). Therefore,
$$\bar{X}_n \xrightarrow{P} \mu \quad \text{as } n \to \infty.$$
This shows that the sample mean converges in probability to the derivative of the characteristic function at the origin, as long as the latter exists.
Proof of the strong law
We give a relatively simple proof of the strong law under the assumptions that the $X_i$ are i.i.d., $\operatorname{E}[X_i] =: \mu < \infty$, $\operatorname{Var}(X_i) = \sigma^2 < \infty$, and $\operatorname{E}[X_i^4] =: \tau < \infty$.
Let us first note that without loss of generality we can assume that μ = 0 by centering. In this case, the strong law says that
$$\Pr\Bigl(\lim_{n\to\infty} \frac{S_n}{n} = 0\Bigr) = 1, \qquad S_n := X_1 + \cdots + X_n,$$
or, equivalently, that the event $\{S_n/n \not\to 0\}$ has probability zero. Note that this event is the union, over positive rational ε, of the events $\{|S_n|/n \ge \varepsilon \text{ infinitely often}\}$, and thus to prove the strong law we need to show that for every $\varepsilon > 0$ we have
$$\Pr\bigl(|S_n| \ge n\varepsilon \text{ infinitely often}\bigr) = 0.$$
Define the events $A_n = \{|S_n| \ge n\varepsilon\}$; if we can show that
$$\sum_{n=1}^{\infty} \Pr(A_n) < \infty,$$
then the Borel–Cantelli lemma implies the result. So let us estimate $\Pr(A_n)$.
We compute
$$\operatorname{E}[S_n^4] = \operatorname{E}\Bigl[\sum_{1 \le i,j,k,l \le n} X_i X_j X_k X_l\Bigr] = \sum_{1 \le i,j,k,l \le n} \operatorname{E}[X_i X_j X_k X_l].$$
We first claim that every term of the form $\operatorname{E}[X_i^3 X_j]$, $\operatorname{E}[X_i^2 X_j X_k]$, $\operatorname{E}[X_i X_j X_k X_l]$, where all subscripts are distinct, must have zero expectation. This is because $\operatorname{E}[X_i^3 X_j] = \operatorname{E}[X_i^3]\,\operatorname{E}[X_j]$ by independence, and the last factor is zero, and similarly for the other terms. Therefore the only terms in the sum with nonzero expectation are $\operatorname{E}[X_i^4]$ and $\operatorname{E}[X_i^2 X_j^2]$ with $i \ne j$. Since the $X_i$ are identically distributed, all of these are the same, and moreover $\operatorname{E}[X_i^2 X_j^2] = \bigl(\operatorname{E}[X_i^2]\bigr)^2 \le \operatorname{E}[X_i^4] = \tau$.
There are $n$ terms of the form $\operatorname{E}[X_i^4]$ and $3n(n-1)$ terms of the form $\operatorname{E}[X_i^2 X_j^2]$, and so
$$\operatorname{E}[S_n^4] \le n\tau + 3n(n-1)\tau.$$
Note that the right-hand side is a quadratic polynomial in n, and as such there exists a constant $C > 0$ such that $\operatorname{E}[S_n^4] \le C n^2$ for sufficiently large n. By Markov's inequality,
$$\Pr\bigl(|S_n| \ge n\varepsilon\bigr) = \Pr\bigl(S_n^4 \ge n^4\varepsilon^4\bigr) \le \frac{\operatorname{E}[S_n^4]}{n^4\varepsilon^4} \le \frac{C}{\varepsilon^4 n^2}$$
for sufficiently large n, and therefore this series is summable. Since this holds for any $\varepsilon > 0$, we have established the strong LLN.
Another proof was given by Etemadi.
For a proof without the added assumption of a finite fourth moment, see Section 22 of Billingsley.
Consequences
The law of large numbers makes it possible to estimate not only the expectation of an unknown distribution from a realization of the sequence, but also any other feature of the probability distribution. By applying Borel's law of large numbers, one could easily obtain the probability mass function: for each event in the objective probability mass function, one could approximate the probability of the event's occurrence with the proportion of times that the event occurs. The larger the number of repetitions, the better the approximation. As for the continuous case, one takes a small interval $C = (a-h, a+h]$ for small positive h. Thus, for large n,
$$\frac{\#\{1 \le i \le n : X_i \in C\}}{n} \approx 2h\,f(a),$$
where f is the underlying density.
With this method, one can cover the whole x-axis with a grid (with grid size 2h) and obtain a bar graph which is called a histogram.
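A minimal sketch of this histogram construction follows; the distribution, grid size, and sample size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
x = rng.normal(size=100_000)          # samples from an (here, standard normal) density
h = 0.25                              # half-width of each cell; grid size is 2h
edges = np.arange(-4, 4 + 2 * h, 2 * h)

counts, _ = np.histogram(x, bins=edges)
density_estimate = counts / (x.size * 2 * h)   # proportion in each cell divided by 2h
# By the law of large numbers each bar converges to the average density over its cell,
# so for small h the bar heights approximate the underlying density.
print(density_estimate.max())
```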
Applications
One application of the LLN is an important method of approximation known as the Monte Carlo method, which uses a random sampling of numbers to approximate numerical results. The algorithm to compute an integral of f(x) on an interval [a,b] is as follows:
Simulate uniform random variables X1, X2, ..., Xn. This can be done using software, or by using a random number table that gives U1, U2, ..., Un, independent and identically distributed (i.i.d.) random variables on [0,1]; then let Xi = a + (b − a)Ui for i = 1, 2, ..., n. Then X1, X2, ..., Xn are independent and identically distributed uniform random variables on [a, b].
Evaluate f(X1), f(X2), ..., f(Xn)
Take the average of f(X1), f(X2), ..., f(Xn) by computing
$$\frac{1}{n}\sum_{i=1}^{n} f(X_i);$$
then, by the strong law of large numbers, this converges to
$$\operatorname{E}[f(X)] = \frac{1}{b-a}\int_a^b f(x)\,dx,$$
so that
$$(b-a)\cdot\frac{1}{n}\sum_{i=1}^{n} f(X_i) \;\approx\; \int_a^b f(x)\,dx.$$
A sketch of this procedure in code is given below.
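The sketch below implements the three steps above in Python; the integrand and interval are illustrative choices (the integral of x² over [0, 1] is exactly 1/3), not the example discussed next in the text.

```python
import numpy as np

def mc_integral(g, a, b, n, seed=0):
    """Monte Carlo estimate of the integral of g over [a, b] using n samples."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)                 # U_1..U_n i.i.d. uniform on [0, 1]
    x = a + (b - a) * u               # X_i uniform on [a, b]
    return (b - a) * np.mean(g(x))    # (b - a) * average of g(X_i) -> integral

for n in (25, 250, 250_000):
    print(f"n = {n:>7}: estimate = {mc_integral(lambda x: x**2, 0.0, 1.0, n):.5f}")
```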
We can find the integral of on [-1,2]. Using traditional methods to compute this integral is very difficult, so the Monte Carlo method can be used here. Using the above algorithm, we get
= 0.905 when n=25
and
= 1.028 when n=250
We observe that as n increases, the numerical value also increases. When we get the actual results for the integral we get
= 1.000194
As the number of samples increases, the LLN-based approximation of the integral gets closer to its true value, and is thus more accurate.
Another example is the integration of f(x) = on [0,1]. Using the Monte Carlo method and the LLN, we can see that as the number of samples increases, the numerical value gets closer to 0.4180233.
| Mathematics | Statistics and probability | null |
157057 | https://en.wikipedia.org/wiki/Correlation | Correlation | In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense, "correlation" may indicate any type of association, in statistics it usually refers to the degree to which a pair of variables are linearly related.
Familiar examples of dependent phenomena include the correlation between the height of parents and their offspring, and the correlation between the price of a good and the quantity the consumers are willing to purchase, as it is depicted in the demand curve.
Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. In this example, there is a causal relationship, because extreme weather causes people to use more electricity for heating or cooling. However, in general, the presence of a correlation is not sufficient to infer the presence of a causal relationship (i.e., correlation does not imply causation).
Formally, random variables are dependent if they do not satisfy a mathematical property of probabilistic independence. In informal parlance, correlation is synonymous with dependence. However, when used in a technical sense, correlation refers to any of several specific types of mathematical relationship in which the conditional expectation of one variable given the other is not constant as the conditioning variable changes; broadly, correlation in this specific sense is used when $\operatorname{E}(Y \mid X = x)$ is related to x in some manner (such as linearly, monotonically, or perhaps according to some particular functional form such as logarithmic). Essentially, correlation is the measure of how two or more variables are related to one another. There are several correlation coefficients, often denoted ρ or r, measuring the degree of correlation. The most common of these is the Pearson correlation coefficient, which is sensitive only to a linear relationship between two variables (which may be present even when one variable is a nonlinear function of the other). Other correlation coefficients – such as Spearman's rank correlation coefficient – have been developed to be more robust than Pearson's, that is, more sensitive to nonlinear relationships. Mutual information can also be applied to measure dependence between two variables.
Pearson's product-moment coefficient
The most familiar measure of dependence between two quantities is the Pearson product-moment correlation coefficient (PPMCC), or "Pearson's correlation coefficient", commonly called simply "the correlation coefficient". It is obtained by taking the covariance of the two variables in question and normalizing it by the square root of the product of their variances; equivalently, one divides the covariance of the two variables by the product of their standard deviations. Karl Pearson developed the coefficient from a similar but slightly different idea by Francis Galton.
A Pearson product-moment correlation coefficient attempts to establish a line of best fit through a dataset of two variables by essentially laying out the expected values and the resulting Pearson's correlation coefficient indicates how far away the actual dataset is from the expected values. Depending on the sign of our Pearson's correlation coefficient, we can end up with either a negative or positive correlation if there is any sort of relationship between the variables of our data set.
The population correlation coefficient $\rho_{X,Y}$ between two random variables X and Y with expected values $\mu_X$ and $\mu_Y$ and standard deviations $\sigma_X$ and $\sigma_Y$ is defined as
$$\rho_{X,Y} = \operatorname{corr}(X, Y) = \frac{\operatorname{cov}(X, Y)}{\sigma_X \sigma_Y} = \frac{\operatorname{E}[(X - \mu_X)(Y - \mu_Y)]}{\sigma_X \sigma_Y},$$
where $\operatorname{E}$ is the expected value operator, $\operatorname{cov}$ means covariance, and $\operatorname{corr}$ is a widely used alternative notation for the correlation coefficient. The Pearson correlation is defined only if both standard deviations are finite and positive. An alternative formula purely in terms of moments is:
$$\rho_{X,Y} = \frac{\operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y]}{\sqrt{\operatorname{E}[X^2] - \operatorname{E}[X]^2}\,\sqrt{\operatorname{E}[Y^2] - \operatorname{E}[Y]^2}}.$$
Correlation and independence
It is a corollary of the Cauchy–Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between −1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), −1 in the case of a perfect inverse (decreasing) linear relationship (anti-correlation), and some value in the open interval (−1, 1) in all other cases, indicating the degree of linear dependence between the variables. As it approaches zero there is less of a relationship (closer to uncorrelated). The closer the coefficient is to either −1 or 1, the stronger the correlation between the variables.
If the variables are independent, Pearson's correlation coefficient is 0. However, because the correlation coefficient detects only linear dependencies between two variables, the converse is not necessarily true. A correlation coefficient of 0 does not imply that the variables are independent.
For example, suppose the random variable X is symmetrically distributed about zero, and Y = X². Then Y is completely determined by X, so that X and Y are perfectly dependent, but their correlation is zero; they are uncorrelated. However, in the special case when X and Y are jointly normal, uncorrelatedness is equivalent to independence.
Even though uncorrelated data does not necessarily imply independence, one can check if random variables are independent if their mutual information is 0.
Sample correlation coefficient
Given a series of n measurements of the pair $(x_i, y_i)$ indexed by $i = 1, \ldots, n$, the sample correlation coefficient can be used to estimate the population Pearson correlation between X and Y. The sample correlation coefficient is defined as
$$r_{xy} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{(n-1)\,s_x s_y},$$
where $\bar{x}$ and $\bar{y}$ are the sample means of X and Y, and $s_x$ and $s_y$ are the corrected sample standard deviations of X and Y.
Equivalent expressions for $r_{xy}$ are
$$r_{xy} = \frac{\sum_{i=1}^{n} x_i y_i - n\bar{x}\bar{y}}{n\,\sigma'_x \sigma'_y} = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{\sqrt{n\sum x_i^2 - \bigl(\sum x_i\bigr)^2}\;\sqrt{n\sum y_i^2 - \bigl(\sum y_i\bigr)^2}},$$
where $\sigma'_x$ and $\sigma'_y$ are the uncorrected sample standard deviations of X and Y.
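As a quick illustration of these formulas (the data values below are made up for the example), the sample correlation can be computed directly and compared with a library routine.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 3.7, 5.2, 5.8])

# Definition via corrected sample standard deviations (ddof=1).
r = np.sum((x - x.mean()) * (y - y.mean())) / ((len(x) - 1) * x.std(ddof=1) * y.std(ddof=1))

print(r)                        # manual computation
print(np.corrcoef(x, y)[0, 1])  # same value from NumPy's built-in routine
```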
If and are results of measurements that contain measurement error, the realistic limits on the correlation coefficient are not −1 to +1 but a smaller range. For the case of a linear model with a single independent variable, the coefficient of determination (R squared) is the square of , Pearson's product-moment coefficient.
Example
Consider the joint probability distribution of X and Y given in the table below, with entries P(X = x, Y = y):

           x = −1   x = 0   x = 1
  y = 0      0       1/3      0
  y = 1     1/3       0      1/3
For this joint distribution, the marginal distributions are
$$\Pr(X = -1) = \Pr(X = 0) = \Pr(X = 1) = \tfrac{1}{3}, \qquad \Pr(Y = 0) = \tfrac{1}{3}, \quad \Pr(Y = 1) = \tfrac{2}{3}.$$
This yields the following expectations and variances:
$$\operatorname{E}(X) = 0, \quad \operatorname{E}(Y) = \tfrac{2}{3}, \quad \operatorname{Var}(X) = \tfrac{2}{3}, \quad \operatorname{Var}(Y) = \tfrac{2}{9}.$$
Therefore:
$$\rho_{X,Y} = \frac{\operatorname{E}(XY) - \operatorname{E}(X)\operatorname{E}(Y)}{\sigma_X \sigma_Y} = \frac{0 - 0\cdot\tfrac{2}{3}}{\sigma_X \sigma_Y} = 0.$$
Rank correlation coefficients
Rank correlation coefficients, such as Spearman's rank correlation coefficient and Kendall's rank correlation coefficient (τ) measure the extent to which, as one variable increases, the other variable tends to increase, without requiring that increase to be represented by a linear relationship. If, as the one variable increases, the other decreases, the rank correlation coefficients will be negative. It is common to regard these rank correlation coefficients as alternatives to Pearson's coefficient, used either to reduce the amount of calculation or to make the coefficient less sensitive to non-normality in distributions. However, this view has little mathematical basis, as rank correlation coefficients measure a different type of relationship than the Pearson product-moment correlation coefficient, and are best seen as measures of a different type of association, rather than as an alternative measure of the population correlation coefficient.
To illustrate the nature of rank correlation, and its difference from linear correlation, consider the following four pairs of numbers :
(0, 1), (10, 100), (101, 500), (102, 2000).
As we go from each pair to the next pair increases, and so does . This relationship is perfect, in the sense that an increase in is always accompanied by an increase in . This means that we have a perfect rank correlation, and both Spearman's and Kendall's correlation coefficients are 1, whereas in this example Pearson product-moment correlation coefficient is 0.7544, indicating that the points are far from lying on a straight line. In the same way if always decreases when increases, the rank correlation coefficients will be −1, while the Pearson product-moment correlation coefficient may or may not be close to −1, depending on how close the points are to a straight line. Although in the extreme cases of perfect rank correlation the two coefficients are both equal (being both +1 or both −1), this is not generally the case, and so values of the two coefficients cannot meaningfully be compared. For example, for the three pairs (1, 1) (2, 3) (3, 2) Spearman's coefficient is 1/2, while Kendall's coefficient is 1/3.
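These coefficients for the four pairs above are easy to check numerically; the sketch below uses SciPy, assuming it is available.

```python
import numpy as np
from scipy import stats

x = np.array([0, 10, 101, 102])
y = np.array([1, 100, 500, 2000])

print(stats.pearsonr(x, y)[0])    # about 0.7544: the points are far from a straight line
print(stats.spearmanr(x, y)[0])   # 1.0: the ranks agree perfectly
print(stats.kendalltau(x, y)[0])  # 1.0: every pair of observations is concordant
```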
Other measures of dependence among random variables
The information given by a correlation coefficient is not enough to define the dependence structure between random variables. The correlation coefficient completely defines the dependence structure only in very particular cases, for example when the distribution is a multivariate normal distribution. (See diagram above.) In the case of elliptical distributions it characterizes the (hyper-)ellipses of equal density; however, it does not completely characterize the dependence structure (for example, a multivariate t-distribution's degrees of freedom determine the level of tail dependence).
For continuous variables, multiple alternative measures of dependence have been introduced to address the deficiency of Pearson's correlation that it can be zero for dependent random variables. They all share the important property that a value of zero implies independence. This has led some authors to recommend their routine usage, particularly of distance correlation. Another alternative measure is the Randomized Dependence Coefficient (RDC). The RDC is a computationally efficient, copula-based measure of dependence between multivariate random variables and is invariant with respect to non-linear scalings of random variables.
One important disadvantage of the alternative, more general measures is that, when used to test whether two variables are associated, they tend to have lower power compared to Pearson's correlation when the data follow a multivariate normal distribution. This is an implication of the no-free-lunch theorem. To detect all kinds of relationships, these measures have to sacrifice power on other relationships, particularly for the important special case of a linear relationship with Gaussian marginals, for which Pearson's correlation is optimal. Another problem concerns interpretation. While Pearson's correlation can be interpreted for all values, the alternative measures can generally only be interpreted meaningfully at the extremes.
For two binary variables, the odds ratio measures their dependence, and takes values in the range of non-negative numbers, possibly infinity: [0, +∞]. Related statistics such as Yule's Y and Yule's Q normalize this to the correlation-like range [−1, 1]. The odds ratio is generalized by the logistic model to model cases where the dependent variables are discrete and there may be one or more independent variables.
The correlation ratio, entropy-based mutual information, total correlation, dual total correlation and polychoric correlation are all also capable of detecting more general dependencies, as is consideration of the copula between them, while the coefficient of determination generalizes the correlation coefficient to multiple regression.
Sensitivity to the data distribution
The degree of dependence between variables X and Y does not depend on the scale on which the variables are expressed. That is, if we are analyzing the relationship between X and Y, most correlation measures are unaffected by transforming X to a + bX and Y to c + dY, where a, b, c, and d are constants (b and d being positive). This is true of some correlation statistics as well as their population analogues. Some correlation statistics, such as the rank correlation coefficient, are also invariant to monotone transformations of the marginal distributions of X and/or Y.
Most correlation measures are sensitive to the manner in which and are sampled. Dependencies tend to be stronger if viewed over a wider range of values. Thus, if we consider the correlation coefficient between the heights of fathers and their sons over all adult males, and compare it to the same correlation coefficient calculated when the fathers are selected to be between 165 cm and 170 cm in height, the correlation will be weaker in the latter case. Several techniques have been developed that attempt to correct for range restriction in one or both variables, and are commonly used in meta-analysis; the most common are Thorndike's case II and case III equations.
Various correlation measures in use may be undefined for certain joint distributions of and . For example, the Pearson correlation coefficient is defined in terms of moments, and hence will be undefined if the moments are undefined. Measures of dependence based on quantiles are always defined. Sample-based statistics intended to estimate population measures of dependence may or may not have desirable statistical properties such as being unbiased, or asymptotically consistent, based on the spatial structure of the population from which the data were sampled.
Sensitivity to the data distribution can be used to an advantage. For example, scaled correlation is designed to use the sensitivity to the range in order to pick out correlations between fast components of time series. By reducing the range of values in a controlled manner, the correlations on long time scales are filtered out and only the correlations on short time scales are revealed.
Correlation matrices
The correlation matrix of n random variables $X_1, \ldots, X_n$ is the $n \times n$ matrix whose $(i,j)$ entry is
$$c_{ij} = \operatorname{corr}(X_i, X_j) = \frac{\operatorname{cov}(X_i, X_j)}{\sigma(X_i)\,\sigma(X_j)}.$$
Thus the diagonal entries are all identically one. If the measures of correlation used are product-moment coefficients, the correlation matrix is the same as the covariance matrix of the standardized random variables $X_i/\sigma(X_i)$ for $i = 1, \ldots, n$. This applies both to the matrix of population correlations (in which case $\sigma$ is the population standard deviation), and to the matrix of sample correlations (in which case $\sigma$ denotes the sample standard deviation). Consequently, each is necessarily a positive-semidefinite matrix. Moreover, the correlation matrix is strictly positive definite if no variable can have all its values exactly generated as a linear function of the values of the others.
The correlation matrix is symmetric because the correlation between and is the same as the correlation between and .
A correlation matrix appears, for example, in one formula for the coefficient of multiple determination, a measure of goodness of fit in multiple regression.
In statistical modelling, correlation matrices representing the relationships between variables are categorized into different correlation structures, which are distinguished by factors such as the number of parameters required to estimate them. For example, in an exchangeable correlation matrix, all pairs of variables are modeled as having the same correlation, so all non-diagonal elements of the matrix are equal to each other. On the other hand, an autoregressive matrix is often used when variables represent a time series, since correlations are likely to be greater when measurements are closer in time. Other examples include independent, unstructured, M-dependent, and Toeplitz.
In exploratory data analysis, the iconography of correlations consists in replacing a correlation matrix by a diagram where the "remarkable" correlations are represented by a solid line (positive correlation), or a dotted line (negative correlation).
Nearest valid correlation matrix
In some applications (e.g., building data models from only partially observed data) one wants to find the "nearest" correlation matrix to an "approximate" correlation matrix (e.g., a matrix which typically lacks semi-definite positiveness due to the way it has been computed).
In 2002, Higham formalized the notion of nearness using the Frobenius norm and provided a method for computing the nearest correlation matrix using Dykstra's projection algorithm, of which an implementation is available as an online Web API.
This sparked interest in the subject, with new theoretical (e.g., computing the nearest correlation matrix with factor structure) and numerical (e.g., using Newton's method to compute the nearest correlation matrix) results obtained in the subsequent years.
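As a rough illustration of the idea (this is not Higham's full algorithm, which adds a Dykstra correction to the alternating projections), one can alternately project an indefinite "approximate" correlation matrix onto the positive semidefinite cone and onto the set of matrices with unit diagonal; the example matrix below is invented for illustration.

```python
import numpy as np

def near_correlation(A, iters=100):
    """Crude nearest-correlation-matrix sketch via alternating projections
    (no Dykstra correction, so it need not reach the true Frobenius-nearest matrix)."""
    X = (A + A.T) / 2
    for _ in range(iters):
        w, V = np.linalg.eigh(X)
        X = V @ np.diag(np.clip(w, 0, None)) @ V.T   # project onto the PSD cone
        np.fill_diagonal(X, 1.0)                     # project onto unit-diagonal matrices
    return X

# An "approximate" correlation matrix that is not positive semidefinite.
A = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, -0.9],
              [0.7, -0.9, 1.0]])
C = near_correlation(A)
print(np.linalg.eigvalsh(C))   # eigenvalues are now approximately non-negative
```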
Uncorrelatedness and independence of stochastic processes
Similarly, for two stochastic processes $\{X_t\}$ and $\{Y_t\}$: if they are independent, then they are uncorrelated. The converse, however, is not necessarily true: even if two processes are uncorrelated, they might not be independent of each other.
Common misconceptions
Correlation and causality
The conventional dictum that "correlation does not imply causation" means that correlation cannot be used by itself to infer a causal relationship between the variables. This dictum should not be taken to mean that correlations cannot indicate the potential existence of causal relations. However, the causes underlying the correlation, if any, may be indirect and unknown, and high correlations also overlap with identity relations (tautologies), where no causal process exists. Consequently, a correlation between two variables is not a sufficient condition to establish a causal relationship (in either direction).
A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so. Does improved mood lead to improved health, or does good health lead to good mood, or both? Or does some other factor underlie both? In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be.
Simple linear correlations
The Pearson correlation coefficient indicates the strength of a linear relationship between two variables, but its value generally does not completely characterize their relationship. In particular, if the conditional mean of Y given X, denoted $\operatorname{E}(Y \mid X)$, is not linear in X, the correlation coefficient will not fully determine the form of $\operatorname{E}(Y \mid X)$.
Scatter plots of Anscombe's quartet, a set of four different pairs of variables created by Francis Anscombe, illustrate this point. The four y variables have the same mean (7.5), variance (4.12), correlation (0.816) and regression line (y = 3 + 0.5x). However, as can be seen on the plots, the distribution of the variables is very different. The first one (top left) seems to be distributed normally, and corresponds to what one would expect when considering two variables correlated and following the assumption of normality. The second one (top right) is not distributed normally; while an obvious relationship between the two variables can be observed, it is not linear. In this case the Pearson correlation coefficient does not indicate that there is an exact functional relationship: only the extent to which that relationship can be approximated by a linear relationship. In the third case (bottom left), the linear relationship is perfect, except for one outlier which exerts enough influence to lower the correlation coefficient from 1 to 0.816. Finally, the fourth example (bottom right) shows another case where one outlier is enough to produce a high correlation coefficient, even though the relationship between the two variables is not linear.
These examples indicate that the correlation coefficient, as a summary statistic, cannot replace visual examination of the data. The examples are sometimes said to demonstrate that the Pearson correlation assumes that the data follow a normal distribution, but this is only partially correct. The Pearson correlation can be accurately calculated for any distribution that has a finite covariance matrix, which includes most distributions encountered in practice. However, the Pearson correlation coefficient (taken together with the sample mean and variance) is only a sufficient statistic if the data is drawn from a multivariate normal distribution. As a result, the Pearson correlation coefficient fully characterizes the relationship between variables if and only if the data are drawn from a multivariate normal distribution.
Bivariate normal distribution
If a pair of random variables (X, Y) follows a bivariate normal distribution, the conditional mean $\operatorname{E}(X \mid Y)$ is a linear function of Y, and the conditional mean $\operatorname{E}(Y \mid X)$ is a linear function of X. The correlation coefficient ρ between X and Y, together with the marginal means and variances of X and Y, determines this linear relationship:
$$\operatorname{E}(Y \mid X = x) = \mu_Y + \rho\,\frac{\sigma_Y}{\sigma_X}\,(x - \mu_X),$$
where $\mu_X$ and $\mu_Y$ are the expected values of X and Y respectively, and $\sigma_X$ and $\sigma_Y$ are the standard deviations of X and Y respectively.
The empirical correlation r is an estimate of the correlation coefficient ρ. A distribution estimate for ρ is given by a density that can be written in closed form in terms of the Gaussian hypergeometric function ${}_{2}F_{1}$.
This density is both a Bayesian posterior density and an exact optimal confidence distribution density.
| Mathematics | Statistics and probability | null |
157059 | https://en.wikipedia.org/wiki/Covariance | Covariance | In probability theory and statistics, covariance is a measure of the joint variability of two random variables.
The sign of the covariance, therefore, shows the tendency in the linear relationship between the variables. If greater values of one variable mainly correspond with greater values of the other variable, and the same holds for lesser values (that is, the variables tend to show similar behavior), the covariance is positive. In the opposite case, when greater values of one variable mainly correspond to lesser values of the other (that is, the variables tend to show opposite behavior), the covariance is negative. The magnitude of the covariance is the geometric mean of the variances that are in common for the two random variables. The correlation coefficient normalizes the covariance by dividing by the geometric mean of the total variances for the two random variables.
A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which in addition to serving as a descriptor of the sample, also serves as an estimated value of the population parameter.
Definition
For two jointly distributed real-valued random variables X and Y with finite second moments, the covariance is defined as the expected value (or mean) of the product of their deviations from their individual expected values:
$$\operatorname{cov}(X, Y) = \operatorname{E}\bigl[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\bigr],$$
where $\operatorname{E}[X]$ is the expected value of X, also known as the mean of X. The covariance is also sometimes denoted $\sigma_{XY}$ or $\sigma(X, Y)$, in analogy to variance. By using the linearity property of expectations, this can be simplified to the expected value of their product minus the product of their expected values:
$$\operatorname{cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y],$$
but this equation is susceptible to catastrophic cancellation (see the section on numerical computation below).
The units of measurement of the covariance are those of X times those of Y. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.)
Complex random variables
The covariance between two complex random variables Z and W is defined as
$$\operatorname{cov}(Z, W) = \operatorname{E}\bigl[(Z - \operatorname{E}[Z])\,\overline{(W - \operatorname{E}[W])}\bigr].$$
Notice the complex conjugation of the second factor in the definition.
A related pseudo-covariance can also be defined.
Discrete random variables
If the (real) random variable pair $(X, Y)$ can take on the values $(x_i, y_i)$ for $i = 1, \ldots, n$, with equal probabilities $1/n$, then the covariance can be equivalently written in terms of the means $\operatorname{E}[X]$ and $\operatorname{E}[Y]$ as
$$\operatorname{cov}(X, Y) = \frac{1}{n}\sum_{i=1}^{n}\bigl(x_i - \operatorname{E}[X]\bigr)\bigl(y_i - \operatorname{E}[Y]\bigr).$$
It can also be equivalently expressed, without directly referring to the means, as
$$\operatorname{cov}(X, Y) = \frac{1}{2n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\bigl(x_i - x_j\bigr)\bigl(y_i - y_j\bigr).$$
More generally, if there are n possible realizations of $(X, Y)$, namely $(x_i, y_i)$ for $i = 1, \ldots, n$, but with possibly unequal probabilities $p_i$, then the covariance is
$$\operatorname{cov}(X, Y) = \sum_{i=1}^{n} p_i\,\bigl(x_i - \operatorname{E}[X]\bigr)\bigl(y_i - \operatorname{E}[Y]\bigr).$$
In the case where two discrete random variables X and Y have a joint probability distribution, represented by elements $p_{ij}$ corresponding to the joint probabilities $\Pr(X = x_i, Y = y_j)$, the covariance is calculated using a double summation over the indices of the matrix:
$$\operatorname{cov}(X, Y) = \sum_{i}\sum_{j} p_{ij}\,\bigl(x_i - \operatorname{E}[X]\bigr)\bigl(y_j - \operatorname{E}[Y]\bigr).$$
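A small sketch of this double-summation formula follows. The joint probability table is invented for the example (the support 5–7 and 8–9 merely echoes the example discussed below; the probabilities here are hypothetical).

```python
import numpy as np

x_vals = np.array([5.0, 6.0, 7.0])
y_vals = np.array([8.0, 9.0])
# Hypothetical joint probabilities p[i, j] = P(X = x_i, Y = y_j); they sum to 1.
p = np.array([[0.10, 0.15],
              [0.20, 0.25],
              [0.05, 0.25]])

mean_x = np.sum(p.sum(axis=1) * x_vals)   # marginal mean of X
mean_y = np.sum(p.sum(axis=0) * y_vals)   # marginal mean of Y
cov = sum(p[i, j] * (x_vals[i] - mean_x) * (y_vals[j] - mean_y)
          for i in range(len(x_vals)) for j in range(len(y_vals)))
print(mean_x, mean_y, cov)
```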
Examples
Consider three independent random variables and two constants .
In the special case where the two variables are identical, the covariance is just the variance of that variable, and the name covariance is entirely appropriate.
Suppose that X and Y have the following joint probability mass function, in which the six central cells give the discrete joint probabilities of the six hypothetical realizations.
X can take on three values (5, 6 and 7) while Y can take on two (8 and 9). Their means are $\mu_X$ and $\mu_Y$, and the covariance can then be computed cell by cell from the double-summation formula above.
Properties
Covariance with itself
The variance is a special case of the covariance in which the two variables are identical:
$$\operatorname{cov}(X, X) = \operatorname{Var}(X) \equiv \sigma^2(X).$$
Covariance of linear combinations
If X, Y, W, and V are real-valued random variables and a, b, c, d are real-valued constants, then the following facts are a consequence of the definition of covariance:
$$\operatorname{cov}(X, a) = 0, \qquad \operatorname{cov}(X, X) = \operatorname{Var}(X), \qquad \operatorname{cov}(X, Y) = \operatorname{cov}(Y, X),$$
$$\operatorname{cov}(aX + bY,\, cW + dV) = ac\operatorname{cov}(X, W) + ad\operatorname{cov}(X, V) + bc\operatorname{cov}(Y, W) + bd\operatorname{cov}(Y, V).$$
For a sequence $X_1, \ldots, X_n$ of real-valued random variables and constants $a_1, \ldots, a_n$, we have
$$\operatorname{Var}\Bigl(\sum_{i=1}^{n} a_i X_i\Bigr) = \sum_{i=1}^{n} a_i^2\operatorname{Var}(X_i) + 2\sum_{i<j} a_i a_j\operatorname{cov}(X_i, X_j) = \sum_{i,j} a_i a_j\operatorname{cov}(X_i, X_j).$$
Hoeffding's covariance identity
A useful identity to compute the covariance between two random variables X, Y is Hoeffding's covariance identity:
$$\operatorname{cov}(X, Y) = \int_{\mathbb{R}}\int_{\mathbb{R}} \bigl(F_{(X,Y)}(x, y) - F_X(x)F_Y(y)\bigr)\,dx\,dy,$$
where $F_{(X,Y)}(x, y)$ is the joint cumulative distribution function of the random vector $(X, Y)$ and $F_X(x)$, $F_Y(y)$ are the marginals.
Uncorrelatedness and independence
Random variables whose covariance is zero are called uncorrelated. Similarly, the components of random vectors whose covariance matrix is zero in every entry outside the main diagonal are also called uncorrelated.
If X and Y are independent random variables, then their covariance is zero. This follows because under independence
$$\operatorname{E}[XY] = \operatorname{E}[X]\operatorname{E}[Y], \quad\text{so}\quad \operatorname{cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y] = 0.$$
The converse, however, is not generally true. For example, let X be uniformly distributed in $[-1, 1]$ and let $Y = X^2$. Clearly, X and Y are not independent, but
$$\operatorname{cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y] = \operatorname{E}[X^3] - 0\cdot\operatorname{E}[X^2] = 0.$$
In this case, the relationship between and is non-linear, while correlation and covariance are measures of linear dependence between two random variables. This example shows that if two random variables are uncorrelated, that does not in general imply that they are independent. However, if two variables are jointly normally distributed (but not if they are merely individually normally distributed), uncorrelatedness does imply independence.
Random variables X and Y whose covariance is positive are called positively correlated, which implies that if X is above its expected value then Y is also likely to be above its expected value. Conversely, X and Y with negative covariance are negatively correlated, and if X is above its expected value then Y is likely to be below its expected value.
Relationship to inner products
Many of the properties of covariance can be extracted elegantly by observing that it satisfies similar properties to those of an inner product:
bilinear: for constants a and b and random variables X, Y, Z, $\operatorname{cov}(aX + bY, Z) = a\operatorname{cov}(X, Z) + b\operatorname{cov}(Y, Z)$;
symmetric: $\operatorname{cov}(X, Y) = \operatorname{cov}(Y, X)$;
positive semi-definite: $\operatorname{cov}(X, X) = \operatorname{Var}(X) \ge 0$ for all random variables X, and $\operatorname{cov}(X, X) = 0$ implies that X is constant almost surely.
In fact these properties imply that the covariance defines an inner product over the quotient vector space obtained by taking the subspace of random variables with finite second moment and identifying any two that differ by a constant. (This identification turns the positive semi-definiteness above into positive definiteness.) That quotient vector space is isomorphic to the subspace of random variables with finite second moment and mean zero; on that subspace, the covariance is exactly the L2 inner product of real-valued functions on the sample space.
As a result, for random variables with finite variance, the inequality
$$\bigl|\operatorname{cov}(X, Y)\bigr| \le \sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}$$
holds via the Cauchy–Schwarz inequality.
Proof: If $\operatorname{Var}(Y) = 0$, then the inequality holds trivially. Otherwise, let the random variable
$$Z = X - \frac{\operatorname{cov}(X, Y)}{\operatorname{Var}(Y)}\,Y.$$
Then we have
$$0 \le \operatorname{Var}(Z) = \operatorname{Var}(X) - \frac{\operatorname{cov}(X, Y)^2}{\operatorname{Var}(Y)},$$
which rearranges to the stated inequality.
Calculating the sample covariance
The sample covariances among K variables based on N observations of each, drawn from an otherwise unobserved population, are given by the $K \times K$ matrix with the entries
$$q_{jk} = \frac{1}{N-1}\sum_{i=1}^{N}\bigl(X_{ij} - \bar{X}_j\bigr)\bigl(X_{ik} - \bar{X}_k\bigr),$$
which is an estimate of the covariance between variable j and variable k.
The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector $\mathbf{X}$, a vector whose jth element ($j = 1, \ldots, K$) is one of the random variables. The reason the sample covariance matrix has $N - 1$ in the denominator rather than $N$ is essentially that the population mean $\operatorname{E}(\mathbf{X})$ is not known and is replaced by the sample mean $\bar{\mathbf{X}}$. If the population mean $\operatorname{E}(\mathbf{X})$ is known, the analogous unbiased estimate is given by
$$q_{jk} = \frac{1}{N}\sum_{i=1}^{N}\bigl(X_{ij} - \operatorname{E}(X_j)\bigr)\bigl(X_{ik} - \operatorname{E}(X_k)\bigr).$$
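A minimal check of the N − 1 convention against NumPy's built-in estimator; the data here are randomly generated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=5)
X = rng.normal(size=(200, 3))              # N = 200 observations of K = 3 variables

Xc = X - X.mean(axis=0)                    # subtract each variable's sample mean
S_manual = Xc.T @ Xc / (X.shape[0] - 1)    # divide by N - 1, as in the formula above

print(np.allclose(S_manual, np.cov(X, rowvar=False)))   # True: matches np.cov
```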
Generalizations
Auto-covariance matrix of real random vectors
For a vector $\mathbf{X} = (X_1, \ldots, X_m)^{\mathrm{T}}$ of m jointly distributed random variables with finite second moments, its auto-covariance matrix (also known as the variance–covariance matrix or simply the covariance matrix, also denoted by $\operatorname{K}_{\mathbf{XX}}$ or $\Sigma$) is defined as
$$\operatorname{cov}(\mathbf{X}, \mathbf{X}) = \operatorname{E}\bigl[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\mathrm{T}}\bigr] = \operatorname{E}[\mathbf{X}\mathbf{X}^{\mathrm{T}}] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{\mathrm{T}}.$$
Let $\mathbf{X}$ be a random vector with covariance matrix $\Sigma$, and let A be a matrix that can act on $\mathbf{X}$ on the left. The covariance matrix of the matrix-vector product $A\mathbf{X}$ is:
$$\operatorname{cov}(A\mathbf{X}, A\mathbf{X}) = A\,\Sigma\,A^{\mathrm{T}}.$$
This is a direct result of the linearity of expectation and is useful
when applying a linear transformation, such as a whitening transformation, to a vector.
Cross-covariance matrix of real random vectors
For real random vectors $\mathbf{X} \in \mathbb{R}^m$ and $\mathbf{Y} \in \mathbb{R}^n$, the $m \times n$ cross-covariance matrix is equal to
$$\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{E}\bigl[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^{\mathrm{T}}\bigr] = \operatorname{E}[\mathbf{X}\mathbf{Y}^{\mathrm{T}}] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^{\mathrm{T}},$$
where $\mathbf{Y}^{\mathrm{T}}$ is the transpose of the vector (or matrix) $\mathbf{Y}$.
The $(i, j)$-th element of this matrix is equal to the covariance $\operatorname{cov}(X_i, Y_j)$ between the i-th scalar component of $\mathbf{X}$ and the j-th scalar component of $\mathbf{Y}$. In particular, $\operatorname{cov}(\mathbf{Y}, \mathbf{X})$ is the transpose of $\operatorname{cov}(\mathbf{X}, \mathbf{Y})$.
Cross-covariance sesquilinear form of random vectors in a real or complex Hilbert space
More generally, let $H_1$ and $H_2$ be Hilbert spaces over $\mathbb{R}$ or $\mathbb{C}$ with inner products antilinear in the first variable, and let X and Y be $H_1$- and $H_2$-valued random variables, respectively.
Then the covariance of X and Y is the sesquilinear form on $H_1 \times H_2$
(antilinear in the first variable) given by
$$\operatorname{cov}(X, Y)(h_1, h_2) = \operatorname{E}\bigl[\langle h_1,\, X - \operatorname{E}[X]\rangle\,\langle Y - \operatorname{E}[Y],\, h_2\rangle\bigr], \qquad h_1 \in H_1,\ h_2 \in H_2.$$
Numerical computation
When $\operatorname{E}[XY] \approx \operatorname{E}[X]\operatorname{E}[Y]$, the equation $\operatorname{cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y]$ is prone to catastrophic cancellation if $\operatorname{E}[XY]$ and $\operatorname{E}[X]\operatorname{E}[Y]$ are not computed exactly, and thus should be avoided in computer programs when the data has not been centered before. Numerically stable algorithms should be preferred in this case.
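The cancellation problem and a simple remedy (shifting or centering the data before applying the product formula) can be sketched as follows; the offset, sample size, and seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=6)
n = 100_000
x = 1e9 + rng.normal(size=n)     # huge common offset, small fluctuations
y = 1e9 + rng.normal(size=n)

naive = np.mean(x * y) - np.mean(x) * np.mean(y)            # E[XY] - E[X]E[Y]: cancellation
kx, ky = x[0], y[0]                                         # any values near the means work
shifted = np.mean((x - kx) * (y - ky)) - np.mean(x - kx) * np.mean(y - ky)
two_pass = np.mean((x - x.mean()) * (y - y.mean()))         # centered (two-pass) formula

print(naive, shifted, two_pass)   # the naive value can be wildly off; the others agree
```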
Comments
The covariance is sometimes called a measure of "linear dependence" between the two random variables. That does not mean the same thing as in the context of linear algebra (see linear dependence). When the covariance is normalized, one obtains the Pearson correlation coefficient, which gives the goodness of the fit for the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence.
Applications
In genetics and molecular biology
Covariance is an important measure in biology. Certain sequences of DNA are conserved more than others among species, and thus to study secondary and tertiary structures of proteins, or of RNA structures, sequences are compared in closely related species. If sequence changes are found, or no changes at all are found, in noncoding RNA (such as microRNA), such sequences are found to be necessary for common structural motifs, such as an RNA loop. In genetics, covariance serves as a basis for computation of the genetic relationship matrix (GRM) (also known as a kinship matrix), enabling inference on population structure from a sample with no known close relatives as well as inference on the estimation of heritability of complex traits.
In the theory of evolution and natural selection, the Price equation describes how a genetic trait changes in frequency over time. The equation uses a covariance between a trait and fitness to give a mathematical description of evolution and natural selection. It provides a way to understand the effects that gene transmission and natural selection have on the proportion of genes within each new generation of a population.
In financial economics
Covariances play a key role in financial economics, especially in modern portfolio theory and in the capital asset pricing model. Covariances among various assets' returns are used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to (in a positive analysis) choose to hold in a context of diversification.
In meteorological and oceanographic data assimilation
The covariance matrix is important in estimating the initial conditions required for running weather forecast models, a procedure known as data assimilation. The 'forecast error covariance matrix' is typically constructed between perturbations around a mean state (either a climatological or ensemble mean). The 'observation error covariance matrix' is constructed to represent the magnitude of combined observational errors (on the diagonal) and the correlated errors between measurements (off the diagonal). This is an example of its widespread application to Kalman filtering and more general state estimation for time-varying systems.
In micrometeorology
The eddy covariance technique is a key atmospheric measurement technique in which the covariance between the instantaneous deviation in vertical wind speed from the mean value and the instantaneous deviation in gas concentration is the basis for calculating the vertical turbulent fluxes.
In signal processing
The covariance matrix is used to capture the spectral variability of a signal.
In statistics and image processing
The covariance matrix is used in principal component analysis to reduce feature dimensionality in data preprocessing.
| Mathematics | Statistics and probability | null |
157092 | https://en.wikipedia.org/wiki/Cross%20product | Cross product | In mathematics, the cross product or vector product (occasionally directed area product, to emphasize its geometric significance) is a binary operation on two vectors in a three-dimensional oriented Euclidean vector space (named here ), and is denoted by the symbol . Given two linearly independent vectors and , the cross product, (read "a cross b"), is a vector that is perpendicular to both and , and thus normal to the plane containing them. It has many applications in mathematics, physics, engineering, and computer programming. It should not be confused with the dot product (projection product).
The magnitude of the cross product equals the area of a parallelogram with the vectors for sides; in particular, the magnitude of the product of two perpendicular vectors is the product of their lengths. The units of the cross-product are the product of the units of each vector. If two vectors are parallel or are anti-parallel (that is, they are linearly dependent), or if either one has zero length, then their cross product is zero.
The cross product is anticommutative (that is, ) and is distributive over addition, that is, . The space together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket.
Like the dot product, it depends on the metric of Euclidean space, but unlike the dot product, it also depends on a choice of orientation (or "handedness") of the space (which is why an oriented space is needed). The resultant vector is invariant under rotation of basis. Due to the dependence on handedness, the cross product is said to be a pseudovector.
In connection with the cross product, the exterior product of vectors can be used in arbitrary dimensions (with a bivector or 2-form result) and is independent of the orientation of the space.
The product can be generalized in various ways, using the orientation and metric structure just as for the traditional 3-dimensional cross product; in n dimensions, one can take the product of n − 1 vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions. The cross product in seven dimensions has undesirable properties (e.g. it fails to satisfy the Jacobi identity), so it is not used in mathematical physics to represent quantities such as multi-dimensional space-time. (See below for other dimensions.)
Definition
The cross product of two vectors a and b is defined only in three-dimensional space and is denoted by a × b. In physics and applied mathematics, the wedge notation a ∧ b is often used (in conjunction with the name vector product), although in pure mathematics such notation is usually reserved for just the exterior product, an abstraction of the vector product to n dimensions.
The cross product is defined as a vector c that is perpendicular (orthogonal) to both a and b, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the vectors span.
The cross product is defined by the formula
$$\mathbf{a} \times \mathbf{b} = \|\mathbf{a}\|\,\|\mathbf{b}\|\,\sin(\theta)\,\mathbf{n},$$
where
θ is the angle between a and b in the plane containing them (hence, it is between 0° and 180°),
‖a‖ and ‖b‖ are the magnitudes of vectors a and b,
n is a unit vector perpendicular to the plane containing a and b, with direction such that the ordered set (a, b, n) is positively oriented.
If the vectors a and b are parallel (that is, the angle θ between them is either 0° or 180°), by the above formula, the cross product of a and b is the zero vector 0.
Direction
The direction of the vector n depends on the chosen orientation of the space. Conventionally, it is given by the right-hand rule, where one simply points the forefinger of the right hand in the direction of a and the middle finger in the direction of b. Then, the vector n is coming out of the thumb (see the adjacent picture). Using this rule implies that the cross product is anti-commutative; that is, . By pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb will be forced in the opposite direction, reversing the sign of the product vector.
As the cross product operator depends on the orientation of the space, in general the cross product of two vectors is not a "true" vector, but a pseudovector. See for more detail.
Names and origin
In 1842, William Rowan Hamilton first described the algebra of quaternions and the non-commutative Hamilton product. In particular, when the Hamilton product of two vectors (that is, pure quaternions with zero scalar part) is performed, it results in a quaternion with a scalar and vector part. The scalar and vector part of this Hamilton product corresponds to the negative of dot product and cross product of the two vectors.
In 1881, Josiah Willard Gibbs, and independently Oliver Heaviside, introduced the notation for both the dot product and the cross product using a period () and an "×" (), respectively, to denote them.
In 1877, to emphasize the fact that the result of a dot product is a scalar while the result of a cross product is a vector, William Kingdon Clifford coined the alternative names scalar product and vector product for the two operations. These alternative names are still widely used in the literature.
Both the cross notation (a × b) and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b. Conversely, a dot product a ⋅ b involves multiplications between corresponding components of a and b. As explained below, the cross product can be expressed in the form of a determinant of a special 3 × 3 matrix. According to Sarrus's rule, this involves multiplications between matrix elements identified by crossed diagonals.
Computing
Coordinate notation
If (i, j, k) is a positively oriented orthonormal basis, the basis vectors satisfy the following equalities
    i × j = k,  j × k = i,  k × i = j.
A mnemonic for these formulas is that they can be deduced from any one of them by a cyclic permutation of the basis vectors. This mnemonic applies also to many formulas given in this article.
The anticommutativity of the cross product implies that
    j × i = −k,  k × j = −i,  i × k = −j.
The anticommutativity of the cross product (and the obvious lack of linear independence) also implies that
    i × i = j × j = k × k = 0 (the zero vector).
These equalities, together with the distributivity and linearity of the cross product (though neither follows easily from the definition given above), are sufficient to determine the cross product of any two vectors a and b. Each vector can be defined as the sum of three orthogonal components parallel to the standard basis vectors:
    a = a1 i + a2 j + a3 k,  b = b1 i + b2 j + b3 k.
Their cross product can be expanded using distributivity:
This can be interpreted as the decomposition of into the sum of nine simpler cross products involving vectors aligned with i, j, or k. Each one of these nine cross products operates on two vectors that are easy to handle as they are either parallel or orthogonal to each other. From this decomposition, by using the above-mentioned equalities and collecting similar terms, we obtain:
meaning that the three scalar components of the resulting vector s = s1i + s2j + s3k = a × b are
    s1 = a2 b3 − a3 b2,  s2 = a3 b1 − a1 b3,  s3 = a1 b2 − a2 b1.
Using column vectors, we can represent the same result as a single equation between the column (s1, s2, s3)ᵀ and the column (a2b3 − a3b2, a3b1 − a1b3, a1b2 − a2b1)ᵀ.
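A minimal numerical sketch of these component formulas, written in Python with NumPy (an assumption of this sketch; the article itself does not prescribe any language or library):

    import numpy as np

    def cross(a, b):
        # Cross product from the component formulas s1, s2, s3 above.
        a1, a2, a3 = a
        b1, b2, b3 = b
        return np.array([a2 * b3 - a3 * b2,
                         a3 * b1 - a1 * b3,
                         a1 * b2 - a2 * b1])

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])
    print(cross(a, b))       # [-3.  6. -3.]
    print(np.cross(a, b))    # same result, using NumPy's built-in routine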
Matrix notation
The cross product can also be expressed as the formal determinant:
    | i   j   k  |
    | a1  a2  a3 |
    | b1  b2  b3 |
This determinant can be computed using Sarrus's rule or cofactor expansion. Using Sarrus's rule, it expands to
    a × b = (a2b3 − a3b2) i + (a3b1 − a1b3) j + (a1b2 − a2b1) k,
which gives the components of the resulting vector directly.
Using Levi-Civita tensors
In any basis, the cross-product is given by the tensorial formula where is the covariant Levi-Civita tensor (we note the position of the indices). That corresponds to the intrinsic formula given here.
In an orthonormal basis having the same orientation as the space, is given by the pseudo-tensorial formula where is the Levi-Civita symbol (which is a pseudo-tensor). That is the formula used for everyday physics but it works only for this special choice of basis.
In any orthonormal basis, is given by the pseudo-tensorial formula where indicates whether the basis has the same orientation as the space or not.
The latter formula avoids having to change the orientation of the space when the orientation of the orthonormal basis is reversed.
Properties
Geometric meaning
The magnitude of the cross product can be interpreted as the positive area of the parallelogram having a and b as sides (see Figure 1):
    ‖a × b‖ = ‖a‖ ‖b‖ |sin θ|.
Indeed, one can also compute the volume V of a parallelepiped having a, b and c as edges by using a combination of a cross product and a dot product, called scalar triple product (see Figure 2):
Since the result of the scalar triple product may be negative, the volume of the parallelepiped is given by its absolute value:
Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product can be thought of as a measure of perpendicularity in the same way that the dot product is a measure of parallelism. Given two unit vectors, their cross product has a magnitude of 1 if the two are perpendicular and a magnitude of zero if the two are parallel. The dot product of two unit vectors behaves just oppositely: it is zero when the unit vectors are perpendicular and 1 if the unit vectors are parallel.
Unit vectors enable two convenient identities: the dot product of two unit vectors yields the cosine (which may be positive or negative) of the angle between the two unit vectors. The magnitude of the cross product of the two unit vectors yields the sine (which will always be positive).
Algebraic properties
If the cross product of two vectors is the zero vector (that is, a × b = 0), then either one or both of the inputs is the zero vector (a = 0 or b = 0), or else they are parallel or antiparallel (a ∥ b) so that the sine of the angle between them is zero (θ = 0° or θ = 180°, and sin θ = 0).
The self cross product of a vector is the zero vector:
    a × a = 0.
The cross product is anticommutative,
    a × b = −(b × a),
distributive over addition,
    a × (b + c) = (a × b) + (a × c),
and compatible with scalar multiplication so that
    (r a) × b = a × (r b) = r (a × b).
It is not associative, but satisfies the Jacobi identity:
    a × (b × c) + b × (c × a) + c × (a × b) = 0.
Distributivity, linearity and Jacobi identity show that the R3 vector space together with vector addition and the cross product forms a Lie algebra, the Lie algebra of the real orthogonal group in 3 dimensions, SO(3).
The cross product does not obey the cancellation law; that is, a × b = a × c with a ≠ 0 does not imply b = c, but only that:
    a × (b − c) = 0.
This can be the case where b and c cancel, but additionally where a and b − c are parallel; that is, they are related by a scale factor t, leading to:
    c = b + t a,
for some scalar t.
If, in addition to a × b = a × c and a ≠ 0 as above, it is the case that a · b = a · c, then
    a · (b − c) = 0.
As b − c cannot be simultaneously parallel (for the cross product to be 0) and perpendicular (for the dot product to be 0) to a, it must be the case that b and c cancel: b = c.
From the geometrical definition, the cross product is invariant under proper rotations about the axis defined by a × b. In formulae:
    (Ra) × (Rb) = R(a × b), where R is a rotation matrix with det(R) = 1.
More generally, the cross product obeys the following identity under matrix transformations:
    (Ma) × (Mb) = det(M) (M⁻¹)ᵀ (a × b) = cof(M) (a × b),
where M is a 3-by-3 matrix, (M⁻¹)ᵀ is the transpose of the inverse and cof(M) is the cofactor matrix. It can be readily seen how this formula reduces to the former one if M is a rotation matrix. If M is a 3-by-3 symmetric matrix applied to a generic cross product a × b, the following relation holds true:
The cross product of two vectors lies in the null space of the matrix with the vectors as rows:
For the sum of two cross products, the following identity holds:
Differentiation
The product rule of differential calculus applies to any bilinear operation, and therefore also to the cross product:
    d/dt (a × b) = (da/dt) × b + a × (db/dt),
where a and b are vectors that depend on the real variable t.
Triple product expansion
The cross product is used in both forms of the triple product. The scalar triple product of three vectors is defined as
    a · (b × c).
It is the signed volume of the parallelepiped with edges a, b and c and as such the vectors can be used in any order that's an even permutation of the above ordering. The following therefore are equal:
    a · (b × c) = b · (c × a) = c · (a × b).
The vector triple product is the cross product of a vector with the result of another cross product, and is related to the dot product by the following formula
    a × (b × c) = b (a · c) − c (a · b).
The mnemonic "BAC minus CAB" is used to remember the order of the vectors in the right hand member. This formula is used in physics to simplify vector calculations. A special case, regarding gradients and useful in vector calculus, is
    ∇ × (∇ × f) = ∇(∇ · f) − ∇²f,
where ∇2 is the vector Laplacian operator.
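A quick numerical check of the "BAC minus CAB" identity; this is a Python/NumPy sketch (an assumed environment) with arbitrarily chosen vectors:

    import numpy as np

    rng = np.random.default_rng(0)
    a, b, c = rng.standard_normal((3, 3))

    lhs = np.cross(a, np.cross(b, c))           # a x (b x c)
    rhs = b * np.dot(a, c) - c * np.dot(a, b)   # "BAC minus CAB"
    print(np.allclose(lhs, rhs))                # True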
Other identities relate the cross product to the scalar triple product:
where I is the identity matrix.
Alternative formulation
The cross product and the dot product are related by:
    ‖a × b‖² = ‖a‖² ‖b‖² − (a · b)².
The right-hand side is the Gram determinant of a and b, the square of the area of the parallelogram defined by the vectors. This condition determines the magnitude of the cross product. Namely, since the dot product is defined, in terms of the angle θ between the two vectors, as:
the above given relationship can be rewritten as follows:
    ‖a × b‖² = ‖a‖² ‖b‖² (1 − cos² θ).
Invoking the Pythagorean trigonometric identity one obtains:
    ‖a × b‖ = ‖a‖ ‖b‖ |sin θ|,
which is the magnitude of the cross product expressed in terms of θ, equal to the area of the parallelogram defined by a and b (see definition above).
The combination of this requirement and the property that the cross product be orthogonal to its constituents a and b provides an alternative definition of the cross product.
Cross product inverse
For the cross product a × b = c, there are multiple vectors b that give the same value of c. As a result, it is not possible to rearrange this equation to yield a unique solution for b in terms of a and c. Nevertheless, it is possible to find a family of solutions for b, which are
    b = (c × a) / ‖a‖² + t a,
where t is an arbitrary constant.
This can be derived using the triple product expansion:
Rearrange to solve for b to give
The coefficient of the last term can be simplified to just the arbitrary constant to yield the result shown above.
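The family of solutions can be checked numerically; the sketch below (Python/NumPy, with vectors chosen arbitrarily for illustration) verifies that every member b0 + t a reproduces the same cross product with a:

    import numpy as np

    a = np.array([1.0, -2.0, 0.5])
    b = np.array([0.3, 1.0, 2.0])
    c = np.cross(a, b)                       # the "known" right-hand side

    # One particular solution plus the direction of the free parameter t
    b0 = np.cross(c, a) / np.dot(a, a)
    for t in (-1.0, 0.0, 2.5):
        candidate = b0 + t * a
        print(np.allclose(np.cross(a, candidate), c))   # True for every t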
Lagrange's identity
The relation
can be compared with another relation involving the right-hand side, namely Lagrange's identity expressed as
where a and b may be n-dimensional vectors. This also shows that the Riemannian volume form for surfaces is exactly the surface element from vector calculus. In the case where n = 3, combining these two equations results in the expression for the magnitude of the cross product in terms of its components:
The same result is found directly using the components of the cross product found from
In R3, Lagrange's equation is a special case of the multiplicativity of the norm in the quaternion algebra.
It is a special case of another formula, also sometimes called Lagrange's identity, which is the three dimensional case of the Binet–Cauchy identity:
If a = c and b = d, this simplifies to the formula above.
Infinitesimal generators of rotations
The cross product conveniently describes the infinitesimal generators of rotations in R3. Specifically, if n is a unit vector in R3 and R(φ, n) denotes a rotation about the axis through the origin specified by n, with angle φ (measured in radians, counterclockwise when viewed from the tip of n), then
    (d/dφ)|φ=0 R(φ, n) x = n × x
for every vector x in R3. The cross product with n therefore describes the infinitesimal generator of the rotations about n. These infinitesimal generators form the Lie algebra so(3) of the rotation group SO(3), and we obtain the result that the Lie algebra R3 with cross product is isomorphic to the Lie algebra so(3).
Alternative ways to compute
Conversion to matrix multiplication
The vector cross product also can be expressed as the product of a skew-symmetric matrix and a vector:
    a × b = [a]× b = ([b]×)ᵀ a,
where superscript T refers to the transpose operation, and [a]× is defined by:
    [a]× = [  0   −a3   a2 ]
           [  a3   0   −a1 ]
           [ −a2   a1   0  ]
The columns [a]×,i of the skew-symmetric matrix for a vector a can be also obtained by calculating the cross product with unit vectors. That is,
or
where ⊗ is the outer product operator.
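A short sketch of the conversion, assuming Python with NumPy; the helper name skew is ours, not a standard API:

    import numpy as np

    def skew(a):
        # [a]_x, the skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)
        return np.array([[    0, -a[2],  a[1]],
                         [ a[2],     0, -a[0]],
                         [-a[1],  a[0],     0]])

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([-1.0, 0.5, 4.0])
    print(np.allclose(skew(a) @ b, np.cross(a, b)))   # True
    print(np.allclose(skew(a).T, -skew(a)))           # skew-symmetry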
Also, if a is itself expressed as a cross product:
then
This result can be generalized to higher dimensions using geometric algebra. In particular in any dimension bivectors can be identified with skew-symmetric matrices, so the product between a skew-symmetric matrix and vector is equivalent to the grade-1 part of the product of a bivector and vector. In three dimensions bivectors are dual to vectors so the product is equivalent to the cross product, with the bivector instead of its vector dual. In higher dimensions the product can still be calculated but bivectors have more degrees of freedom and are not equivalent to vectors.
This notation is also often much easier to work with, for example, in epipolar geometry.
From the general properties of the cross product follows immediately that
and
and from fact that [a]× is skew-symmetric it follows that
The above-mentioned triple product expansion (bac–cab rule) can be easily proven using this notation.
As mentioned above, the Lie algebra R3 with cross product is isomorphic to the Lie algebra so(3), whose elements can be identified with the 3×3 skew-symmetric matrices. The map a → [a]× provides an isomorphism between R3 and so(3). Under this map, the cross product of 3-vectors corresponds to the commutator of 3x3 skew-symmetric matrices.
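The correspondence between the cross product and the matrix commutator can be spot-checked numerically; the following Python/NumPy sketch (with the hypothetical helper skew, as above) verifies [a]×[b]× − [b]×[a]× = [a × b]× for random vectors:

    import numpy as np

    def skew(a):
        return np.array([[    0, -a[2],  a[1]],
                         [ a[2],     0, -a[0]],
                         [-a[1],  a[0],     0]])

    rng = np.random.default_rng(1)
    a, b = rng.standard_normal((2, 3))

    commutator = skew(a) @ skew(b) - skew(b) @ skew(a)
    print(np.allclose(commutator, skew(np.cross(a, b))))   # True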
Matrix conversion for cross product with canonical base vectors
Denoting by êi the i-th canonical base vector, the cross product of a generic vector v with êi is given by a fixed skew-symmetric 3 × 3 matrix acting on v; there is one such matrix for each basis vector.
These matrices share the following properties:
(skew-symmetric);
Both trace and determinant are zero;
;
(see below);
The orthogonal projection matrix of a vector is given by . The projection matrix onto the orthogonal complement is given by , where is the identity matrix. For the special case of , it can be verified that
For other properties of orthogonal projection matrices, see projection (linear algebra).
Index notation for tensors
The cross product can alternatively be defined in terms of the Levi-Civita tensor Eijk and a dot product ηmi, which are useful in converting vector notation for tensor applications:
where the indices correspond to vector components. This characterization of the cross product is often expressed more compactly using the Einstein summation convention as
in which repeated indices are summed over the values 1 to 3.
In a positively-oriented orthonormal basis ηmi = δmi (the Kronecker delta) and Eijk = εijk (the Levi-Civita symbol). In that case, this representation is another form of the skew-symmetric representation of the cross product:
    (a × b)i = εijk aj bk = ([a]× b)i.
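As an illustration, the Levi-Civita form can be evaluated directly with an explicit ε array and Einstein summation; this is a Python/NumPy sketch, not part of the article's own notation:

    import numpy as np

    # Levi-Civita symbol eps[i, j, k] in a positively oriented orthonormal basis
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])

    c = np.einsum('ijk,j,k->i', eps, a, b)   # c_i = eps_ijk a_j b_k
    print(np.allclose(c, np.cross(a, b)))    # True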
In classical mechanics: representing the cross product by using the Levi-Civita symbol can cause mechanical symmetries to be obvious when physical systems are isotropic. (An example: consider a particle in a Hooke's Law potential in three-space, free to oscillate in three dimensions; none of these dimensions are "special" in any sense, so symmetries lie in the cross-product-represented angular momentum, which are made clear by the abovementioned Levi-Civita representation).
Mnemonic
The word "xyzzy" can be used to remember the definition of the cross product.
If
where:
then:
The second and third equations can be obtained from the first by simply vertically rotating the subscripts, x → y → z → x. The problem, of course, is how to remember the first equation, and two options are available for this purpose: either to remember the relevant two diagonals of Sarrus's scheme (those containing i), or to remember the xyzzy sequence.
Since the first diagonal in Sarrus's scheme is just the main diagonal of the above-mentioned 3×3 matrix, the first three letters of the word xyzzy can be very easily remembered.
Cross visualization
Similarly to the mnemonic device above, a "cross" or X can be visualized between the two vectors in the equation. This may be helpful for remembering the correct cross product formula.
If
then:
If we want to obtain the formula for the x component, we simply drop the x components of a and b from the formula, and take the next two components down:
When doing this for the y component, the next two elements down should "wrap around" the matrix so that after the z component comes the x component. For clarity, when performing this operation for the y component, the next two components should be z and x (in that order), while for the z component the next two components should be taken as x and y.
For the x component then, if we visualize the cross operator as pointing from an element on the left to an element on the right, we can take the first element on the left and simply multiply by the element that the cross points to in the right-hand matrix. We then subtract the next element down on the left, multiplied by the element that the cross points to here as well. This results in our formula – (a × b)x = ay bz − az by.
We can do this in the same way for and to construct their associated formulas.
Applications
The cross product has applications in various contexts. For example, it is used in computational geometry, physics and engineering.
A non-exhaustive list of examples follows.
Computational geometry
The cross product appears in the calculation of the distance of two skew lines (lines not in the same plane) from each other in three-dimensional space.
The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in computer graphics. For example, the winding of a polygon (clockwise or anticlockwise) about a point within the polygon can be calculated by triangulating the polygon (like spoking a wheel) and summing the angles (between the spokes) using the cross product to keep track of the sign of each angle.
In computational geometry of the plane, the cross product is used to determine the sign of the acute angle defined by three points p1 = (x1, y1), p2 = (x2, y2) and p3 = (x3, y3). It corresponds to the direction (upward or downward) of the cross product of the two coplanar vectors defined by the two pairs of points (p1, p2) and (p1, p3). The sign of the acute angle is the sign of the expression
    P = (x2 − x1)(y3 − y1) − (y2 − y1)(x3 − x1),
which is the signed length of the cross product of the two vectors.
To use the cross product, simply extend the 2D vectors to co-planar 3D vectors by setting the z coordinate to 0 for each of them.
In the "right-handed" coordinate system, if the result is 0, the points are collinear; if it is positive, the three points constitute a positive angle of rotation around p1 from p2 to p3, otherwise a negative angle. From another point of view, the sign of P tells whether p3 lies to the left or to the right of line p1, p2.
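A small self-contained sketch of this orientation test (plain Python; the function name orientation is ours):

    def orientation(p1, p2, p3):
        # Sign of the z component of (p2 - p1) x (p3 - p1) for 2D points:
        #   > 0: p3 lies to the left of the directed line p1 -> p2 (counter-clockwise turn)
        #   < 0: p3 lies to the right (clockwise turn)
        #   = 0: the three points are collinear
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        return (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)

    print(orientation((0, 0), (1, 0), (0, 1)))   # 1  (left turn)
    print(orientation((0, 0), (1, 0), (2, 0)))   # 0  (collinear)
    print(orientation((0, 0), (1, 0), (1, -1)))  # -1 (right turn)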
The cross product is used in calculating the volume of a polyhedron such as a tetrahedron or parallelepiped.
Angular momentum and torque
The angular momentum L of a particle about a given origin is defined as:
    L = r × p,
where r is the position vector of the particle relative to the origin, and p is the linear momentum of the particle.
In the same way, the moment M of a force F applied at point B around point A is given as:
    M = rAB × F,
where rAB is the vector from A to B.
In mechanics the moment of a force is also called torque and is often written as τ.
Since position r, linear momentum p and force F are all true vectors, both the angular momentum L and the moment of a force M are pseudovectors or axial vectors.
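For illustration only, a tiny Python/NumPy sketch computing an angular momentum and a torque from made-up position, momentum and force vectors:

    import numpy as np

    r = np.array([2.0, 0.0, 0.0])   # position relative to the origin
    p = np.array([0.0, 3.0, 0.0])   # linear momentum
    F = np.array([0.0, 5.0, 0.0])   # applied force

    L = np.cross(r, p)              # angular momentum about the origin
    tau = np.cross(r, F)            # torque (moment of the force) about the origin
    print(L)      # [0. 0. 6.]
    print(tau)    # [0. 0. 10.]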
Rigid body
The cross product frequently appears in the description of rigid motions. Two points P and Q on a rigid body can be related by:
    vP = vQ + ω × (rP − rQ),
where r is a point's position, v is its velocity and ω is the body's angular velocity.
Since position and velocity are true vectors, the angular velocity is a pseudovector or axial vector.
Lorentz force
The cross product is used to describe the Lorentz force experienced by a moving electric charge q:
    F = q (E + v × B).
Since velocity v, force F and electric field E are all true vectors, the magnetic field B is a pseudovector.
Other
In vector calculus, the cross product is used to define the formula for the vector operator curl.
The trick of rewriting a cross product in terms of a matrix multiplication appears frequently in epipolar and multi-view geometry, in particular when deriving matching constraints.
As an external product
The cross product can be defined in terms of the exterior product. It can be generalized to an external product in other than three dimensions. This generalization allows a natural geometric interpretation of the cross product. In exterior algebra the exterior product of two vectors is a bivector. A bivector is an oriented plane element, in much the same way that a vector is an oriented line element. Given two vectors a and b, one can view the bivector a ∧ b as the oriented parallelogram spanned by a and b. The cross product is then obtained by taking the Hodge star of the bivector a ∧ b, mapping 2-vectors to vectors:
    a × b = ⋆(a ∧ b).
This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector. In a d-dimensional space, Hodge star takes a k-vector to a (d–k)-vector; thus only in d = 3 dimensions is the result an element of dimension one (3–2 = 1), i.e. a vector. For example, in d = 4 dimensions, the cross product of two vectors has dimension 4–2 = 2, giving a bivector. Thus, only in three dimensions does cross product define an algebra structure to multiply vectors.
Handedness
Consistency
When physics laws are written as equations, it is possible to make an arbitrary choice of the coordinate system, including handedness. One should be careful to never write down an equation where the two sides do not behave equally under all transformations that need to be considered. For example, if one side of the equation is a cross product of two polar vectors, one must take into account that the result is an axial vector. Therefore, for consistency, the other side must also be an axial vector. More generally, the result of a cross product may be either a polar vector or an axial vector, depending on the type of its operands (polar vectors or axial vectors). Namely, polar vectors and axial vectors are interrelated in the following ways under application of the cross product:
polar vector × polar vector = axial vector
axial vector × axial vector = axial vector
polar vector × axial vector = polar vector
axial vector × polar vector = polar vector
or symbolically
polar × polar = axial
axial × axial = axial
polar × axial = polar
axial × polar = polar
Because the cross product may also be a polar vector, it may not change direction with a mirror image transformation. This happens, according to the above relationships, if one of the operands is a polar vector and the other one is an axial vector. For instance, a vector triple product involving three polar vectors is a polar vector.
A handedness-free approach is possible using exterior algebra.
The paradox of the orthonormal basis
Let (i, j, k) be an orthonormal basis. The vectors i, j and k do not depend on the orientation of the space. They can even be defined in the absence of any orientation. They can not therefore be axial vectors. But if i and j are polar vectors, then k is an axial vector for i × j = k or j × i = k. This is a paradox.
"Axial" and "polar" are physical qualifiers for physical vectors; that is, vectors which represent physical quantities such as the velocity or the magnetic field. The vectors i, j and k are mathematical vectors, neither axial nor polar. In mathematics, the cross-product of two vectors is a vector. There is no contradiction.
Generalizations
There are several ways to generalize the cross product to higher dimensions.
Lie algebra
The cross product can be seen as one of the simplest Lie products, and is thus generalized by Lie algebras, which are axiomatized as binary products satisfying the axioms of multilinearity, skew-symmetry, and the Jacobi identity. Many Lie algebras exist, and their study is a major field of mathematics, called Lie theory.
For example, the Heisenberg algebra gives another Lie algebra structure on R3. In the basis {x, y, z} the product is [x, y] = z, [x, z] = [y, z] = 0.
Quaternions
The cross product can also be described in terms of quaternions.
In general, if a vector is represented as a pure quaternion (a quaternion with zero scalar part), the cross product of two vectors can be obtained by taking their product as quaternions and deleting the real part of the result. The real part will be the negative of the dot product of the two vectors.
Octonions
A cross product for 7-dimensional vectors can be obtained in the same way by using the octonions instead of the quaternions. The nonexistence of nontrivial vector-valued cross products of two vectors in other dimensions is related to the result from Hurwitz's theorem that the only normed division algebras are the ones with dimension 1, 2, 4, and 8.
Exterior product
In general dimension, there is no direct analogue of the binary cross product that yields specifically a vector. There is however the exterior product, which has similar properties, except that the exterior product of two vectors is now a 2-vector instead of an ordinary vector. As mentioned above, the cross product can be interpreted as the exterior product in three dimensions by using the Hodge star operator to map 2-vectors to vectors. The Hodge dual of the exterior product yields an (n − 2)-vector, which is a natural generalization of the cross product in any number of dimensions.
The exterior product and dot product can be combined (through summation) to form the geometric product in geometric algebra.
External product
As mentioned above, the cross product can be interpreted in three dimensions as the Hodge dual of the exterior product. In any finite n dimensions, the Hodge dual of the exterior product of n − 1 vectors is a vector. So, instead of a binary operation, in arbitrary finite dimensions, the cross product is generalized as the Hodge dual of the exterior product of some given n − 1 vectors. This generalization is called external product.
Commutator product
Interpreting the three-dimensional vector space of the algebra as the 2-vector (not the 1-vector) subalgebra of the three-dimensional geometric algebra, where i = e2e3, j = e3e1, and k = e1e2, the cross product corresponds exactly to the commutator product in geometric algebra and both use the same symbol ×. The commutator product is defined for 2-vectors A and B in geometric algebra as:
    A × B = ½ (AB − BA),
where AB is the geometric product.
The commutator product could be generalised to arbitrary multivectors in three dimensions, which results in a multivector consisting of only elements of grades 1 (1-vectors/true vectors) and 2 (2-vectors/pseudovectors). While the commutator product of two 1-vectors is indeed the same as the exterior product and yields a 2-vector, the commutator of a 1-vector and a 2-vector yields a true vector, corresponding instead to the left and right contractions in geometric algebra. The commutator product of two 2-vectors has no corresponding equivalent product, which is why the commutator product is defined in the first place for 2-vectors. Furthermore, the commutator triple product of three 2-vectors is the same as the vector triple product of the same three pseudovectors in vector algebra. However, the commutator triple product of three 1-vectors in geometric algebra is instead the negative of the vector triple product of the same three true vectors in vector algebra.
Generalizations to higher dimensions is provided by the same commutator product of 2-vectors in higher-dimensional geometric algebras, but the 2-vectors are no longer pseudovectors. Just as the commutator product/cross product of 2-vectors in three dimensions correspond to the simplest Lie algebra, the 2-vector subalgebras of higher dimensional geometric algebra equipped with the commutator product also correspond to the Lie algebras. Also as in three dimensions, the commutator product could be further generalised to arbitrary multivectors.
Multilinear algebra
In the context of multilinear algebra, the cross product can be seen as the (1,2)-tensor (a mixed tensor, specifically a bilinear map) obtained from the 3-dimensional volume form, a (0,3)-tensor, by raising an index.
In detail, the 3-dimensional volume form defines a product by taking the determinant of the matrix given by these 3 vectors. By duality, this is equivalent to a function (fixing any two inputs gives a function by evaluating on the third input) and in the presence of an inner product (such as the dot product; more generally, a non-degenerate bilinear form), we have an isomorphism and thus this yields a map which is the cross product: a (0,3)-tensor (3 vector inputs, scalar output) has been transformed into a (1,2)-tensor (2 vector inputs, 1 vector output) by "raising an index".
Translating the above algebra into geometry, the function "volume of the parallelepiped defined by (a, b, −)" (where the first two vectors are fixed and the last is an input), which defines a function from R3 to R, can be represented uniquely as the dot product with a vector: this vector is the cross product a × b. From this perspective, the cross product is defined by the scalar triple product,
    (a × b) · c = det(a, b, c).
In the same way, in higher dimensions one may define generalized cross products by raising indices of the n-dimensional volume form, which is a (0, n)-tensor.
The most direct generalizations of the cross product are to define either:
a (1, n − 1)-tensor, which takes as input n − 1 vectors, and gives as output 1 vector – an (n − 1)-ary vector-valued product, or
a (n − 2, 2)-tensor, which takes as input 2 vectors and gives as output a skew-symmetric tensor of rank n − 2 – a binary product with rank n − 2 tensor values. One can also define (k, n − k)-tensors for other k.
These products are all multilinear and skew-symmetric, and can be defined in terms of the determinant and parity.
The (n − 1)-ary product can be described as follows: given n − 1 vectors v1, ..., vn−1 in Rn, define their generalized cross product vn = v1 × ⋯ × vn−1 as:
perpendicular to the hyperplane defined by the vi,
magnitude is the volume of the parallelotope defined by the vi, which can be computed as the Gram determinant of the vi,
oriented so that (v1, ..., vn) is positively oriented.
This is the unique multilinear, alternating product which evaluates to e1 × ⋯ × en−1 = en, and so forth for cyclic permutations of indices.
In coordinates, one can give a formula for this -ary analogue of the cross product in Rn by:
This formula is identical in structure to the determinant formula for the normal cross product in R3 except that the row of basis vectors is the last row in the determinant rather than the first. The reason for this is to ensure that the ordered vectors (v1, ..., vn−1, Λvi) have a positive orientation with respect to (e1, ..., en). If n is odd, this modification leaves the value unchanged, so this convention agrees with the normal definition of the binary product. In the case that n is even, however, the distinction must be kept. This -ary form enjoys many of the same properties as the vector cross product: it is alternating and linear in its arguments, it is perpendicular to each argument, and its magnitude gives the hypervolume of the region bounded by the arguments. And just like the vector cross product, it can be defined in a coordinate independent way as the Hodge dual of the wedge product of the arguments. Moreover, the product satisfies the Filippov identity,
and so it endows Rn+1 with the structure of an n-Lie algebra.
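A sketch of this (n − 1)-ary product, computed by cofactor expansion of the formal determinant described above (Python/NumPy; the function name generalized_cross is ours, and the code assumes the basis-vectors-last convention just stated):

    import numpy as np

    def generalized_cross(*vectors):
        # (n-1)-ary cross product of n-1 vectors in R^n, via the formal determinant
        # whose first rows are the input vectors and whose last row holds the basis vectors.
        A = np.asarray(vectors, dtype=float)       # shape (n-1, n)
        n = A.shape[1]
        assert A.shape[0] == n - 1, "need exactly n-1 vectors from R^n"
        result = np.empty(n)
        for j in range(n):
            minor = np.delete(A, j, axis=1)        # remove column j
            result[j] = (-1) ** (n - 1 + j) * np.linalg.det(minor)
        return result

    # In R^3 this reduces to the ordinary cross product:
    print(generalized_cross([1, 2, 3], [4, 5, 6]))             # [-3.  6. -3.]

    # In R^4 the product of three vectors is perpendicular to all of them:
    print(generalized_cross([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]))   # [0. 0. 0. 1.]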
History
In 1773, Joseph-Louis Lagrange used the component form of both the dot and cross products in order to study the tetrahedron in three dimensions.
In 1843, William Rowan Hamilton introduced the quaternion product, and with it the terms vector and scalar. Given two quaternions [0, u] and [0, v], where u and v are vectors in R3, their quaternion product can be summarized as [−u · v, u × v]. James Clerk Maxwell used Hamilton's quaternion tools to develop his famous electromagnetism equations, and for this and other reasons quaternions for a time were an essential part of physics education.
In 1844, Hermann Grassmann published a geometric algebra not tied to dimension two or three. Grassmann developed several products, including a cross product.
Dot product
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors), and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called the inner product (or rarely the projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see Inner product space for more).
Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern geometry, Euclidean spaces are often defined by using vector spaces. In this case, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle between two vectors is the quotient of their dot product by the product of their lengths).
The name "dot product" is derived from the dot operator " · " that is often used to designate this operation; the alternative name "scalar product" emphasizes that the result is a scalar, rather than a vector (as with the vector product in three-dimensional space).
Definition
The dot product may be defined algebraically or geometrically. The geometric definition is based on the notions of angle and distance (magnitude) of vectors. The equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space.
In modern presentations of Euclidean geometry, the points of space are defined in terms of their Cartesian coordinates, and Euclidean space itself is commonly identified with the real coordinate space . In such a presentation, the notions of length and angle are defined by means of the dot product. The length of a vector is defined as the square root of the dot product of the vector by itself, and the cosine of the (non oriented) angle between two vectors of length one is defined as their dot product. So the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry.
Coordinate definition
The dot product of two vectors a = [a1, a2, ..., an] and b = [b1, b2, ..., bn], specified with respect to an orthonormal basis, is defined as:
    a · b = Σi ai bi,
where Σ denotes summation and n is the dimension of the vector space. For instance, in three-dimensional space, the dot product of vectors a = [a1, a2, a3] and b = [b1, b2, b3] is:
    a · b = a1 b1 + a2 b2 + a3 b3.
Likewise, the dot product of the vector a with itself is:
    a · a = a1² + a2² + a3².
If vectors are identified with column vectors, the dot product can also be written as a matrix product
    a · b = aᵀ b,
where aᵀ denotes the transpose of a.
Expressing the above example in this way, a 1 × 3 matrix (row vector) is multiplied by a 3 × 1 matrix (column vector) to get a 1 × 1 matrix that is identified with its unique entry:
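A small numerical sketch of the coordinate formula and of this row-times-column matrix form, in Python with NumPy (an assumed, not prescribed, environment):

    import numpy as np

    a = np.array([1.0, 3.0, -5.0])
    b = np.array([4.0, -2.0, -1.0])

    print(np.dot(a, b))                        # 1*4 + 3*(-2) + (-5)*(-1) = 3.0
    print(a @ b)                               # same result via the matrix-product operator
    print(a.reshape(1, 3) @ b.reshape(3, 1))   # 1x1 matrix [[3.]] (row vector times column vector)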
Geometric definition
In Euclidean space, a Euclidean vector is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction to which the arrow points. The magnitude of a vector a is denoted by ‖a‖. The dot product of two Euclidean vectors a and b is defined by
    a · b = ‖a‖ ‖b‖ cos θ,
where θ is the angle between a and b.
In particular, if the vectors a and b are orthogonal (i.e., their angle is 90° or π/2 radians), then cos θ = 0, which implies that
    a · b = 0.
At the other extreme, if they are codirectional, then the angle between them is zero with cos 0 = 1 and
    a · b = ‖a‖ ‖b‖.
This implies that the dot product of a vector a with itself is
    a · a = ‖a‖²,
which gives
    ‖a‖ = √(a · a),
the formula for the Euclidean length of the vector.
Scalar projection and first properties
The scalar projection (or scalar component) of a Euclidean vector a in the direction of a Euclidean vector b is given by
    ab = ‖a‖ cos θ,
where θ is the angle between a and b.
In terms of the geometric definition of the dot product, this can be rewritten as
    ab = a · b̂,
where b̂ = b / ‖b‖ is the unit vector in the direction of b.
The dot product is thus characterized geometrically by
The dot product, defined in this manner, is homogeneous under scaling in each variable, meaning that for any scalar α,
    (α a) · b = α (a · b) = a · (α b).
It also satisfies the distributive law, meaning that
    a · (b + c) = a · b + a · c.
These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that is never negative, and is zero if and only if , the zero vector.
Equivalence of the definitions
If e1, ..., en are the standard basis vectors in Rn, then we may write
    a = Σi ai ei,  b = Σi bi ei.
The vectors ei are an orthonormal basis, which means that they have unit length and are at right angles to each other. Since these vectors have unit length,
    ei · ei = 1,
and since they form right angles with each other, if i ≠ j,
    ei · ej = 0.
Thus in general, we can say that:
    ei · ej = δij,
where δij is the Kronecker delta.
Also, by the geometric definition, for any basis vector ei and a vector a, we note that
    a · ei = ‖a‖ cos θi = ai,
where ai is the component of vector a in the direction of ei. The last step in the equality can be seen from the figure.
Now applying the distributivity of the geometric version of the dot product gives
which is precisely the algebraic definition of the dot product. So the geometric dot product equals the algebraic dot product.
Properties
The dot product fulfills the following properties if a, b, and c are real vectors and r, c1 and c2 are scalars.
Commutative:
    a · b = b · a,
which follows from the definition (θ is the angle between a and b). The commutative property can also be easily proven with the algebraic definition, and in more general spaces (where the notion of angle might not be geometrically intuitive but an analogous product can be defined) the angle between two vectors can be defined as
    θ = arccos( (a · b) / (‖a‖ ‖b‖) ).
Bilinear (additive, distributive and scalar-multiplicative in both arguments)
Not associative Because the dot product is not defined between a scalar and a vector associativity is meaningless. However, bilinearity implies This property is sometimes called the "associative law for scalar and dot product", and one may say that "the dot product is associative with respect to scalar multiplication".
Orthogonal Two non-zero vectors a and b are orthogonal if and only if a · b = 0.
No cancellation
Unlike multiplication of ordinary numbers, where if ab = ac, then b always equals c unless a is zero, the dot product does not obey the cancellation law: If a · b = a · c and a ≠ 0, then we can write a · (b − c) = 0 by the distributive law; the result above says this just means that a is perpendicular to (b − c), which still allows (b − c) ≠ 0, and therefore allows b ≠ c.
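A concrete illustration of this failure of cancellation, as a Python/NumPy sketch with vectors chosen so that b − c is perpendicular to a:

    import numpy as np

    a = np.array([1.0, 0.0, 0.0])
    b = np.array([2.0, 3.0, 0.0])
    c = np.array([2.0, -7.0, 5.0])   # differs from b only in directions perpendicular to a

    print(np.dot(a, b), np.dot(a, c))   # 2.0 2.0  -- equal dot products
    print(np.array_equal(b, c))         # False    -- yet b != c
    print(np.dot(a, b - c))             # 0.0      -- because (b - c) is perpendicular to a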
Product rule If a and b are vector-valued differentiable functions, then the derivative (denoted by a prime ′) of a · b is given by the rule
    (a · b)′ = a′ · b + a · b′.
Application to the law of cosines
Given two vectors a and b separated by angle θ (see the upper image), they form a triangle with a third side c = a − b. Let a, b and c denote the lengths of a, b, and c, respectively. The dot product of this with itself is:
    c · c = (a − b) · (a − b) = a · a − 2 a · b + b · b = a² − 2ab cos θ + b²,
which is the law of cosines.
Triple product
There are two ternary operations involving dot product and cross product.
The scalar triple product of three vectors is defined as
    a · (b × c).
Its value is the determinant of the matrix whose columns are the Cartesian coordinates of the three vectors. It is the signed volume of the parallelepiped defined by the three vectors, and is isomorphic to the three-dimensional special case of the exterior product of three vectors.
The vector triple product is defined by
    a × (b × c) = b (a · c) − c (a · b).
This identity, also known as Lagrange's formula, may be remembered as "ACB minus ABC", keeping in mind which vectors are dotted together. This formula has applications in simplifying vector calculations in physics.
Physics
In physics, the dot product takes two vectors and returns a scalar quantity. It is also known as the "scalar product". The dot product of two vectors can be defined as the product of the magnitudes of the two vectors and the cosine of the angle between the two vectors. Thus,
    a · b = ‖a‖ ‖b‖ cos θ.
Alternatively, it is defined as the product of the projection of the first vector onto the second vector and the magnitude of the second vector.
For example:
Mechanical work is the dot product of force and displacement vectors,
Power is the dot product of force and velocity.
Generalizations
Complex vectors
For vectors with complex entries, using the given definition of the dot product would lead to quite different properties. For instance, the dot product of a vector with itself could be zero without the vector being the zero vector (e.g. this would happen with the vector a = [1, i]). This in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the dot product, through the alternative definition
    a · b = Σi āi bi,
where āi is the complex conjugate of ai. When vectors are represented by column vectors, the dot product can be expressed as a matrix product involving a conjugate transpose, denoted with the superscript H:
    a · b = aᴴ b.
In the case of vectors with real components, this definition is the same as in the real case. The dot product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However, the complex dot product is sesquilinear rather than bilinear, as it is conjugate linear and not linear in its first argument. The dot product is not symmetric, since
    a · b = conj(b · a).
The angle between two complex vectors is then given by
The complex dot product leads to the notions of Hermitian forms and general inner product spaces, which are widely used in mathematics and physics.
The self dot product of a complex vector a, involving the conjugate transpose of a row vector, is also known as the norm squared, a · a = ‖a‖², after the Euclidean norm; it is a vector generalization of the absolute square of a complex scalar (see also: Squared Euclidean distance).
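A brief Python/NumPy sketch of the complex dot product; numpy.vdot conjugates its first argument, which matches the sesquilinear definition used above:

    import numpy as np

    a = np.array([1.0 + 1j, 2.0 - 1j])
    b = np.array([3.0, 1j])

    print(np.vdot(a, b))        # sum(conj(a_i) * b_i) = (2-1j)
    print(np.vdot(b, a))        # the complex conjugate of the above: (2+1j), so the product is not symmetric
    print(np.vdot(a, a).real)   # squared Euclidean norm, always real and non-negative: 7.0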
Inner product
The inner product generalizes the dot product to abstract vector spaces over a field of scalars, being either the field of real numbers R or the field of complex numbers C. It is usually denoted using angular brackets by ⟨a, b⟩.
The inner product of two vectors over the field of complex numbers is, in general, a complex number, and is sesquilinear instead of bilinear. An inner product space is a normed vector space, and the inner product of a vector with itself is real and positive-definite.
Functions
The dot product is defined for vectors that have a finite number of entries. Thus these vectors can be regarded as discrete functions: a length-n vector u is, then, a function with domain {k ∈ N : 1 ≤ k ≤ n}, and ui is a notation for the image of i by the function/vector u.
This notion can be generalized to square-integrable functions: just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some measure space :
For example, if f and g are continuous functions over a compact subset K of Rn with the standard Lebesgue measure, the above definition becomes:
Generalized further to complex continuous functions and , by analogy with the complex inner product above, gives:
Weight function
Inner products can have a weight function (i.e., a function which weights each term of the inner product with a value). Explicitly, the inner product of functions and with respect to the weight function is
Dyadics and matrices
A double-dot product for matrices is the Frobenius inner product, which is analogous to the dot product on vectors. It is defined as the sum of the products of the corresponding components of two matrices A and B of the same size:
    A : B = Σi Σj Āij Bij.
And for real matrices,
    A : B = Σi Σj Aij Bij = tr(AᵀB).
Writing a matrix as a dyadic, we can define a different double-dot product; however, it is not an inner product.
Tensors
The inner product between a tensor of order n and a tensor of order m is a tensor of order n + m − 2; see Tensor contraction for details.
Computation
Algorithms
The straightforward algorithm for calculating a floating-point dot product of vectors can suffer from catastrophic cancellation. To avoid this, approaches such as the Kahan summation algorithm are used.
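A minimal sketch of a compensated dot product in plain Python; it applies Kahan summation to the accumulation step only (per-term multiplication error is not compensated), so it illustrates the idea rather than reproducing any particular library routine:

    def kahan_dot(a, b):
        # Dot product with Kahan (compensated) summation to reduce round-off error.
        total = 0.0
        compensation = 0.0
        for x, y in zip(a, b):
            term = x * y - compensation
            new_total = total + term
            compensation = (new_total - total) - term
            total = new_total
        return total

    print(kahan_dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))   # 32.0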
Libraries
A dot product function is included in:
BLAS level 1 real sdot, ddot; complex cdotu, cdotc, zdotu, zdotc
Fortran as dot_product(a, b) or sum(a * b)
Julia as a' * b or standard library LinearAlgebra as dot(a, b)
R (programming language) as sum(a * b) for vectors or, more generally for matrices, as a %*% b
Matlab as dot(a, b) or sum(a .* b) or a' * b
Python (package NumPy) as numpy.dot(a, b) or numpy.inner(a, b) or a @ b
GNU Octave as dot(a, b), and similar code as Matlab
Intel oneAPI Math Kernel Library real p?dot ; complex p?dotc
Cut, copy, and paste
Cut, copy, and paste are essential commands of modern human–computer interaction and user interface design. They offer an interprocess communication technique for transferring data through a computer's user interface. The cut command removes the selected data from its original position, and the copy command creates a duplicate; in both cases the selected data is kept in temporary storage called the clipboard. Clipboard data is later inserted wherever a paste command is issued. The data remains available to any application supporting the feature, thus allowing easy data transfer between applications.
The command names are an interface metaphor based on the physical procedure used in manuscript print editing to create a page layout, like with paper. The commands were pioneered into computing by Xerox PARC in 1974, popularized by Apple Computer in the 1983 Lisa workstation and the 1984 Macintosh computer, and in a few home computer applications such as the 1984 word processor Cut & Paste.
This interaction technique has close associations with related techniques in graphical user interfaces (GUIs) that use pointing devices such as a computer mouse (by drag and drop, for example). Typically, clipboard support is provided by an operating system as part of its GUI and widget toolkit.
The capability to replicate information with ease, changing it between contexts and applications, involves privacy concerns because of the risks of disclosure when handling sensitive information. Terms like cloning, copy forward, carry forward, or re-use refer to the dissemination of such information through documents, and may be subject to regulation by administrative bodies.
History
Origins
The term "cut and paste" comes from the traditional practice in manuscript editing, whereby people cut paragraphs from a page with scissors and paste them onto another page. This practice remained standard into the 1980s. Stationery stores sold "editing scissors" with blades long enough to cut an 8½"-wide page. The advent of photocopiers made the practice easier and more flexible.
The act of copying or transferring text from one part of a computer-based document ("buffer") to a different location within the same or different computer-based document was a part of the earliest on-line computer editors. As soon as computer data entry moved from punch-cards to online files (in the mid/late 1960s) there were "commands" for accomplishing this operation. This mechanism was often used to transfer frequently-used commands or text snippets from additional buffers into the document, as was the case with the QED text editor.
Early methods
The earliest editors (designed for teleprinter terminals) provided keyboard commands to delineate a contiguous region of text, then delete or move it. Since moving a region of text requires first removing it from its initial location and then inserting it into its new location, various schemes had to be invented to allow for this multi-step process to be specified by the user. Often this was done with a "move" command, but some text editors required that the text be first put into some temporary location for later retrieval/placement. In 1983, the Apple Lisa became the first text editing system to call that temporary location "the clipboard".
Earlier control schemes such as NLS used a verb—object command structure, where the command name was provided first and the object to be copied or moved was second. The inversion from verb—object to object—verb on which copy and paste are based, where the user selects the object to be operated before initiating the operation, was an innovation crucial for the success of the desktop metaphor as it allowed copy and move operations based on direct manipulation.
Popularization
Inspired by early line and character editors, such as Pentti Kanerva's TV-Edit, that broke a move or copy operation into two steps—between which the user could invoke a preparatory action such as navigation—Lawrence G. "Larry" Tesler proposed the names "cut" and "copy" for the first step and "paste" for the second step. Beginning in 1974, he and colleagues at Xerox PARC implemented several text editors that used cut/copy-and-paste commands to move and copy text.
Apple Computer popularized this paradigm with its Lisa (1983) and Macintosh (1984) operating systems and applications. The functions were mapped to key combinations using the Command key (⌘) as a special modifier, which is held down while also pressing X for cut, C for copy, or V for paste. These few keyboard shortcuts allow the user to perform all the basic editing operations, and the keys are clustered at the left end of the bottom row of the standard QWERTY keyboard.
These are the standard shortcuts:
Control-Z (or ⌘ Z) to undo
Control-X (or ⌘ X) to cut
Control-C (or ⌘ C) to copy
Control-V (or ⌘ V) to paste
The IBM Common User Access (CUA) standard also uses combinations of the Insert, Del, Shift and Control keys. Early versions of Windows used the IBM standard. Microsoft later also adopted the Apple key combinations with the introduction of Windows, using the control key as modifier key. For users migrating to Windows from DOS this was a big change as DOS users used the "COPY" and "MOVE" commands.
Similar patterns of key combinations, later borrowed by others, are widely available in most GUI applications.
The original cut, copy, and paste workflow, as implemented at PARC, utilizes a unique workflow: With two windows on the same screen, the user could use the mouse to pick a point at which to make an insertion in one window (or a segment of text to replace). Then, by holding shift and selecting the copy source elsewhere on the same screen, the copy would be made as soon as the shift was released. Similarly, holding shift and control would copy and cut (delete) the source. This workflow requires many fewer keystrokes/mouse clicks than the current multi-step workflows, and did not require an explicit copy buffer. It was dropped, one presumes, because the original Apple and IBM GUIs were not high enough density to permit multiple windows, as were the PARC machines, and so multiple simultaneous windows were rarely used.
Cut and paste
Computer-based editing can involve very frequent use of cut-and-paste operations. Most software-suppliers provide several methods for performing such tasks, and this can involve (for example) key combinations, pulldown menus, pop-up menus, or toolbar buttons.
The user selects or "highlights" the text or file for moving by some method, typically by dragging over the text or file name with the pointing-device or holding down the Shift key while using the arrow keys to move the text cursor.
The user performs a "cut" operation via key combination (⌘ X for Macintosh users), menu, or other means.
Visibly, "cut" text immediately disappears from its location. "Cut" files typically change color to indicate that they will be moved.
Conceptually, the text has now moved to a location often called the clipboard. The clipboard typically remains invisible. On most systems only one clipboard location exists, hence another cut or copy operation overwrites the previously stored information. Many UNIX text-editors provide multiple clipboard entries, as do some Macintosh programs such as Clipboard Master, and Windows clipboard-manager programs such as the one in Microsoft Office.
The user selects a location for insertion by some method, typically by clicking at the desired insertion point.
A paste operation takes place which visibly inserts the clipboard text at the insertion point. (The paste operation does not typically destroy the clipboard text: it remains available in the clipboard and the user can insert additional copies at other points).
Whereas cut-and-paste often takes place with a mouse-equivalent in Windows-like GUI environments, it may also occur entirely from the keyboard, especially in UNIX text editors, such as Pico or vi. Cutting and pasting without a mouse can involve a selection or the entire current line, but it may also involve text after the cursor until the end of the line and other more sophisticated operations.
The clipboard usually stays invisible, because the operations of cutting and pasting, while actually independent, usually take place in quick succession, and the user (usually) needs no assistance in understanding the operation or maintaining mental context. Some application programs provide a means of viewing, or sometimes even editing, the data on the clipboard.
Copy and paste
The term "copy-and-paste" refers to the popular, simple method of reproducing text or other data from a source to a destination. It differs from cut and paste in that the original source text or data does not get deleted or removed. The popularity of this method stems from its simplicity and the ease with which users can move data between various applications visually – without resorting to permanent storage.
Use in healthcare documentation and electronic health records is sensitive, with potential for the introduction of medical errors, information overload, and fraud.
Ramsey theory
Ramsey theory, named after the British mathematician and philosopher Frank P. Ramsey, is a branch of the mathematical field of combinatorics that focuses on the appearance of order in a substructure given a structure of a known size. Problems in Ramsey theory typically ask a question of the form: "how big must some structure be to guarantee that a particular property holds?"
Examples
A typical result in Ramsey theory starts with some mathematical structure that
is then cut into pieces. How big must the original structure be in order to ensure that at least one of the pieces has a given interesting property? This idea can be defined as partition regularity.
For example, consider a complete graph of order n; that is, there are n vertices and each vertex is connected to every other vertex by an edge. A complete graph of order 3 is called a triangle. Now colour each edge either red or blue. How large must n be in order to ensure that there is either a blue triangle or a red triangle? It turns out that the answer is 6. See the article on Ramsey's theorem for a rigorous proof.
Another way to express this result is as follows: at any party with at least six people, there are three people who are all either mutual acquaintances (each one knows the other two) or mutual strangers (none of them knows either of the other two). See theorem on friends and strangers.
This also is a special case of Ramsey's theorem, which says that for any given integer c, any given integers n1,...,nc, there is a number, R(n1,...,nc), such that if the edges of a complete graph of order R(n1,...,nc) are coloured with c different colours, then for some i between 1 and c, it must contain a complete subgraph of order ni whose edges are all colour i. The special case above has c = 2 and n1 = n2 = 3.
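The claim R(3, 3) = 6 can be verified by exhaustive search over all 2-colourings of the edges of K5 and K6; a plain-Python sketch (the function names are ours):

    from itertools import combinations, product

    def has_mono_triangle(n, colouring):
        # colouring maps each edge (i, j) with i < j to 0 (red) or 1 (blue)
        return any(colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
                   for a, b, c in combinations(range(n), 3))

    def every_colouring_forced(n):
        edges = list(combinations(range(n), 2))
        return all(has_mono_triangle(n, dict(zip(edges, colours)))
                   for colours in product((0, 1), repeat=len(edges)))

    print(every_colouring_forced(5))   # False: K5 admits a colouring with no monochromatic triangle
    print(every_colouring_forced(6))   # True:  every 2-colouring of K6 contains one, so R(3,3) = 6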
Results
Two key theorems of Ramsey theory are:
Van der Waerden's theorem: For any given c and n, there is a number V, such that if V consecutive numbers are coloured with c different colours, then it must contain an arithmetic progression of length n whose elements are all the same colour.
Hales–Jewett theorem: For any given n and c, there is a number H such that if the cells of an H-dimensional n×n×n×...×n cube are coloured with c colours, there must be one row, column, etc. of length n all of whose cells are the same colour. That is: a multi-player n-in-a-row tic-tac-toe cannot end in a draw, no matter how large n is, and no matter how many people are playing, if you play on a board with sufficiently many dimensions. The Hales–Jewett theorem implies Van der Waerden's theorem.
A theorem similar to van der Waerden's theorem is Schur's theorem: for any given c there is a number N such that if the numbers 1, 2, ..., N are coloured with c different colours, then there must be a pair of integers x, y such that x, y, and x+y are all the same colour. Many generalizations of this theorem exist, including Rado's theorem, Rado–Folkman–Sanders theorem, Hindman's theorem, and the Milliken–Taylor theorem. A classic reference for these and many other results in Ramsey theory is Graham, Rothschild, Spencer and Solymosi, updated and expanded in 2015 to its first new edition in 25 years.
Results in Ramsey theory typically have two primary characteristics. Firstly, they are non-constructive: they may show that some structure exists, but they give no process for finding this structure (other than brute-force search). For instance, the pigeonhole principle is of this form. Secondly, while Ramsey theory results do say that sufficiently large objects must necessarily contain a given structure, often the proof of these results requires these objects to be enormously large – bounds that grow exponentially, or even as fast as the Ackermann function, are not uncommon. In some small niche cases, upper and lower bounds have been improved, but not in general. In many cases these bounds are artifacts of the proof, and it is not known whether they can be substantially improved. In other cases it is known that any bound must be extraordinarily large, sometimes even greater than any primitive recursive function; see the Paris–Harrington theorem for an example. Graham's number, one of the largest numbers ever used in serious mathematical proof, is an upper bound for a problem related to Ramsey theory. Another large example is the Boolean Pythagorean triples problem.
Theorems in Ramsey theory are generally one of the following two types. Many such theorems, which are modeled after Ramsey's theorem itself, assert that in every partition of a large structured object, one of the classes necessarily contains its own structured object, but give no information about which class this is. In other cases, the reason behind a Ramsey-type result is that the largest partition class always contains the desired substructure. The results of this latter kind are called either density results or Turán-type results, after Turán's theorem. Notable examples include Szemerédi's theorem, which is such a strengthening of van der Waerden's theorem, and the density version of the Hales–Jewett theorem.
Pepper spray
Pepper spray, oleoresin capsicum spray, OC spray, capsaicin spray, mace, or capsicum spray is a lachrymator (tear gas) product containing the compound capsaicin as the active ingredient that irritates the eyes to cause burning and pain sensations, as well as temporary blindness. Its inflammatory effects cause the eyes to close, temporarily taking away vision. This temporary blindness allows officers to more easily restrain subjects and permits people in danger to use pepper spray in self-defense for an opportunity to escape. It also causes temporary discomfort and burning of the lungs which causes shortness of breath. Pepper spray is used as a less lethal weapon in policing, riot control, crowd control, and self-defense, including defense against dogs and bears.
Pepper spray was engineered originally for defense against bears, mountain lions, wolves and other dangerous predators, and is often referred to colloquially as bear spray.
Kamran Loghman, who developed it for use in riot control, wrote the guide for police departments on how it should be used. It was widely adopted, but there have been improper uses, such as when police sprayed peaceful protesters at the University of California, Davis in 2011. Loghman commented, "I have never seen such an inappropriate and improper use of chemical agents", and the incident prompted court rulings barring its use on docile persons.
Components
The active ingredient in pepper spray is capsaicin, which is derived from the fruit of plants in the genus Capsicum, including chilis in the form of oleoresin capsicum (OC). Extraction of OC from peppers requires capsicum to be finely ground, from which capsaicin is then extracted using an organic solvent such as ethanol. The solvent is then evaporated, and the remaining waxlike resin is the oleoresin capsaicin.
An emulsifier such as propylene glycol is used to suspend OC in water, and the suspension is then pressurized to make an aerosol pepper spray. Other sprays may use an alcohol (such as isopropyl alcohol) base for a more penetrating product, but a risk of fire is present if combined with a taser.
Determining the strength of pepper sprays made by different manufacturers can be confusing and difficult. Statements a company makes about its product's strength are not regulated.
The US federal government uses CRC (capsaicin and related capsaicinoids) content for regulation. CRC is the pain-producing component of the OC that produces the burning sensation. Personal pepper sprays can range from a low of 0.18% to a high of 3%. Most law enforcement pepper sprays use between 1.3% and 2%. The federal government of the United States has determined that bear attack deterrent sprays must contain at least 1.0% and not more than 2% CRC. Because the six different types of capsaicinoids under the CRC heading have different levels of potency (up to 2× on the SHU scale), the measurement does not fully represent the strength. Manufacturers do not state which particular types of capsaicinoids are used.
Using the OC concentration is unreliable because the concentration of CRC (and potency of these compounds) can vary. Some manufacturers may show a very high percentage of OC, but the resin itself may not be spicy enough. Higher OC content only reliably implies a higher oil content, which may be undesirable as the hydrophobic oil is less able to soak and penetrate skin. Solutions of more than 5% OC may not spray properly.
Scoville heat units (SHU) are a common indication of pepper spiciness. The scale does take into account the different potencies of CRC compounds, but it cannot be used reliably for pepper spray because it measures the strength of the dry product, i.e. the OC resin, not what comes out of the aerosol spray. As the resin is always diluted to make it sprayable, the SHU rating is not useful on its own.
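As a rough illustration of why dilution matters, the following Python sketch (not from any cited source) treats pungency as scaling linearly with the capsaicinoid fraction and uses the commonly cited approximation of about 16 million SHU for pure capsaicin; the resin rating and dilution in the example are hypothetical.

```python
# Rough illustration of why the SHU rating of the dry OC resin alone does not
# describe the finished spray: the rating scales (approximately linearly) with
# how much the resin is diluted in the carrier.
# Assumptions (not from the article): pure capsaicin is commonly cited at
# roughly 16,000,000 SHU, and capsaicinoid content is treated as the sole
# contributor to pungency.

PURE_CAPSAICIN_SHU = 16_000_000  # commonly cited approximation

def spray_shu(resin_shu: float, resin_fraction: float) -> float:
    """Approximate SHU of the finished spray given the resin rating and the
    weight fraction of resin in the formulation (e.g. 0.10 for a '10% OC' spray)."""
    return resin_shu * resin_fraction

def capsaicinoid_percent(shu: float) -> float:
    """Very rough capsaicinoid percentage implied by an SHU rating."""
    return 100 * shu / PURE_CAPSAICIN_SHU

resin = 1_000_000          # hypothetical '1 million SHU' OC resin
spray = spray_shu(resin, 0.10)
print(spray)                        # 100000.0 SHU for the diluted spray
print(capsaicinoid_percent(spray))  # ~0.6% capsaicinoids
```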
Counterparts
There are several counterparts of pepper spray developed and legal to possess in some countries.
In the United Kingdom, desmethyldihydrocapsaicin (known also as PAVA spray) is used by police officers. As a Section 5 weapon, it is not generally permitted to the public.
Pelargonic acid morpholide (MPK) is widely used as a self-defense chemical agent spray in Russia, though its effectiveness compared to natural pepper spray is unclear.
In China, Ministry of Public Security police units and security guards use tear gas ejectors with OC, CS or CN gases. These are defined as a "restricted" weapon that only police officers, as well as approved security, can use.
Types
Aerosol compound
Cone pattern dispersion - a wide pattern that does not require precise aim. It can be blown back by wind and, if used inside a building, will eventually make the room temporarily uninhabitable.
Fog pattern dispersion (fogger)
Stream pattern dispersion
Grenade
Gel compound: has greater accuracy and a reduced risk of blowback and area cross-contamination as the carrying gel does not disperse over a large area. The gel compound also adheres to the target making it more difficult to remove.
Foam compound
Effects
Pepper spray is an inflammatory agent. It inflames the mucous membranes in the eyes, nose, throat and lungs. It causes immediate closing of the eyes, difficulty breathing, runny nose, and coughing. The duration of its effects depends on the strength of the spray; the average full effect lasts from 20 to 90 minutes, but eye irritation and redness can last for up to 24 hours.
The Journal of Investigative Ophthalmology and Visual Science published a study that concluded that single exposure of the eye to OC is harmless, but repeated exposure can result in long-lasting changes in corneal sensitivity. They found no lasting decrease in visual acuity.
The European Parliament Scientific and Technological Options Assessment (STOA) published "An Appraisal of Technologies of Political Control" in 1998.
The STOA appraisal states:
"Past experience has shown that to rely on manufacturers unsubstantiated claims about the absence of hazards is unwise. In the US, companies making crowd control weapons, (e.g. pepper-gas manufacturer Zarc International), have put their technical data in the public domain without loss of profitability."
and
"Research on chemical irritants should be published in open scientific journals before authorization for any usage is permitted and that the safety criteria for such chemicals should be treated as if they were drugs rather than riot control agents;"
For those taking drugs, or those subjected to restraining techniques that restrict the breathing passages, there is a risk of death. In 1995, the Los Angeles Times reported at least 61 deaths associated with police use of pepper spray since 1990 in the USA. The American Civil Liberties Union (ACLU) documented 27 people in police custody who died after exposure to pepper spray in California since 1993. However, the ACLU report counts all deaths occurring within hours of exposure to pepper spray regardless of prior interaction, taser use, or if drugs are involved. In all 27 cases listed by the ACLU, the coroners' report listed other factors as the primary cause of death; in a few cases the use of pepper spray may have been a contributing factor.
The US Army performed studies in 1993 at Aberdeen Proving Ground, and a UNC study in 2000 stated that the compound in peppers, capsaicin, is mildly mutagenic, and that 10% of mice exposed to it developed cancer. While the study also found many beneficial effects of capsaicin, the Occupational Safety and Health Administration released statements declaring that exposure of employees to OC is an unnecessary health risk. As of 1999, it was in use by more than 2,000 public safety agencies.
The head of the FBI's Less-Than-Lethal Weapons Program at the time of the 1991 study, Special Agent Thomas W. W. Ward, was fired by the FBI and was sentenced to two months in prison for receiving payments from a pepper-gas manufacturer while conducting and authoring the FBI study that eventually approved pepper spray for FBI use. Prosecutors said that from December 1989 through 1990, Ward received about $5,000 a month for a total of $57,500, from Luckey Police Products, a Fort Lauderdale, Florida-based company that was a major producer and supplier of pepper spray. The payments were paid through a Florida company owned by Ward's wife.
Direct close-range spray can cause more serious eye irritation by attacking the cornea with a concentrated stream of liquid (the so-called "hydraulic needle" effect). Some brands have addressed this problem by means of an elliptically cone-shaped spray pattern.
Pepper spray has been associated with positional asphyxiation of individuals in police custody. There is much debate over the actual cause of death in these cases. There have been few controlled clinical studies of the human health effects of pepper spray marketed for police use, and those studies are contradictory. Some studies have found no harmful effects beyond the effects described above. Due to these studies and deaths, many law enforcement agencies have moved to include policies and training to prevent positional deaths. However, some scientific studies argue that the positional asphyxiation claim is a myth in the absence of pinpoint pressure on a person. A study by two universities stressed that no pressure should be applied to the neck area, and concluded that a person's own weight, with the rest of the body supported, is not enough to stop their breathing.
Acute response
For individuals not previously exposed to OC effects, the general feelings after being sprayed can be best likened to being "set alight". The initial reaction, should the spray be directed at the face, is the involuntary closing of the eyes, an instant sensation of the restriction of the airways and the general feeling of sudden and intense searing pain about the face, nose, and throat. This is due to irritation of mucous membranes. Many people experience fear and are disoriented due to sudden restriction of vision even though it is temporary. There is associated shortness of breath, although studies performed with asthmatics have not produced any asthma attacks in those individuals, and monitoring is still needed for the individuals after exposure. Police are trained to repeatedly instruct targets to breathe normally if they complain of difficulty, as the shock of the exposure can generate considerable panic as opposed to actual physical symptoms.
Treatment
Capsaicin is not soluble in water, and even large volumes of water will not wash it off, only dilute it. In general, victims are encouraged to blink vigorously in order to encourage tears, which will help flush the irritant from the eyes.
A study of five often-recommended treatments for skin pain (Maalox, 2% lidocaine gel, baby shampoo, milk, or water) concluded that: "...there was no significant difference in pain relief provided by five different treatment regimens. Time after exposure appeared to be the best predictor for a decrease in pain...".
Many ambulance services and emergency departments carry saline to remove the spray. Some of the OC and CS will remain in the respiratory system, but a recovery of vision and the coordination of the eyes can be expected within 7 to 15 minutes.
Some "triple-action" pepper sprays also contain "tear gas" (CS gas), which can be neutralized with sodium metabisulfite (Campden tablets), though it is not for use on a person, only for area clean up.
Use
Pepper spray typically comes in canisters, which are often small enough to be carried or concealed in a pocket or purse. Pepper spray can also be purchased concealed in items such as rings. There are also pepper spray projectiles available, which can be fired from a paintball gun or similar platform. It has been used for years against demonstrators and aggressive animals like bears. There are also many types such as foam, gel, foggers, and spray.
Oleoresin capsicum
Oleoresin capsicum, also known as capsicum oleoresin, is also used in food and medicine. In food, it serves as a concentrated and predictable source of spiciness. The food industry has accordingly changed to prefer a combination of milder and more predictable strains of jalapeno and OC for flavoring. In medicine, OC is used in a number of products for external use.
OC used for food is generally rated between 80,000 and 500,000 SHU, roughly equivalent to 0.6–3.9% capsaicin. Paprika oleoresin is a different extract, containing very little heat and mostly used for coloring.
Legality
Pepper spray is banned for use in war by Article I.5 of the Chemical Weapons Convention, which bans the use of all riot control agents in warfare whether lethal or less-than-lethal. Depending on the location, it may be legal to use for self-defense.
Africa
Nigeria: An Assistant Police Commissioner stated that pepper sprays are illegal for civilians to possess.
South Africa: Pepper sprays are legal to own by civilians for self defense.
Asia
Bangladesh:
Bengal Police started using pepper spray to control opposition movement.
China: Forbidden for civilians, it is used only by law enforcement agencies. Underground trade leads to some civilian self-defense use.
Hong Kong: Forbidden for civilians, it is legal to possess and use only by the members of Disciplined Services when on duty.
Such devices are classified as "arms" under Chapter 238 of the Laws of Hong Kong (the Firearms and Ammunition Ordinance). Without a valid license from the Hong Kong Police Force, possession is a crime that can result in a fine of $100,000 and imprisonment for up to 14 years.
India: Legal
They are sold via government-approved companies after performing a background verification.
Indonesia: It is legal, but there are restrictions on its sale and possession.
Iran: Forbidden for civilians, it is used only by the police.
Israel: OC and CS spray cans may be purchased by any member of the public without restriction and carried in public.
In the 1980s, a firearms license was required for doing so, but these sprays have since been deregulated.
Japan: There are no laws specifically prohibiting possession, but using it could result in imprisonment, depending on the damage caused to the target.
Malaysia: Use and possession of pepper spray for self-defense are legal.
Mongolia: Possession and use for self-defense are legal, and it is freely available in stores.
Pakistan: Possession and use for self-defense are legal, and it is available at physical and online stores.
Philippines: Possession and use for self-defense is legal, and it is freely available in stores.
Saudi Arabia: Use and possession of pepper spray for self-defense are legal.
It is an offense to use pepper spray on anyone for reasons other than self-defense.
Singapore: Travellers are prohibited from bringing pepper spray into the country, and it is illegal for the public to possess it.
South Korea: Pepper sprays containing OC are legal.
A permit is required to distribute, own, or carry pepper sprays containing pre-compressed gas or an explosive propellant.
Pepper sprays without any pre-compressed gas or explosive propellant are unrestricted.
Thailand: Use for self-defense is legal, and it is freely available in stores.
Possession in a public place can be punished by confiscation and a fine.
Taiwan: Legal for self-defense, it is available in some shops.
It is an offense to use pepper spray on anyone for reasons other than self-defense.
Vietnam: Forbidden for civilians and used only by the police.
Europe
Austria: Pepper spray is classified as a self-defense device; it may be owned and carried by adults without registration or permission. Justified use against humans in self-defense is allowed.
Belgium: Pepper spray is classified as a prohibited weapon.
It is illegal for anyone other than police officers, police agents (assistant police officers), security officers of public transport companies, soldiers, and customs officers to carry a capsicum spray. Possession can also be authorised after obtaining permission from the Minister of Internal Affairs.
Czech Republic: Possession and carrying is legal.
Police also encourage vulnerable groups like pensioners, children, and women to carry pepper spray.
Carrying at public demonstrations and into court buildings is illegal (pepper spray, like other weapons, may be left with the armed guard upon entering a courthouse).
Denmark: Pepper spray is generally illegal to own.
Finland: Possession of pepper spray requires a license.
Licenses are issued for defensive purposes and to individuals working jobs where such a device is needed such as the private security sector.
France: It is legal for anyone over the age of 18 to buy pepper spray in an armory or military surplus store.
It is classified as a Category D Weapon in French law and if the aerosol contains more than , it is classed as an offensive weapon; possession in a public place can be punished by confiscation and a fine.
However, if it contains less than , while still a Category 6 Weapon, it is not classed as a punishable offense for the purposes of the weapons law. During a police check, it may be confiscated and a verbal warning might be issued.
Germany: Pepper sprays labeled for the purpose of defense against animals may be owned and carried by all citizens regardless of age. Such sprays are not legally considered weapons (§1). Carrying one at (or on the way to and from) demonstrations may still be punished.
Sprays that are not labelled "animal-defence spray" or do not bear the test mark of the MPA (material testing institute) are classified as prohibited weapons.
Justified use against humans as self-defense is allowed.
CS sprays bearing a test mark of the MPA may be owned and carried by anyone over the age of 14.
Greece: Such items are illegal. They will be confiscated and possession may result in detention and arrest.
Hungary: Such items are reserved for law enforcement (including civilian members of the auxiliary police).
Civilians may carry canisters filled with maximum of any other lachrymatory agent.
However, there is no restriction for pepper gas pistol cartridges.
Iceland: Possession of pepper spray is illegal for private citizens.
Police officers and customs officers carry it. Coast guardsmen as well as prison officers have access to it.
Members of the riot police use larger pepper-spray canisters than what is used by a normal police officer.
Ireland: Possession of this spray by persons other than the Garda Síochána (national police) is an offence under the Firearms and Offensive Weapons Act.
Italy: Any citizen over 16 years of age without a criminal record may possess, carry and purchase OC-based compounds and personal defence devices that meet the following criteria:
Containing a payload not exceeding , with a percentage of Oleoresin Capsicum not exceeding 10% and a maximum concentration of capsaicin and capsaicinoid substances not exceeding 2.5%;
Containing no flammable, corrosive, toxic or carcinogenic substances, and no other aggressive chemical compound than OC itself;
Being sealed when sold and featuring a safety device against accidental discharge;
Featuring a range not exceeding .
Latvia: Pepper spray is classified as a self-defense device.
It can be bought and carried by anyone over 16 years of age.
Pepper spray handguns can be bought and carried without any license by anyone over 18.
Lithuania: Classified as a category D weapon, but can be bought and carried by anyone over 18 years of age (without registration or permission).
Issued as auxiliary service device to police.
Police also encourage vulnerable groups like pensioners or women to carry one.
Montenegro: It is legal for civilians over the age of 16 to buy, own and carry pepper spray, but it is illegal to carry it in a way that is visible to other people in public spaces or to disturb people with it in any way. It may be used as a self-defense tool if needed.
Netherlands: It is illegal for civilians to own and carry pepper spray.
Only police officers trained in the specific use of pepper spray are allowed to carry and use it against civilians and animals.
Norway: It is illegal for civilians.
Police officers are allowed to carry pepper spray as part of their standard equipment.
Poland: Referred to in the Polish Penal Code as "a hand-held disabling gas thrower", sprays are not considered a weapon.
They can be carried by anyone without further registration or permission.
Portugal: Civilians who do not have criminal records are allowed to get police permits to purchase from gun shops, carry, and use OC sprays with a maximum concentration of 5%.
Romania: Pepper spray is banned at sporting and cultural events, public transportation and entertainment locations (according to Penal Code 2012, art 372, (1), c).
Russia: It is classified as a self-defense weapon and can be carried by anyone over 18.
Use against humans is legal.
OC is not the only legal agent used. CS, CR, PAM (МПК), and (rarely) CN are also legal and highly popular.
Serbia: Pepper spray is legal under the new law as of 2016 and can be carried by anyone over the age of 16. Use against humans in self-defence is legal.
Slovakia: It is classified as a self-defense weapon.
It is available to anyone over 18.
The police recommend its use.
Spain: Approved pepper spray made with 5% CS is available to anyone older than 18 years.
OC pepper spray has recently been adopted for some civilian use (e.g., one of , with registration number DGSP-07-22-SDP, is approved by the Ministry of Health and Consumption).
Sweden: Requires weapons licence, essentially always illegal to carry in public or private. Issued as supplementary service weapon to police.
Switzerland: Pepper spray in Switzerland is subject to the Chemicals Legislation. It may only be distributed to buyers over 18 years of age and against ID evidence. Self-service is not permitted, and the customer must be made aware of safe storage, use and disposal. The vendor needs to possess the "Know-how for the distribution of particularly hazardous chemicals". Mail shipments have to be sent by registered courier with the remark "to addressee only". The products must be classified and labeled at least as an irritant (Xi; R36/37). Regulations for aerosol packages need to be observed. Sprays with greenhouse-relevant propellants such as R134a (1,1,1,2-tetrafluoroethane) are banned. Spray products for self-defense with irritants such as CA, CS, CN, or CR are considered weapons in terms of the gun control law; both a weapon purchase permit and a weapon carrying permit are required for the purchase of such weapons. In 2009, the Swiss Army introduced the irritant atomizer RSG-2000 for military personnel; it is issued during guard duties. The military carrying permit is granted after passing a half-day training.
Ukraine: Called legally "Tearing and irritating aerosols (gas canisters)", sprays are not considered a weapon and can be carried by anyone over 18 without further registration or permission. It is classified as a self-defense device.
United Kingdom: Pepper spray is illegal under Section 5(1)(b) of the Firearms Act 1968: "A person commits an offence if [...] he has in his possession [...] any weapon of whatever description designed or adapted for the discharge of any noxious liquid, gas or other thing."
Police officers are exempt from this law and permitted to carry pepper spray as part of their standard equipment.
North America
Canada
Pepper spray designed to be used against people is considered a prohibited weapon in Canada. The definition under regulation states "any device designed to be used for the purpose of injuring, immobilizing or otherwise incapacitating any person by the discharge therefrom of (a) tear gas, Mace or other gas, or (b) any liquid, spray, powder or other substance that is capable of injuring, immobilizing or otherwise incapacitating any person" is a prohibited weapon.
Only law enforcement officers may legally carry or possess pepper spray labeled for use on persons. Any similar canister with a label reading "dog spray" or "bear spray" is regulated under the Pest Control Products Act; while legal to be carried by anyone, it is against the law if its use causes "a risk of imminent death or serious bodily harm to another person" or harms the environment, and carries a penalty of up to a $500,000 fine and a maximum of three years in jail. Carrying bear spray in public, without justification, may also lead to charges under the Criminal Code.
United States
It is a federal offense to carry/ship pepper spray on a commercial airliner or possess it in the secure area of an airport. State law and local ordinances regarding possession and use vary across the country. Pepper spray up to 4 oz. is permitted in checked baggage.
When pepper spray is used in the workplace, OSHA requires a pepper spray Safety Data Sheet (SDS) be available to all employees.
Pepper spray can be legally purchased and carried in all 50 states and the District of Columbia. Some states regulate the maximum allowed strength of the pepper spray, age restriction, content and use.
California: As of January 1, 1996, and as a result of Assembly Bill 830 (Speier), the pepper spray and Mace programs are deregulated. Consumers are no longer required to have training, and a certificate is not required to purchase or possess these items. Pepper spray and Mace are available through gun shops, sporting goods stores, and other business outlets. California Penal Code Sections 12400–12460 govern pepper spray use in California. The container holding the defense spray must contain no more than net weight of aerosol spray.
Certain individuals are still prohibited from possessing pepper spray, including minors under the age of 16, convicted felons, individuals convicted of certain drug offenses, individuals convicted of assault, and individuals convicted of misusing pepper spray.
Massachusetts: Before July 1, 2014, residents could purchase defense sprays only from licensed firearms dealers in that state, and had to hold a valid Firearms Identification Card (FID) or License to Carry Firearms (LTC) to purchase them or to possess them outside of their own private property. New legislation allows residents to purchase pepper spray without a Firearms Identification Card starting July 1.
Florida: Any pepper spray containing no more than of chemical can be carried in public openly or concealed without a permit. Furthermore, any such pepper spray is classified as "self-defense chemical spray" and therefore not considered a weapon under Florida law.
Michigan: Allows "reasonable use" of spray containing not more than 18% oleoresin capsicum to protect "a person or property under circumstances that would justify the person's use of physical force". It is illegal to distribute a "self-defense spray" to a person under 18 years of age.
New Jersey: Non-felons over the age of 18 can possess a small amount of pepper spray, with no more than three-quarters of an ounce of chemical substance.
New York: Can be legally possessed by any person age 18 or over. Restricted to no more than 0.67% capsaicin content.
It must be purchased in person (i.e., cannot be purchased by mail-order or internet sale) either at a pharmacy or from a licensed firearm retailer (NY Penal Law 265.20 14) and the seller must keep a record of purchases.
The use of pepper spray to prevent a public official from performing his/her official duties is a class-E felony.
Texas law makes it legal for an individual to possess a small, commercially sold container of pepper spray for personal self-defense. However, Texas law otherwise makes it illegal to carry a "Chemical dispensing device".
Virginia: Code of Virginia § 18.2-312. Illegal use of tear gas, phosgene, and other gases. "If any person maliciously releases or cause or procure to be released in any private home, place of business or place of public gathering any tear gas, mustard gas, phosgene gas or other noxious or nauseating gases or mixtures of chemicals designed to, and capable of, producing vile or injurious or nauseating odors or gases, and bodily injury results to any person from such gas or odor, the offending person shall be guilty of a Class 3 felony. If such act be done unlawfully, but not maliciously, the offending person shall be guilty of a Class 6 felony. Nothing herein contained shall prevent the use of tear gas or other gases by police officers or other peace officers in the proper performance of their duties, or by any person or persons in the protection of the person, life or property."
Washington: Persons over 18 may carry personal-protection spray devices.
Persons over age 14 may carry personal-protection spray devices with their legal guardian's consent.
Wisconsin: Tear gas is not permissible.
By regulation, OC products with a maximum OC concentration of 10% and weight range of oleoresin of capsicum and inert ingredients of are authorized. Further, the product cannot be camouflaged and must have a safety feature designed to prevent accidental discharge. The units may not have an effective range of over and must have an effective range of .
In addition, there are certain labeling and packaging requirements: it must not be sold to anyone under 18, and the phone number of the manufacturer has to be on the label. The units must also be sold in sealed tamper-proof packages.
South America
Brazil: Classified as a weapon by Federal Act n° 3665/2000 (Regulation for Fiscalization of Controlled Products). Only law enforcement officers and private security agents with a recognized Less Lethal Weapons training certificate can carry it.
Colombia: Can be sold without any kind of restriction to anyone older than 14 years.
It has not been introduced into the law enforcement arsenal.
Australia
Australian Capital Territory: Pepper spray is a "prohibited weapon", making it an offence to possess or use it.
New South Wales: Possession of pepper spray by unauthorized persons is illegal, under schedule 1 of the Weapons Prohibition Act 1998, being classified as a "prohibited weapon".
Northern Territory: Prescribed by regulation to be a prohibited weapon under the Weapons Control Act.
This legislation makes it an offense for someone without a permit, normally anyone who is not an officer of Police/Correctional Services/Customs/Defence, to carry a prohibited weapon.
Tasmania: Possession of pepper spray by unauthorized persons is illegal under an amendment to the Police Offences Act 1935, which classifies it as an "offensive weapon". Likewise, knives, batons, and any other instruments may be considered "offensive weapons" if possessed by an individual in a public place "without lawful excuse", leading to confusion within the police force over what constitutes "lawful excuse". Whether self-defense is accepted as a lawful excuse to carry such items varies from one officer to the next.
Pepper spray is commercially available without a license. Authority to possess and use oleoresin capsicum devices remains with Tasmania Police officers (as part of general-issue operational equipment) and Tasmanian Justice Department (H.M. Prisons) officers.
South Australia: Possession of pepper spray without lawful excuse is illegal.
Western Australia: The possession of pepper spray by individuals for self-defense subject to a "reasonable excuse" test has been legal in Western Australia following the landmark Supreme Court decision in Hall v Collins [2003] WASCA 74 (4 April 2003).
Victoria: Schedule 3 of the Control of Weapons Regulations 2011 designates "an article designed or adapted to discharge oleoresin capsicum spray" as a prohibited weapon.
Queensland: Pepper spray is considered an offensive weapon and cannot be used for self-defence.
New Zealand
Classed as a restricted weapon.
A permit is required to obtain or carry pepper spray.
Front-line police officers have routinely carried pepper spray since 1997. New Zealand Prison Service made OC spray available for use in approved situations in 2013.
New Zealand Defence Force Military Police are permitted to carry OC spray under a special agreement due to the nature of their duties.
The Scoville ratings of these sprays are 500,000 (Sabre MK9 HVS unit) and 2,000,000 (Sabre Cell Buster fog delivery). This was a result of excessive staff assaults and followed a two-year trial in ten prisons throughout the country.
Civilian use advocates
In June 2002, West Australian resident Rob Hall was convicted for using a canister of pepper spray to break up an altercation between two guests at his home in Midland. He was sentenced to a good behavior bond and granted a spent conviction order, which he appealed to the Supreme Court. Justice Christine Wheeler ruled in his favor, thereby legalizing pepper spray in the state on a case-by-case basis for those who are able to show a reasonable excuse.
On 14 March 2012, a person dressed entirely in black entered the public gallery of the New South Wales Legislative Council and launched a paper plane into the air in the form of a petition to Police Minister Mike Gallacher calling on the government to allow civilians to carry capsicum spray.
| Technology | Less-lethal weapons | null |
157606 | https://en.wikipedia.org/wiki/Glass%20fiber | Glass fiber | Glass fiber (or glass fibre) is a material consisting of numerous extremely fine fibers of glass.
Glassmakers throughout history have experimented with glass fibers, but mass manufacture of glass fiber was only made possible with the invention of finer machine tooling. In 1893, Edward Drummond Libbey exhibited a dress at the World's Columbian Exposition incorporating glass fibers with the diameter and texture of silk fibers. Glass fibers can also occur naturally, as Pele's hair.
Glass wool, which is one product called "fiberglass" today, was invented some time between 1932 and 1933 by Games Slayter of Owens-Illinois, as a material to be used as thermal building insulation. It is marketed under the trade name Fiberglas, which has become a genericized trademark. Glass fiber, when used as a thermal insulating material, is specially manufactured with a bonding agent to trap many small air cells, resulting in the characteristically air-filled low-density "glass wool" family of products.
Glass fiber has roughly comparable mechanical properties to other fibers such as polymers and carbon fiber. Although not as rigid as carbon fiber, it is much cheaper and significantly less brittle when used in composites. Glass fiber reinforced composites are used in marine industry and piping industries because of good environmental resistance, better damage tolerance for impact loading, high specific strength and stiffness.
Fiber formation
Glass fiber is formed when thin strands of silica-based or other formulation glass are extruded into many fibers with small diameters suitable for textile processing. The technique of heating and drawing glass into fine fibers has been known for millennia, and was practiced in Egypt and Venice. Before the recent use of these fibers for textile applications, all glass fiber had been manufactured as staple (that is, clusters of short lengths of fiber).
The modern method for producing glass wool is the invention of Games Slayter working at the Owens-Illinois Glass Company (Toledo, Ohio). He first applied for a patent for a new process to make glass wool in 1933. The first commercial production of glass fiber was in 1936. In 1938 Owens-Illinois Glass Company and Corning Glass Works joined to form the Owens-Corning Fiberglas Corporation. When the two companies joined to produce and promote glass fiber, they introduced continuous filament glass fibers. Owens-Corning is still the major glass-fiber producer in the market today.
The most common type of glass fiber used in fiberglass is E-glass, which is alumino-borosilicate glass with less than 1% w/w alkali oxides, mainly used for glass-reinforced plastics. Other types of glass used are A-glass (Alkali-lime glass with little or no boron oxide), E-CR-glass (Electrical/Chemical Resistance; alumino-lime silicate with less than 1% w/w alkali oxides, with high acid resistance), C-glass (alkali-lime glass with high boron oxide content, used for glass staple fibers and insulation), D-glass (borosilicate glass, named for its low dielectric constant), R-glass (alumino silicate glass without MgO and CaO with high mechanical requirements as reinforcement), and S-glass (alumino silicate glass without CaO but with high MgO content with high tensile strength).
Pure silica (silicon dioxide), when cooled as fused quartz into a glass with no true melting point, can be used as a glass fiber for fiberglass, but has the drawback that it must be worked at very high temperatures. In order to lower the necessary work temperature, other materials are introduced as "fluxing agents" (i.e., components to lower the melting point). Ordinary A-glass ("A" for "alkali-lime") or soda lime glass, crushed and ready to be remelted, as so-called cullet glass, was the first type of glass used for fiberglass. E-glass ("E" because of initial electrical application), is alkali free, and was the first glass formulation used for continuous filament formation. It now makes up most of the fiberglass production in the world, and also is the single largest consumer of boron minerals globally. It is susceptible to chloride ion attack and is a poor choice for marine applications. S-glass ("S" for "Strength") is used when high tensile strength (modulus) is important, and is thus important in composites for building and aircraft construction. The same substance is known as R-glass ("R" for "reinforcement") in Europe. C-glass ("C" for "chemical resistance") and T-glass ("T" is for "thermal insulator" – a North American variant of C-glass) are resistant to chemical attack; both are often found in insulation-grades of blown fiberglass.
Chemistry
The basis of textile-grade glass fibers is silica, SiO2. In its pure form it exists as a polymer, (SiO2)n. It has no true melting point but softens up to 1200 °C, where it starts to degrade. At 1713 °C, most of the molecules can move about freely. If the glass is extruded and cooled quickly at this temperature, it will be unable to form an ordered structure. In the polymer it forms SiO4 groups which are configured as a tetrahedron with the silicon atom at the center, and four oxygen atoms at the corners. These atoms then form a network bonded at the corners by sharing the oxygen atoms.
The vitreous and crystalline states of silica (glass and quartz) have similar energy levels on a molecular basis, also implying that the glassy form is extremely stable. In order to induce crystallization, it must be heated to temperatures above 1200 °C for long periods of time.
Although pure silica is a perfectly viable glass and glass fiber, it must be worked with at very high temperatures, which is a drawback unless its specific chemical properties are needed. It is usual to introduce impurities into the glass in the form of other materials to lower its working temperature. These materials also impart various other properties to the glass that may be beneficial in different applications. The first type of glass used for fiber was soda lime glass or A-glass ("A" for the alkali it contains). It is not very resistant to alkali. A newer, alkali-free (<2%) type, E-glass, is an alumino-borosilicate glass. C-glass was developed to resist attack from chemicals, mostly acids that destroy E-glass. T-glass is a North American variant of C-glass. AR-glass is alkali-resistant glass. Most glass fibers have limited solubility in water but are very dependent on pH. Chloride ions will also attack and dissolve E-glass surfaces.
E-glass does not actually melt, but softens instead, the softening point being "the temperature at which a 0.55–0.77 mm diameter fiber 235 mm long, elongates under its own weight at 1 mm/min when suspended vertically and heated at the rate of 5 °C per minute". The strain point is reached when the glass has a viscosity of 10^14.5 poise. The annealing point, which is the temperature where the internal stresses are reduced to an acceptable commercial limit in 15 minutes, is marked by a viscosity of 10^13 poise.
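As an illustration of how such viscosity reference points can be located in practice, the following Python sketch interpolates log-viscosity against temperature; the temperature–viscosity pairs are invented placeholders rather than measured values for E-glass.

```python
# Minimal sketch: locating the annealing point (viscosity = 10^13 poise) and
# strain point (10^14.5 poise) by linear interpolation of log10(viscosity)
# against temperature. The data points below are invented placeholders, not
# measured values for any real glass.
data = [  # (temperature in °C, log10 viscosity in poise), hypothetical
    (550, 15.5),
    (600, 14.0),
    (650, 12.5),
    (700, 11.0),
]

def temperature_at_log_viscosity(target_log_eta):
    """Interpolate the temperature at which log10(viscosity) equals the target."""
    pts = sorted(data, key=lambda p: p[1])
    for (t0, v0), (t1, v1) in zip(pts, pts[1:]):
        if v0 <= target_log_eta <= v1:
            frac = (target_log_eta - v0) / (v1 - v0)
            return t0 + frac * (t1 - t0)
    raise ValueError("target outside data range")

print(temperature_at_log_viscosity(13.0))   # annealing point estimate (°C)
print(temperature_at_log_viscosity(14.5))   # strain point estimate (°C)
```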
Properties
Thermal
Fabrics of woven glass fibers are useful thermal insulators because of their high ratio of surface area to weight. However, the increased surface area makes them much more susceptible to chemical attack. By trapping air within them, blocks of glass fiber make good thermal insulation, with a thermal conductivity of the order of 0.05 W/(m·K).
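A minimal worked example of what this conductivity implies, using Fourier's law of conduction: the only figure taken from the text is the order-of-magnitude 0.05 W/(m·K) conductivity, while the area, thickness, and temperature difference are arbitrary example values.

```python
# Minimal sketch of steady-state conduction through a glass-fiber batt using
# Fourier's law, Q = k * A * dT / d. The 0.05 W/(m·K) conductivity is the
# order-of-magnitude figure from the text; the geometry and temperatures
# below are arbitrary example values.

def heat_flow(k_w_per_m_k, area_m2, delta_t_k, thickness_m):
    """Heat flow in watts through a slab of insulation."""
    return k_w_per_m_k * area_m2 * delta_t_k / thickness_m

# 10 m² of 100 mm-thick glass wool with a 20 K temperature difference:
print(heat_flow(0.05, 10.0, 20.0, 0.10))  # 100.0 W
```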
Selected properties
Mechanical properties
The strength of glass is usually tested and reported for "virgin" or pristine fibers—those that have just been manufactured. The freshest, thinnest fibers are the strongest because the thinner fibers are more ductile. The more the surface is scratched, the less the resulting tenacity. Because glass has an amorphous structure, its properties are the same along the fiber and across the fiber. Humidity is an important factor in the tensile strength. Moisture is easily adsorbed and can worsen microscopic cracks and surface defects, and lessen tenacity.
In contrast to carbon fiber, glass can undergo more elongation before it breaks. Thinner filaments can bend further before they break. The viscosity of the molten glass is very important for manufacturing success. During drawing, the process where the hot glass is pulled to reduce the diameter of the fiber, the viscosity must be relatively low. If it is too high, the fiber will break during drawing. However, if it is too low, the glass will form droplets instead of being drawn out into a fiber.
Manufacturing processes
Melting
There are two main types of glass fiber manufacture and two main types of glass fiber product. First, fiber is made either from a direct melt process or a marble remelt process. Both start with the raw materials in solid form. The materials are mixed together and melted in a furnace. Then, for the marble process, the molten material is sheared and rolled into marbles which are cooled and packaged. The marbles are taken to the fiber manufacturing facility where they are inserted into a can and remelted. The molten glass is extruded to the bushing to be formed into fiber. In the direct melt process, the molten glass in the furnace goes directly to the bushing for formation.
Formation
The bushing plate is the most important part of the machinery for making the fiber. This is a small metal furnace containing nozzles for the fiber to be formed through. It is almost always made of platinum alloyed with rhodium for durability. Platinum is used because the glass melt has a natural affinity for wetting it. When bushings were first used they were pure platinum, and the glass wetted the bushing so easily that it ran under the plate after exiting the nozzle and accumulated on the underside. Also, due to its cost and the tendency to wear, the platinum was alloyed with rhodium. In the direct melt process, the bushing serves as a collector for the molten glass. It is heated slightly to keep the glass at the correct temperature for fiber formation. In the marble melt process, the bushing acts more like a furnace as it melts more of the material.
Bushings are the major expense in fiber glass production. The nozzle design is also critical. The number of nozzles ranges from 200 to 4000 in multiples of 200. The important part of the nozzle in continuous filament manufacture is the thickness of its walls in the exit region. It was found that inserting a counterbore here reduced wetting. Today, the nozzles are designed to have a minimum thickness at the exit. As glass flows through the nozzle, it forms a drop which is suspended from the end. As it falls, it leaves a thread attached by the meniscus to the nozzle as long as the viscosity is in the correct range for fiber formation. The smaller the annular ring of the nozzle and the thinner the wall at exit, the faster the drop will form and fall away, and the lower its tendency to wet the vertical part of the nozzle. The surface tension of the glass is what influences the formation of the meniscus. For E-glass it should be around 400 mN/m.
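For a rough sense of the force balance involved, the drop hanging from a nozzle can be approximated by Tate's law (a simplification not used in the text), which balances the drop's weight against surface tension acting around the nozzle perimeter. The nozzle diameter below is a hypothetical example; the 0.400 N/m surface tension is the value quoted above for E-glass.

```python
import math

# Illustrative estimate (not from the text) of the largest drop a nozzle tip can
# hold before it detaches, using Tate's law: m * g = pi * d * gamma. This is a
# simplification of the real drawing process; the nozzle diameter below is a
# made-up example, while 0.400 N/m is the surface tension quoted for E-glass.

GRAVITY = 9.81            # m/s^2
SURFACE_TENSION = 0.400   # N/m, E-glass melt (from the text)

def max_drop_mass_kg(nozzle_diameter_m, gamma=SURFACE_TENSION):
    """Drop mass at detachment by Tate's law (idealized)."""
    return math.pi * nozzle_diameter_m * gamma / GRAVITY

print(max_drop_mass_kg(2e-3) * 1000)  # grams for a hypothetical 2 mm nozzle, ~0.26 g
```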
The attenuation (drawing) speed is important in the nozzle design. Although slowing this speed down can make coarser fiber, it is uneconomic to run at speeds for which the nozzles were not designed.
Continuous filament process
In the continuous filament process, after the fiber is drawn, a size is applied. This size helps protect the fiber as it is wound onto a bobbin. The particular size applied relates to end-use. While some sizes are processing aids, others make the fiber have an affinity for a certain resin, if the fiber is to be used in a composite. Size is usually added at 0.5–2.0% by weight. Winding then takes place at around 1 km/min.
Staple fiber process
For staple fiber production, there are a number of ways to manufacture the fiber. The glass can be blown or blasted with heat or steam after exiting the formation machine. Usually these fibers are made into some sort of mat. The most common process used is the rotary process. Here, the glass enters a rotating spinner, and due to centrifugal force is thrown out horizontally. The air jets push it down vertically, and binder is applied. Then the mat is vacuumed to a screen and the binder is cured in the oven.
Safety
Glass fiber has increased in popularity since the discovery that asbestos causes cancer and its subsequent removal from most products. Following this increase in popularity, the safety of glass fiber has also been called into question. Research shows that the composition of glass fiber can cause toxicity similar to that of asbestos, since both are silicate fibers.
Studies on rats conducted during the 1970s found that fibrous glass of less than 3 μm in diameter and greater than 20 μm in length is a "potent carcinogen". Likewise, the International Agency for Research on Cancer found it "may reasonably be anticipated to be a carcinogen" in 1990. The American Conference of Governmental Industrial Hygienists, on the other hand, says that there is insufficient evidence, and that glass fiber is in group A4: "Not classifiable as a human carcinogen".
The North American Insulation Manufacturers Association (NAIMA) claims that glass fiber is fundamentally different from asbestos, since it is man-made instead of naturally occurring. They claim that glass fiber "dissolves in the lungs", while asbestos remains in the body for life. Although both glass fiber and asbestos are made from silica filaments, NAIMA claims that asbestos is more dangerous because of its crystalline structure, which causes it to cleave into smaller, more dangerous pieces, citing the U.S. Department of Health and Human Services:
A 1998 study using rats found that the biopersistence of synthetic fibers after one year was 0.04–13%, but 27% for amosite asbestos. Fibers that persisted longer were found to be more carcinogenic.
Glass-reinforced plastic (fiberglass)
Glass-reinforced plastic (GRP) is a composite material or fiber-reinforced plastic made of a plastic reinforced by fine glass fibers. The glass can be in the form of a chopped strand mat (CSM) or a woven fabric.
As with many other composite materials (such as reinforced concrete), the two materials act together, each overcoming the deficits of the other. Whereas the plastic resins are strong in compressive loading and relatively weak in tensile strength, the glass fibers are very strong in tension but tend not to resist compression. By combining the two materials, GRP becomes a material that resists both compressive and tensile forces well. The two materials may be used uniformly or the glass may be specifically placed in those portions of the structure that will experience tensile loads.
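A common back-of-the-envelope way to express this complementarity is the rule of mixtures for stiffness along the fibre direction. The sketch below uses illustrative round-number moduli and fibre fraction rather than values taken from this article.

```python
# Minimal sketch of the Voigt rule of mixtures for the axial stiffness of a
# unidirectional glass/polymer composite. The rule of mixtures itself is a
# standard approximation, but the moduli and fibre fraction below are
# illustrative round numbers, not values from the article.

def rule_of_mixtures(e_fibre_gpa, e_matrix_gpa, fibre_volume_fraction):
    """Estimate the composite modulus along the fibre direction (GPa)."""
    vf = fibre_volume_fraction
    return vf * e_fibre_gpa + (1 - vf) * e_matrix_gpa

# Roughly 70 GPa for E-glass fibre and 3 GPa for a polyester resin, 50% fibre:
print(rule_of_mixtures(70.0, 3.0, 0.5))  # ~36.5 GPa along the fibres
```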
Uses
Uses for regular glass fiber include mats and fabrics for thermal insulation, electrical insulation, sound insulation, high-strength fabrics or heat- and corrosion-resistant fabrics. It is also used to reinforce various materials, such as tent poles, pole vault poles, arrows, bows and crossbows, translucent roofing panels, automobile bodies, hockey sticks, surfboards, boat hulls, and paper honeycomb. It has been used for medical purposes in casts. Glass fiber is extensively used for making FRP tanks and vessels.
Open-weave glass fiber grids are used to reinforce asphalt pavement. Non-woven glass fiber/polymer blend mats are used saturated with asphalt emulsion and overlaid with asphalt, producing a waterproof, crack-resistant membrane. Use of glass-fiber reinforced polymer rebar instead of steel rebar shows promise in areas where avoidance of steel corrosion is desired.
Potential uses
Glass fiber has recently seen use in biomedical applications to assist joint replacement, where the electric-field orientation of short phosphate glass fibers can improve osteogenic qualities through the proliferation of osteoblasts and improved surface chemistry. Another potential use is in electronic applications, where sodium-based glass fibers assist or replace lithium in lithium-ion batteries due to their improved electronic properties.
Role of recycling in glass fiber manufacturing
Manufacturers of glass-fiber insulation can use recycled glass. Recycled glass fiber contains up to 40% recycled glass.
| Technology | Materials | null |
157616 | https://en.wikipedia.org/wiki/Composite%20material | Composite material | A composite or composite material (also composition material) is a material which is produced from two or more constituent materials. These constituent materials have notably dissimilar chemical or physical properties and are merged to create a material with properties unlike the individual elements. Within the finished structure, the individual elements remain separate and distinct, distinguishing composites from mixtures and solid solutions. Composite materials with more than one distinct layer are called composite laminates.
Typical engineered composite materials are made up of a binding agent forming the matrix and a filler material (particulates or fibres) giving substance, e.g.:
Concrete, reinforced concrete and masonry with cement, lime or mortar (which is itself a composite material) as a binder
Composite wood such as glulam and plywood with wood glue as a binder
Reinforced plastics, such as fiberglass and fibre-reinforced polymer with resin or thermoplastics as a binder
Ceramic matrix composites (composite ceramic and metal matrices)
Metal matrix composites
Advanced composite materials, often first developed for spacecraft and aircraft applications.
Composite materials can be less expensive, lighter, stronger or more durable than common materials. Some are inspired by biological structures found in plants and animals.
Robotic materials are composites that include sensing, actuation, computation, and communication components.
Composite materials are used for construction and technical structures such as boat hulls, swimming pool panels, racing car bodies, shower stalls, bathtubs, storage tanks, imitation granite, and cultured marble sinks and countertops. They are also being increasingly used in general automotive applications.
History
The earliest composite materials were made from straw and mud combined to form bricks for building construction. Ancient brick-making was documented by Egyptian tomb paintings.
Wattle and daub might be the oldest composite materials, at over 6000 years old.
Woody plants, both true wood from trees and such plants as palms and bamboo, yield natural composites that were used prehistorically by humankind and are still used widely in construction and scaffolding.
Plywood, 3400 BC, by the Ancient Mesopotamians; gluing wood at different angles gives better properties than natural wood.
Cartonnage, layers of linen or papyrus soaked in plaster dates to the First Intermediate Period of Egypt c. 2181–2055 BC and was used for death masks.
Cob mud bricks, or mud walls, (using mud (clay) with straw or gravel as a binder) have been used for thousands of years.
Concrete was described by Vitruvius, writing around 25 BC in his Ten Books on Architecture, who distinguished types of aggregate appropriate for the preparation of lime mortars. For structural mortars, he recommended pozzolana, a volcanic sand from the sandlike beds of Pozzuoli, brownish-yellow-gray in colour near Naples and reddish-brown at Rome. Vitruvius specifies a ratio of 1 part lime to 3 parts pozzolana for cements used in buildings and a 1:2 ratio of lime to pulvis Puteolanus for underwater work, essentially the same ratio mixed today for concrete used at sea. Natural cement-stones, after burning, produced cements used in concretes from post-Roman times into the 20th century, with some properties superior to manufactured Portland cement.
Papier-mâché, a composite of paper and glue, has been used for hundreds of years.
The first artificial fibre-reinforced plastic was a combination of fibreglass and Bakelite, produced in 1935 by Al Simison and Arthur D. Little at the Owens Corning Company.
One of the most common and familiar composite is fibreglass, in which small glass fibre are embedded within a polymeric material (normally an epoxy or polyester). The glass fibre is relatively strong and stiff (but also brittle), whereas the polymer is ductile (but also weak and flexible). Thus the resulting fibreglass is relatively stiff, strong, flexible, and ductile.
Composite bow
Leather cannon, wooden cannon
Examples
Composite materials
Concrete is the most common artificial composite material of all; about 7.5 billion cubic metres of concrete are made each year.
Concrete typically consists of loose stones (construction aggregate) held with a matrix of cement. Concrete is an inexpensive material that resists large compressive forces but is susceptible to tensile loading. To give concrete the ability to resist being stretched, steel bars, which can resist high stretching (tensile) forces, are often added to concrete to form reinforced concrete.
Fibre-reinforced polymers include carbon-fiber-reinforced polymers and glass-reinforced plastic. If classified by matrix then there are thermoplastic composites, short fibre thermoplastics, long fibre thermoplastics or long-fiber-reinforced thermoplastics. There are numerous thermoset composites, including paper composite panels. Many advanced thermoset polymer matrix systems usually incorporate aramid fibre and carbon fibre in an epoxy resin matrix.
Shape-memory polymer composites are high-performance composites, formulated using fibre or fabric reinforcements and shape-memory polymer resin as the matrix. Since a shape-memory polymer resin is used as the matrix, these composites have the ability to be easily manipulated into various configurations when they are heated above their activation temperatures and will exhibit high strength and stiffness at lower temperatures. They can also be reheated and reshaped repeatedly without losing their material properties. These composites are ideal for applications such as lightweight, rigid, deployable structures; rapid manufacturing; and dynamic reinforcement.
High strain composites are another type of high-performance composites that are designed to perform in a high deformation setting and are often used in deployable systems where structural flexing is advantageous. Although high strain composites exhibit many similarities to shape-memory polymers, their performance is generally dependent on the fibre layout as opposed to the resin content of the matrix.
Composites can also use metal fibres reinforcing other metals, as in metal matrix composites (MMC) or ceramic matrix composites (CMC), which include bone (hydroxyapatite reinforced with collagen fibres), cermet (ceramic and metal), and concrete. Ceramic matrix composites are built primarily for fracture toughness, not for strength. Another class of composite materials involves woven fabric composites consisting of longitudinal and transverse laced yarns. Woven fabric composites are flexible, as they are in the form of fabric.
Organic matrix/ceramic aggregate composites include asphalt concrete, polymer concrete, mastic asphalt, mastic roller hybrid, dental composite, syntactic foam, and mother of pearl. Chobham armour is a special type of composite armour used in military applications.
Additionally, thermoplastic composite materials can be formulated with specific metal powders resulting in materials with a density range from 2 g/cm3 to 11 g/cm3 (same density as lead). The most common name for this type of material is "high gravity compound" (HGC), although "lead replacement" is also used. These materials can be used in place of traditional materials such as aluminium, stainless steel, brass, bronze, copper, lead, and even tungsten in weighting, balancing (for example, modifying the centre of gravity of a tennis racquet), vibration damping, and radiation shielding applications. High density composites are an economically viable option when certain materials are deemed hazardous and are banned (such as lead) or when secondary operations costs (such as machining, finishing, or coating) are a factor.
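A simple way to see how such target densities are reached is an inverse rule of mixtures on density. The sketch below uses typical handbook-style densities chosen for illustration, not figures from this article.

```python
# Sketch of the inverse rule of mixtures used to hit a target density when
# loading a thermoplastic with metal powder (the "high gravity compound" idea
# described above). The polymer and filler densities are typical round
# numbers chosen for illustration, not figures from the article.

def filler_volume_fraction(target_density, polymer_density, filler_density):
    """Volume fraction of filler needed so the blend reaches the target density."""
    return (target_density - polymer_density) / (filler_density - polymer_density)

# e.g. nylon (~1.1 g/cm^3) loaded with tungsten powder (~19.3 g/cm^3)
# to match the density of lead (11.3 g/cm^3):
vf = filler_volume_fraction(11.3, 1.1, 19.3)
print(round(vf, 3))  # ~0.56, i.e. roughly 56% tungsten by volume
```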
There have been several studies indicating that interleaving stiff and brittle epoxy-based carbon-fiber-reinforced polymer laminates with flexible thermoplastic laminates can help to make highly toughened composites that show improved impact resistance. Another interesting aspect of such interleaved composites is that they are able to have shape memory behaviour without needing any shape-memory polymers or shape-memory alloys e.g. balsa plies interleaved with hot glue, aluminium plies interleaved with acrylic polymers or PVC and carbon-fiber-reinforced polymer laminates interleaved with polystyrene.
A sandwich-structured composite is a special class of composite material that is fabricated by attaching two thin but stiff skins to a lightweight but thick core. The core material is normally low strength material, but its higher thickness provides the sandwich composite with high bending stiffness with overall low density.
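A minimal numeric sketch can show where that bending stiffness comes from. It uses the standard thin-skin approximation for the flexural rigidity per unit width of a sandwich panel, D ≈ E_skin·t_skin·d²/2 (d being the distance between the skin mid-planes), and compares it with the two skins alone acting as a solid plate; the skin modulus, skin thickness, and core thickness are assumed, illustrative values.

```python
# Why a sandwich panel is stiff in bending: thin, stiff skins separated by a
# thick, light core give a flexural rigidity per unit width dominated by
# E_skin * t_skin * d**2 / 2.  All numbers below are illustrative assumptions.

def solid_rigidity(E, t):
    """Flexural rigidity per unit width of a solid plate: E*t^3/12."""
    return E * t**3 / 12.0

def sandwich_rigidity(E_skin, t_skin, t_core):
    """Thin-skin approximation: the two skins act about the core's mid-plane."""
    d = t_core + t_skin          # distance between skin centroids
    return E_skin * t_skin * d**2 / 2.0

E_skin = 70e9      # Pa, e.g. a glass-fibre laminate skin (assumed)
t_skin = 1e-3      # m, each skin 1 mm thick (assumed)
t_core = 20e-3     # m, 20 mm foam core (assumed)

skins_only = solid_rigidity(E_skin, 2 * t_skin)   # the two skins as one solid 2 mm plate
print(f"solid 2 mm plate : {skins_only:.1f} N*m")
print(f"sandwich panel   : {sandwich_rigidity(E_skin, t_skin, t_core):.1f} N*m")
```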
Wood is a naturally occurring composite comprising cellulose fibres in a lignin and hemicellulose matrix. Engineered wood includes a wide variety of different products such as wood fibre board, plywood, oriented strand board, wood plastic composite (recycled wood fibre in polyethylene matrix), Pykrete (sawdust in ice matrix), plastic-impregnated or laminated paper or textiles, Arborite, Formica (plastic), and Micarta. Other engineered laminate composites, such as Mallite, use a central core of end grain balsa wood, bonded to surface skins of light alloy or GRP. These generate low-weight, high rigidity materials.
Particulate composites have particles as the filler material dispersed in a matrix, which may be a nonmetal such as glass or epoxy. The automobile tire is an example of a particulate composite.
Advanced diamond-like carbon (DLC) coated polymer composites have been reported where the coating increases the surface hydrophobicity, hardness and wear resistance.
Ferromagnetic composites include those with a polymer matrix filled, for example, with nanocrystalline Fe-based powders. Amorphous and nanocrystalline powders obtained, for example, from metallic glasses can be used. Their use makes it possible to obtain ferromagnetic nanocomposites with controlled magnetic properties.
Products
Fibre-reinforced composite materials have gained popularity (despite their generally high cost) in high-performance products that need to be lightweight, yet strong enough to take harsh loading conditions such as aerospace components (tails, wings, fuselages, propellers), boat and scull hulls, bicycle frames, and racing car bodies. Other uses include fishing rods, storage tanks, swimming pool panels, and baseball bats. The Boeing 787 and Airbus A350 structures including the wings and fuselage are composed largely of composites. Composite materials are also becoming more common in the realm of orthopedic surgery, and it is the most common hockey stick material.
Carbon composite is a key material in today's launch vehicles and heat shields for the re-entry phase of spacecraft. It is widely used in solar panel substrates, antenna reflectors and yokes of spacecraft. It is also used in payload adapters, inter-stage structures and heat shields of launch vehicles. Furthermore, disk brake systems of airplanes and racing cars use carbon/carbon material, and composites with carbon fibres and a silicon carbide matrix have been introduced in luxury vehicles and sports cars.
In 2006, a fibre-reinforced composite pool panel was introduced for in-ground swimming pools, residential as well as commercial, as a non-corrosive alternative to galvanized steel.
In 2007, an all-composite military Humvee was introduced by TPI Composites Inc and Armor Holdings Inc, the first all-composite military vehicle. By using composites the vehicle is lighter, allowing higher payloads. In 2008, carbon fibre and DuPont Kevlar (five times stronger than steel) were combined with enhanced thermoset resins to make military transit cases by ECS Composites creating 30-percent lighter cases with high strength.
Pipes and fittings for various purposes, such as the transportation of potable water, fire-fighting, irrigation, seawater, desalinated water, chemical and industrial waste, and sewage, are now manufactured in glass-reinforced plastics.
Composite materials used in tensile structures for facade applications provide the advantage of being translucent. The woven base cloth combined with the appropriate coating allows better light transmission. This provides a very comfortable level of illumination compared to the full brightness outside.
Wind turbine blades, in growing sizes on the order of 50 m in length, have been fabricated from composites for several years.
Athletes with two lower-leg amputations run on carbon-composite, spring-like artificial feet as quickly as non-amputee athletes.
High-pressure gas cylinders for firefighters, typically of about 7–9 litres volume at 300 bar pressure, are nowadays constructed from carbon composite. Type-4 cylinders include metal only as the boss that carries the thread for screwing in the valve.
On 5 September 2019, HMD Global unveiled the Nokia 6.2 and Nokia 7.2 which are claimed to be using polymer composite for the frames.
Overview
Composite materials are created from individual materials, known as constituent materials, of which there are two main categories: the matrix (binder) and the reinforcement. At least a portion of each kind is needed. The matrix surrounds the reinforcement and maintains its relative position, supporting it, while the reinforcement imparts its exceptional physical and mechanical properties to improve those of the matrix. Through this synergism, mechanical properties are obtained that are unavailable from the individual constituent materials, while the wide variety of matrix and strengthening materials allows the designer of the product or structure to choose an optimum combination.
To shape an engineered composite, it must be formed. The reinforcement is placed onto the mould surface or into the mould cavity. Before or after this, the matrix is introduced to the reinforcement. The matrix then undergoes a melding event which sets the part shape. Depending on the nature of the matrix, this melding event can happen in several ways, such as solidification from the melted state for a thermoplastic polymer matrix composite or chemical polymerization for a thermoset polymer matrix.
According to the requirements of the end-item design, various methods of moulding can be used. The natures of the chosen matrix and reinforcement are the key factors influencing the methodology. The gross quantity of material to be made is another main factor: vast quantities can justify high capital investments in rapid and automated manufacturing technology, whereas small production quantities are accommodated with cheaper capital investments but higher labour and tooling expenses at a correspondingly slower rate.
Many commercially produced composites use a polymer matrix material often called a resin solution. There are many different polymers available depending upon the starting raw ingredients. There are several broad categories, each with numerous variations. The most common are known as polyester, vinyl ester, epoxy, phenolic, polyimide, polyamide, polypropylene, PEEK, and others. The reinforcement materials are often fibres but also commonly ground minerals. The various methods described below have been developed to reduce the resin content (or, equivalently, increase the fibre content) of the final product. As a rule of thumb, hand lay-up results in a product containing 60% resin and 40% fibre, whereas vacuum infusion gives a final product with 40% resin and 60% fibre content. The strength of the product is greatly dependent on this ratio, as illustrated by the estimate below.
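As a rough illustration of why this ratio matters, the longitudinal stiffness of an aligned-fibre laminate can be estimated with the rule of mixtures derived later in the Physical properties section. The fibre and resin moduli below are assumed values for E-glass and a cured epoxy, not measured data.

```python
# Rough illustration of the fibre/resin ratio effect using the rule of
# mixtures E_c = Vf*Ef + Vm*Em for aligned continuous fibres.
# Moduli are assumed values for E-glass fibre and epoxy resin.

E_fibre = 72e9     # Pa (assumed E-glass)
E_resin = 3.5e9    # Pa (assumed epoxy)

for v_fibre, process in ((0.40, "hand lay-up (~40% fibre)"),
                         (0.60, "vacuum infusion (~60% fibre)")):
    E_c = v_fibre * E_fibre + (1.0 - v_fibre) * E_resin
    print(f"{process}: E_c ~ {E_c / 1e9:.1f} GPa")
```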
Martin Hubbe and Lucian A Lucia consider wood to be a natural composite of cellulose fibres in a matrix of lignin.
Cores in composites
Several layup designs of composite also involve a co-curing or post-curing of the prepreg with many other media, such as foam or honeycomb. Generally, this is known as a sandwich structure. This is a more general layup for the production of cowlings, doors, radomes or non-structural parts.
Open- and closed-cell-structured foams like polyvinyl chloride, polyurethane, polyethylene, or polystyrene foams, balsa wood, syntactic foams, and honeycombs are commonly used core materials. Open- and closed-cell metal foam can also be used as core materials. Recently, 3D graphene structures (also called graphene foam) have also been employed as core structures. A recent review by Khurram and Xu et al. provides a summary of state-of-the-art techniques for fabricating 3D graphene structures, and examples of the use of these foam-like structures as cores for their respective polymer composites.
Semi-crystalline polymers
Although the two phases are chemically equivalent, semi-crystalline polymers can be described both quantitatively and qualitatively as composite materials. The crystalline portion has a higher elastic modulus and provides reinforcement for the less stiff, amorphous phase. Polymeric materials can range from 0% to 100% crystallinity (i.e., volume fraction of the crystalline phase) depending on molecular structure and thermal history. Different processing techniques can be employed to vary the percent crystallinity in these materials, and thus their mechanical properties, as described in the physical properties section. This effect is seen in a variety of places, from industrial plastics like polyethylene shopping bags to spiders, which can produce silks with different mechanical properties. In many cases these materials act like particle composites with randomly dispersed crystals known as spherulites. However, they can also be engineered to be anisotropic and act more like fiber-reinforced composites. In the case of spider silk, the properties of the material can even be dependent on the size of the crystals, independent of the volume fraction. Somewhat counterintuitively, single-component polymeric materials are some of the most easily tunable composite materials known.
Methods of fabrication
Normally, the fabrication of a composite includes wetting, mixing or saturating the reinforcement with the matrix. The matrix is then induced to bind together (with heat or a chemical reaction) into a rigid structure. Usually, the operation is done in an open or closed forming mould. However, the order and ways of introducing the constituents vary considerably. Composites fabrication is achieved by a wide variety of methods, including advanced fibre placement (automated fibre placement), fibreglass spray lay-up process, filament winding, lanxide process, tailored fibre placement, tufting, and z-pinning.
Overview of mould
The reinforcing and matrix materials are merged, compacted, and cured (processed) within a mould to undergo a melding event. The part shape is fundamentally set after the melding event, although under particular process conditions it can deform. The melding event for a thermoset polymer matrix material is a curing reaction initiated by additional heat or chemical reactivity, such as an organic peroxide. The melding event for a thermoplastic polymeric matrix material is solidification from the melted state. The melding event for a metal matrix material such as titanium foil is fusing at high pressure and a temperature near the melting point.
For many moulding methods, it is convenient to refer to one mould piece as the "lower" mould and another mould piece as the "upper" mould. Lower and upper do not refer to the mould's configuration in space, but to the different faces of the moulded panel. In this convention there is always a lower mould, and sometimes an upper mould. Part construction commences by applying materials to the lower mould. Lower mould and upper mould are more generalized descriptors than more common and specific terms such as male side, female side, a-side, b-side, tool side, bowl, hat, mandrel, etc. Continuous manufacturing uses a different nomenclature.
Usually, the moulded product is referred to as a panel. It can be referred to as casting for certain geometries and material combinations. It can be referred to as a profile for certain continuous processes. Some of the processes are autoclave moulding, vacuum bag moulding, pressure bag moulding, resin transfer moulding, and light resin transfer moulding.
Other fabrication methods
Other types of fabrication include casting, centrifugal casting, braiding (onto a former), continuous casting, filament winding, press moulding, transfer moulding, pultrusion moulding, and slip forming. There are also forming capabilities including CNC filament winding, vacuum infusion, wet lay-up, compression moulding, and thermoplastic moulding, to name a few. Curing ovens and paint booths are also required for some projects.
Finishing methods
The finishing of composite parts is also crucial to the final design. Many of these finishes involve rain-erosion coatings or polyurethane coatings.
Tooling
The mould and mould inserts are referred to as "tooling". The mould/tooling can be built from different materials. Tooling materials include aluminium, carbon fibre, invar, nickel, reinforced silicone rubber and steel. The tooling material selection is normally based on, but not limited to, the coefficient of thermal expansion, expected number of cycles, end item tolerance, desired or expected surface condition, cure method, glass transition temperature of the material being moulded, moulding method, matrix, cost, and other various considerations.
Physical properties
Usually, a composite's physical properties are not isotropic (independent of the direction of the applied force), but rather anisotropic (dependent on the direction of the applied force or load). For instance, the composite panel's stiffness will usually depend upon the orientation of the applied forces and/or moments. The composite's strength is bounded by two loading conditions, as shown in the plot to the right.
Isostrain rule of mixtures
If both the fibres and matrix are aligned parallel to the loading direction, the deformation of both phases will be the same (assuming there is no delamination at the fibre-matrix interface). This isostrain condition provides the upper bound for composite strength, and is determined by the rule of mixtures:

$E_C = \sum_i V_i E_i$

where $E_C$ is the effective composite Young's modulus, and $V_i$ and $E_i$ are the volume fraction and Young's modulus, respectively, of the composite phases.
For example, for a composite material made up of α and β phases under isostrain, as shown in the figure to the right, the Young's modulus would be:

$E_C = V_\alpha E_\alpha + V_\beta E_\beta$

where $V_\alpha$ and $V_\beta$ are the respective volume fractions of each phase.
This can be derived by considering that in the isostrain case,

$\epsilon_c = \epsilon_\alpha = \epsilon_\beta$

Assuming that the composite has a uniform cross section, the stress on the composite is a weighted average between the two phases,

$\sigma_c = V_\alpha \sigma_\alpha + V_\beta \sigma_\beta$

The stresses in the individual phases are given by Hooke's law,

$\sigma_\alpha = E_\alpha \epsilon_c, \qquad \sigma_\beta = E_\beta \epsilon_c$

Combining these equations gives that the overall stress in the composite is

$\sigma_c = (V_\alpha E_\alpha + V_\beta E_\beta)\,\epsilon_c$

Then it can be shown that

$E_C = V_\alpha E_\alpha + V_\beta E_\beta$
Isostress rule of mixtures
The lower bound is dictated by the isostress condition, in which the fibres and matrix are oriented perpendicularly to the loading direction:

$\sigma_c = \sigma_\alpha = \sigma_\beta$

and now the strains become a weighted average

$\epsilon_c = V_\alpha \epsilon_\alpha + V_\beta \epsilon_\beta$

Rewriting Hooke's law for the individual phases,

$\epsilon_\alpha = \frac{\sigma_c}{E_\alpha}, \qquad \epsilon_\beta = \frac{\sigma_c}{E_\beta}$

this leads to

$\epsilon_c = \sigma_c \left( \frac{V_\alpha}{E_\alpha} + \frac{V_\beta}{E_\beta} \right)$

From the definition of Hooke's law,

$E_C = \left( \frac{V_\alpha}{E_\alpha} + \frac{V_\beta}{E_\beta} \right)^{-1}$

and, in general,

$E_C = \left( \sum_i \frac{V_i}{E_i} \right)^{-1}$
Following the example above, if one had a composite material made up of α and β phases under isostress conditions as shown in the figure to the right, the composite Young's modulus would be:

$E_C = \left( \frac{V_\alpha}{E_\alpha} + \frac{V_\beta}{E_\beta} \right)^{-1}$

The isostrain condition implies that under an applied load, both phases experience the same strain but will feel different stress. Comparatively, under isostress conditions both phases will feel the same stress but the strains will differ between each phase. A generalized equation for any loading condition between isostrain and isostress can be written as:

$X_c^{\,n} = V_m X_m^{\,n} + V_r X_r^{\,n}$
where X is a material property such as modulus or stress, c, m, and r stand for the properties of the composite, matrix, and reinforcement materials respectively, and n is a value between 1 and −1.
The above equation can be further generalized beyond a two-phase composite to an m-component system:

$X_c^{\,n} = \sum_{i=1}^{m} V_i X_i^{\,n}$
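The following is a minimal numeric sketch of these bounds and of the generalized mixture rule; the carbon-fibre and epoxy moduli and the fibre fraction are assumed, illustrative values.

```python
# Isostrain (upper) and isostress (lower) bounds on composite modulus, plus the
# generalized mixture rule with exponent n between -1 and 1.
# Phase moduli and volume fraction below are illustrative assumptions.

def isostrain_modulus(Vf, Ef, Em):
    """Rule of mixtures (isostrain upper bound)."""
    return Vf * Ef + (1.0 - Vf) * Em

def isostress_modulus(Vf, Ef, Em):
    """Inverse rule of mixtures (isostress lower bound)."""
    return 1.0 / (Vf / Ef + (1.0 - Vf) / Em)

def generalized_modulus(Vf, Ef, Em, n):
    """X_c^n = Vf*Ef^n + Vm*Em^n, with n = 1 giving isostrain and n = -1 isostress."""
    return (Vf * Ef**n + (1.0 - Vf) * Em**n) ** (1.0 / n)

Ef, Em, Vf = 230e9, 3.5e9, 0.55   # Pa, Pa, fibre fraction (assumed carbon fibre in epoxy)

print(f"isostrain : {isostrain_modulus(Vf, Ef, Em) / 1e9:6.1f} GPa")
print(f"isostress : {isostress_modulus(Vf, Ef, Em) / 1e9:6.1f} GPa")
print(f"n = 0.5   : {generalized_modulus(Vf, Ef, Em, 0.5) / 1e9:6.1f} GPa")
```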
Though composite stiffness is maximized when fibres are aligned with the loading direction, so is the possibility of fibre tensile fracture, assuming the tensile strength exceeds that of the matrix. When a fibre has some angle of misorientation θ, several fracture modes are possible. For small values of θ the stress required to initiate fracture is increased by a factor of 1/cos²θ, because the resolved force along the fibre is reduced (F cos θ) while the cross-sectional area over which it acts is increased (A/cos θ), leading to a composite tensile strength of σ_parallel/cos²θ, where σ_parallel is the tensile strength of the composite with fibres aligned parallel to the applied force.
Intermediate angles of misorientation θ lead to matrix shear failure. Again the cross-sectional area is modified, but since shear stress is now the driving force for failure, the area of the matrix parallel to the fibres is of interest, increasing by a factor of 1/sin θ. Similarly, the force parallel to this area decreases (F cos θ), leading to a total tensile strength of τ_my/(sin θ cos θ), where τ_my is the matrix shear strength.
Finally, for large values of θ (near π/2), transverse matrix failure is the most likely to occur, since the fibres no longer carry the majority of the load. Still, the tensile strength will be greater than for the purely perpendicular orientation, since the force perpendicular to the fibres decreases by a factor of sin θ while the area over which it acts increases by a factor of 1/sin θ, producing a composite tensile strength of σ_perp/sin²θ, where σ_perp is the tensile strength of the composite with fibres aligned perpendicular to the applied force.
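These three expressions can be combined into a small sketch that, for an assumed set of strengths, evaluates all three candidate failure stresses at a given misorientation angle and reports the lowest (governing) one. The strength values are illustrative assumptions, not data for any specific laminate.

```python
# Candidate failure stresses for an aligned-fibre composite loaded at an angle
# theta to the fibres; the operative strength is the lowest of the three.
# All strength values are assumed, illustrative numbers.
import math

sigma_parallel = 1000.0   # MPa, strength with fibres parallel to the load (assumed)
tau_matrix     = 40.0     # MPa, matrix shear strength (assumed)
sigma_perp     = 30.0     # MPa, transverse strength (assumed)

def failure_stress(theta_deg):
    t = math.radians(theta_deg)
    modes = {
        "fibre tensile fracture": sigma_parallel / math.cos(t) ** 2,
        "matrix shear failure":   tau_matrix / (math.sin(t) * math.cos(t)),
        "transverse fracture":    sigma_perp / math.sin(t) ** 2,
    }
    return min(modes.items(), key=lambda kv: kv[1])

for angle in (5, 15, 45, 75):
    mode, stress = failure_stress(angle)
    print(f"theta = {angle:2d} deg -> {stress:7.1f} MPa ({mode})")
```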
The majority of commercial composites are formed with random dispersion and orientation of the strengthening fibres, in which case the composite Young's modulus will fall between the isostrain and isostress bounds. However, in applications where the strength-to-weight ratio is engineered to be as high as possible (such as in the aerospace industry), fibre alignment may be tightly controlled.
Panel stiffness is also dependent on the design of the panel. For instance, the fibre reinforcement and matrix used, the method of panel build, thermoset versus thermoplastic, and type of weave.
In contrast to composites, isotropic materials (for example, aluminium or steel), in standard wrought forms, typically possess the same stiffness regardless of the directional orientation of the applied forces and/or moments. The relationship between forces/moments and strains/curvatures for an isotropic material can be described with the following material properties: Young's modulus, the shear modulus, and Poisson's ratio, in relatively simple mathematical relationships. For an anisotropic material, the mathematics of a fourth-order stiffness tensor and up to 21 material property constants is required. For the special case of orthogonal isotropy (orthotropy), there are three distinct material property constants for each of Young's modulus, shear modulus and Poisson's ratio, giving a total of 9 constants to express the relationship between forces/moments and strains/curvatures.
Techniques that take advantage of the materials' anisotropic properties include mortise and tenon joints (in natural composites such as wood) and pi joints in synthetic composites.
Mechanical properties of composites
Particle reinforcement
In general, particle reinforcement strengthens composites less than fiber reinforcement does. It is used to enhance the stiffness of composites while also increasing their strength and toughness. Because of their mechanical properties, particle-reinforced composites are used in applications in which wear resistance is required. For example, the hardness of cement can be increased drastically by reinforcing it with gravel particles. Particle reinforcement is a highly advantageous method of tuning the mechanical properties of materials, since it is very easy to implement and low in cost.
The elastic modulus of particle-reinforced composites can be expressed as

$E_c = V_m E_m + K_c V_p E_p$

where E is the elastic modulus and V is the volume fraction. The subscripts c, p and m indicate the composite, particle and matrix, respectively. $K_c$ is a constant that can be found empirically.
Similarly, the tensile strength of particle-reinforced composites can be expressed as

$(T.S.)_c = V_m (T.S.)_m + K_s V_p (T.S.)_p$

where T.S. is the tensile strength, and $K_s$ is a constant (not equal to $K_c$) that can be found empirically.
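A minimal sketch of these two empirical mixture rules is given below; the particle fraction, phase properties, and the constants K_c and K_s are all assumed, illustrative values that would in practice be fitted to experiments.

```python
# Sketch of the empirical mixture rules quoted above for a particle-reinforced
# composite.  K_c, K_s and all phase properties are assumed values.

def particle_modulus(Vp, Em, Ep, Kc):
    """E_c = Vm*Em + Kc*Vp*Ep with Vm = 1 - Vp (moduli in GPa)."""
    return (1.0 - Vp) * Em + Kc * Vp * Ep

def particle_strength(Vp, TSm, TSp, Ks):
    """(T.S.)_c = Vm*(T.S.)_m + Ks*Vp*(T.S.)_p with Vm = 1 - Vp (strengths in MPa)."""
    return (1.0 - Vp) * TSm + Ks * Vp * TSp

Vp = 0.30                       # 30% particles by volume (assumed)
print(f"E_c    ~ {particle_modulus(Vp, Em=3.0, Ep=70.0, Kc=0.5):.1f} GPa")
print(f"T.S._c ~ {particle_strength(Vp, TSm=60.0, TSp=200.0, Ks=0.3):.1f} MPa")
```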
Continuous fiber reinforcement
In general, continuous fiber reinforcement is implemented by incorporating a fiber as the strong phase into a weak phase, the matrix. Fibers are popular because materials with extraordinary strength can be obtained in fiber form. Non-metallic fibers usually show a very high strength-to-density ratio compared to metal fibers because of the covalent nature of their bonds. The most famous example is carbon fiber, which has many applications extending from sports gear to protective equipment to the space industry.
The stress on the composite can be expressed in terms of the volume fractions of the fiber and the matrix:

$\sigma_c = V_f \sigma_f + V_m \sigma_m$

where σ is the stress and V is the volume fraction. The subscripts c, f and m indicate the composite, fiber and matrix, respectively.
Although the stress–strain behavior of fiber composites can only be determined by testing, there is an expected trend: three stages of the stress–strain curve. The first stage is the region of the stress–strain curve where both the fiber and the matrix are elastically deformed. This linearly elastic region can be expressed in the following form:

$\sigma_c = E_c \epsilon_c = (V_f E_f + V_m E_m)\,\epsilon_c$

where σ is the stress, ε is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate the composite, fiber, and matrix, respectively.
After passing the elastic region for both the fiber and the matrix, the second region of the stress–strain curve can be observed. In the second region, the fiber is still elastically deformed while the matrix is plastically deformed, since the matrix is the weak phase. The instantaneous modulus can be determined using the slope of the stress–strain curve in the second region. The relationship between stress and strain can be expressed as

$\sigma_c = V_f E_f \epsilon_c + V_m \sigma_m(\epsilon_c)$

where σ is the stress, ε is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate the composite, fiber, and matrix, respectively. To find the modulus in the second region, the derivative of this equation can be used, since the slope of the curve is equal to the modulus:

$E_c' = \frac{d\sigma_c}{d\epsilon_c} = V_f E_f + V_m \frac{d\sigma_m}{d\epsilon_c}$

In most cases it can be assumed that $E_c' \approx V_f E_f$, since the second term is much smaller than the first.
In reality, the derivative of stress with respect to strain is not always returning the modulus because of the binding interaction between the fiber and matrix. The strength of the interaction between these two phases can result in changes in the mechanical properties of the composite. The compatibility of the fiber and matrix is a measure of internal stress.
Covalently bonded high-strength fibers (e.g. carbon fibers) experience mostly elastic deformation before fracture, since plastic deformation, which occurs by dislocation motion, is very limited in them. In contrast, metallic fibers have more scope to deform plastically, so their composites exhibit a third stage where both the fiber and the matrix are plastically deforming. Metallic fibers also have many applications requiring work at cryogenic temperatures, which is one of the advantages of composites with metal fibers over nonmetallic ones. The stress in this region of the stress–strain curve can be expressed as

$\sigma_c = V_f \sigma_f(\epsilon_c) + V_m \sigma_m(\epsilon_c)$

where σ is the stress, ε is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate the composite, fiber, and matrix, respectively; $\sigma_f(\epsilon_c)$ and $\sigma_m(\epsilon_c)$ are the fiber and matrix flow stresses, respectively. Just after the third region, the composite exhibits necking. The necking strain of the composite falls between the necking strains of the fiber and the matrix, just like other mechanical properties of the composites. The necking strain of the weak phase is delayed by the strong phase, and the amount of the delay depends upon the volume fraction of the strong phase.
Thus, the tensile strength of the composite can be expressed in terms of the volume fraction.
where T.S. is the tensile strength, σ is the stress, ε is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate the composite, fiber, and matrix, respectively. The composite tensile strength can be expressed as

$(T.S.)_c = V_m (T.S.)_m$ for $V_f$ less than or equal to $V_c$ (an arbitrary critical value of volume fraction)

$(T.S.)_c = V_f (T.S.)_f + V_m \sigma_m^{*}$ for $V_f$ greater than or equal to $V_c$

where $\sigma_m^{*}$ is the matrix stress at the strain at which the fibers fail. The critical value of volume fraction can be expressed as

$V_c = \frac{(T.S.)_m - \sigma_m^{*}}{(T.S.)_f + (T.S.)_m - \sigma_m^{*}}$

Evidently, the composite tensile strength can be higher than that of the matrix if $V_f$ is greater than a minimum value. Thus, the minimum volume fraction of the fiber can be expressed as

$V_{\min} = \frac{(T.S.)_m - \sigma_m^{*}}{(T.S.)_f - \sigma_m^{*}}$
Although this minimum value is very low in practice, it is very important to know, since the purpose of incorporating continuous fibers is to improve the mechanical properties of the composite, and this volume fraction is the threshold for that improvement. A numeric sketch is given below.
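The sketch evaluates the crossover fraction V_c, the minimum fraction V_min, and the piecewise tensile strength for assumed fiber and matrix strengths; sigma_m_star denotes the matrix stress at the fiber failure strain, and all values are illustrative assumptions.

```python
# Piecewise composite tensile strength and the critical / minimum fiber volume
# fractions described above.  All strengths are assumed, illustrative values.

TS_f = 3000.0        # MPa, fiber tensile strength (assumed)
TS_m = 80.0          # MPa, matrix tensile strength (assumed)
sigma_m_star = 50.0  # MPa, matrix stress at the fiber failure strain (assumed)

V_c   = (TS_m - sigma_m_star) / (TS_f + TS_m - sigma_m_star)   # branch crossover
V_min = (TS_m - sigma_m_star) / (TS_f - sigma_m_star)          # strengthening threshold

def composite_TS(Vf):
    """Matrix-controlled strength below V_c, fiber-controlled above it."""
    if Vf <= V_c:
        return (1.0 - Vf) * TS_m
    return Vf * TS_f + (1.0 - Vf) * sigma_m_star

print(f"V_c = {V_c:.4f}, V_min = {V_min:.4f}")
for Vf in (0.005, V_min, 0.3, 0.6):
    print(f"Vf = {Vf:.3f} -> T.S. ~ {composite_TS(Vf):.0f} MPa")
```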
The effect of fiber orientation
Aligned fibers
A change in the angle between the applied stress and the fiber orientation will affect the mechanical properties of fiber-reinforced composites, especially the tensile strength. This angle, θ, can be used to predict the dominant tensile fracture mechanism.
At small angles (θ close to 0°), the dominant fracture mechanism is the same as with load-fiber alignment: tensile fracture. The resolved force acting along the length of the fibers is reduced by a factor of cos θ by the rotation, while the resolved area on which the fiber experiences the force is increased by a factor of 1/cos θ. Taking the aligned tensile strength to be $\sigma_\parallel$, the effective tensile strength is $\sigma^{*} = \sigma_\parallel/\cos^2\theta$.
At moderate angles, the material experiences shear failure. The effective force in the shear direction is reduced with respect to the aligned direction, scaling as cos θ, and the resolved area on which the force acts scales as 1/sin θ. The resulting tensile strength depends on the shear strength of the matrix, $\tau_{my}$: $\sigma^{*} = \tau_{my}/(\sin\theta\cos\theta)$.
At extreme angles (θ close to 90°), the dominant mode of failure is tensile fracture in the matrix in the perpendicular direction. As in the isostress case of layered composite materials, the strength in this direction is lower than in the aligned direction. The effective force and area each pick up a factor of sin θ relative to the aligned direction, so the resolved tensile strength is proportional to the transverse strength: $\sigma^{*} = \sigma_\perp/\sin^2\theta$.
The critical angles at which the dominant fracture mechanism changes can be calculated as

$\theta_{c1} = \tan^{-1}\!\left(\frac{\tau_{my}}{\sigma_\parallel}\right), \qquad \theta_{c2} = \tan^{-1}\!\left(\frac{\sigma_\perp}{\tau_{my}}\right)$

where $\theta_{c1}$ is the critical angle between longitudinal fracture and shear failure, and $\theta_{c2}$ is the critical angle between shear failure and transverse fracture.
By ignoring length effects, this model is most accurate for continuous fibers and does not effectively capture the strength-orientation relationship for short-fiber-reinforced composites. Furthermore, most realistic systems do not experience the local maxima predicted at the critical angles. The Tsai-Hill criterion provides a more complete description of fiber composite tensile strength as a function of orientation angle by coupling the contributing yield stresses: $\sigma_\parallel$, $\sigma_\perp$, and $\tau_{my}$.
Randomly oriented fibers
Anisotropy in the tensile strength of fiber reinforced composites can be removed by randomly orienting the fiber directions within the material. It sacrifices the ultimate strength in the aligned direction for an overall, isotropically strengthened material.
$E_c = K V_f E_f + V_m E_m$

where K is an empirically determined reinforcement factor, similar to the constant in the particle reinforcement equation. For fibers with randomly distributed orientations in a plane, K is approximately 3/8, and for a random distribution in three dimensions, K is approximately 1/5.
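A small sketch of this estimate, using assumed phase moduli and fiber fraction, shows how much stiffness is given up relative to the aligned case.

```python
# Reinforcement-factor estimate for randomly oriented fibers, compared with the
# fully aligned case.  Phase moduli and fiber fraction are assumed values.

E_f, E_m, V_f = 72.0, 3.5, 0.3          # GPa, GPa, fiber volume fraction (assumed)

for K, case in ((1.0, "aligned, loaded along fibers"),
                (3.0 / 8.0, "random in-plane"),
                (1.0 / 5.0, "random in 3D")):
    E_c = K * V_f * E_f + (1.0 - V_f) * E_m
    print(f"{case:30s}: E_c ~ {E_c:.1f} GPa")
```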
Stiffness and Compliance Elasticity
For real applications, most composites are anisotropic or orthotropic materials, and the three-dimensional stress tensor is required for stress and strain analysis. The stiffness and compliance relations can be written as follows
and
In order to simplify the 3D stress state, the plane stress assumption is applied: the out-of-plane stress and the out-of-plane shear strains are taken to be insignificant or zero. That is, $\sigma_3 = \tau_{23} = \tau_{31} = 0$ and $\gamma_{23} = \gamma_{31} = 0$.
The stiffness matrix and compliance matrix can be reduced to
and
For a fiber-reinforced composite, the fiber orientation in the material affects the anisotropic properties of the structure. Characterization techniques such as tensile testing measure material properties in the sample (1-2) coordinate system, and the tensors above express the stress-strain relationship in that (1-2) coordinate system, while the known material properties are given in the principal coordinate system (x-y) of the material. Transforming the tensors between the two coordinate systems helps identify the material properties of the tested sample. The transformation matrix for a rotation by angle θ is
$[T] = \begin{bmatrix} c^2 & s^2 & 2cs \\ s^2 & c^2 & -2cs \\ -cs & cs & c^2 - s^2 \end{bmatrix}$ for the stress components, and $[T'] = \begin{bmatrix} c^2 & s^2 & cs \\ s^2 & c^2 & -cs \\ -2cs & 2cs & c^2 - s^2 \end{bmatrix}$ for the engineering strain components, where $c = \cos\theta$ and $s = \sin\theta$.
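The sketch below applies this kind of transformation numerically: it rotates the reduced (plane-stress) stiffness matrix of an orthotropic lamina from its principal material axes to an off-axis direction using the classical laminate theory relation Qbar = T⁻¹ Q R T R⁻¹, where R is the Reuter matrix for engineering strain. The lamina properties are assumed, illustrative values.

```python
# Rotate the reduced (plane-stress) stiffness matrix Q of an orthotropic lamina
# from its principal material axes to an off-axis direction.
# Lamina properties are assumed, illustrative values.
import numpy as np

E1, E2, G12, nu12 = 140e9, 10e9, 5e9, 0.3       # Pa, assumed carbon/epoxy lamina
nu21 = nu12 * E2 / E1
denom = 1.0 - nu12 * nu21

# Reduced stiffness matrix in the principal material coordinate system.
Q = np.array([[E1 / denom,        nu12 * E2 / denom, 0.0],
              [nu12 * E2 / denom, E2 / denom,        0.0],
              [0.0,               0.0,               G12]])

R = np.diag([1.0, 1.0, 2.0])                     # Reuter matrix (engineering strain)

def Qbar(theta_deg):
    """Off-axis reduced stiffness matrix for a rotation of theta degrees."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    T = np.array([[c * c,  s * s,  2 * c * s],
                  [s * s,  c * c, -2 * c * s],
                  [-c * s, c * s,  c * c - s * s]])
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

for angle in (0, 45, 90):
    print(f"theta = {angle:2d} deg, Qbar_11 ~ {Qbar(angle)[0, 0] / 1e9:6.1f} GPa")
```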
Types of fibers and mechanical properties
The most common types of fibers used in industry are glass fibers, carbon fibers, and Kevlar, due to their ease of production and availability. Their mechanical properties are very important to know; therefore, a table of their mechanical properties is given below to compare them with S97 steel. The angle of fiber orientation is very important because of the anisotropy of fiber composites (please see the section "Physical properties" for a more detailed explanation). The mechanical properties of the composites can be tested using standard mechanical testing methods by positioning the samples at various angles (the standard angles are 0°, 45°, and 90°) with respect to the orientation of fibers within the composites. In general, 0° axial alignment makes composites resistant to longitudinal bending and axial tension/compression, 90° hoop alignment is used to obtain resistance to internal/external pressure, and ±45° is the ideal choice to obtain resistance against pure torsion.
Mechanical properties of fiber composite materials
Carbon fiber & fiberglass composites vs. aluminum alloy and steel
Although the strength and stiffness of steel and aluminium alloys are comparable to those of fiber composites, the specific strength and stiffness of composites (i.e. in relation to their weight) are significantly higher.
Failure
Shock, impact of varying speed, or repeated cyclic stresses can cause the laminate to separate at the interface between two layers, a condition known as delamination. Individual fibres can also separate from the matrix, for example by fibre pull-out.
Composites can fail on the macroscopic or microscopic scale. Compression failures can happen at both the macro scale and at the level of each individual reinforcing fibre, as compression buckling. Tension failures can be net-section failures of the part or degradation of the composite at a microscopic scale, where one or more of the layers in the composite fail in tension of the matrix or through failure of the bond between the matrix and fibres.
Some composites are brittle and possess little reserve strength beyond the initial onset of failure, while others may have large deformations and reserve energy-absorbing capacity past the onset of damage. The distinctions in fibres and matrices that are available, and the mixtures that can be made with blends, leave a very broad range of properties that can be designed into a composite structure. The most famous failure of a brittle ceramic matrix composite occurred when the carbon-carbon composite tile on the leading edge of the wing of the Space Shuttle Columbia fractured when impacted during take-off. This led to the catastrophic break-up of the vehicle when it re-entered the Earth's atmosphere on 1 February 2003.
Composites have relatively poor bearing strength compared to metals.
Testing
Composites are tested before and after construction to assist in predicting and preventing failures. Pre-construction testing may adopt finite element analysis (FEA) for ply-by-ply analysis of curved surfaces and predicting wrinkling, crimping and dimpling of composites. Materials may be tested during manufacturing and after construction by various non-destructive methods including ultrasonic, thermography, shearography and X-ray radiography, and laser bond inspection for NDT of relative bond strength integrity in a localized area.
| Technology | Material and chemical | null |
157626 | https://en.wikipedia.org/wiki/Peregrine%20falcon | Peregrine falcon | The peregrine falcon (Falco peregrinus), also known simply as the peregrine, is a cosmopolitan bird of prey (raptor) in the family Falconidae. A large, crow-sized falcon, it has a blue-grey back, barred white underparts, and a black head. The peregrine is renowned for its speed. It can reach over during its characteristic hunting stoop (high-speed dive), making it the fastest animal on the planet. According to a National Geographic TV program, the highest measured speed of a peregrine falcon is . However, radar tracks have never confirmed this; the maximum speed reliably measured is , and nobody has been able to present unimpeachable measurements of speeds even close to the "well-known" . As is typical for bird-eating (avivore) raptors, peregrine falcons are sexually dimorphic, with females being considerably larger than males. Historically, it has also been known as "black-cheeked falcon" in Australia, and "duck hawk" in North America.
The breeding range includes land regions from the Arctic tundra to the tropics. It can be found nearly everywhere on Earth, except extreme polar regions, very high mountains, and most tropical rainforests; the only major ice-free landmass from which it is entirely absent is New Zealand. This makes it the world's most widespread raptor and one of the most widely found wild bird species. In fact, the only land-based bird species found over a larger geographic area owes its success to human-led introduction; the domestic and feral pigeons are both domesticated forms of the rock dove, a major prey species for Eurasian Peregrine populations. Due to their abundance over most other bird species in cities, feral pigeons support many peregrine populations as a staple food source, especially in urban settings.
The peregrine is a highly successful example of urban wildlife in much of its range, taking advantage of tall buildings as nest sites and an abundance of prey such as pigeons and ducks. Both the English and scientific names of this species mean "wandering falcon", referring to the migratory habits of many northern populations. A total of 18 or 19 regional subspecies are accepted, which vary in appearance; disagreement existed in the past over whether the distinctive Barbary falcon was represented by two subspecies of Falco peregrinus or was a separate species, F. pelegrinoides, and several of the other subspecies were originally described as species. The genetic differential between them (and also the difference in their appearance) is very small, only about 0.6–0.8% genetically differentiated, showing the divergence is relatively recent, during the time of the Last Ice Age; all the major ornithological authorities now treat the barbary falcon as a subspecies.
Although its diet consists almost exclusively of medium-sized birds, the peregrine will sometimes hunt small mammals, small reptiles, or even insects. Reaching sexual maturity at one year, it mates for life and nests in a scrape, normally on cliff edges or, in recent times, on tall human-made structures. The peregrine falcon became an endangered species in many areas because of the widespread use of certain pesticides, especially DDT. Since the ban on DDT from the early 1970s, populations have recovered, supported by large-scale protection of nesting places and releases to the wild.
The peregrine falcon is a well-respected falconry bird due to its strong hunting ability, high trainability, versatility, and availability via captive breeding. It is effective on most game bird species, from small to large. It has also been used as a religious, royal, or national symbol across multiple eras and areas of human civilization.
Description
The peregrine falcon has a body length of and a wingspan from . The male and female have similar markings and plumage but, as with many birds of prey, the peregrine falcon displays marked sexual dimorphism in size, with the female measuring up to 30% larger than the male. Males weigh and the noticeably larger females weigh . In most subspecies, males weigh less than and females weigh more than , and cases of females weighing about 50% more than their male breeding mates are not uncommon. The standard linear measurements of peregrines are: the wing chord measures , the tail measures and the tarsus measures .
The back and the long pointed wings of the adult are usually bluish black to slate grey with indistinct darker barring (see "Subspecies" below); the wingtips are black. The white to rusty underparts are barred with thin clean bands of dark brown or black. The tail, coloured like the back but with thin clean bars, is long, narrow, and rounded at the end with a black tip and a white band at the very end. The top of the head and a "moustache" along the cheeks are black, contrasting sharply with the pale sides of the neck and white throat. The cere is yellow, as are the feet, and the beak and claws are black. The upper beak is notched near the tip, an adaptation which enables falcons to kill prey by severing the spinal column at the neck. An immature bird is much browner, with streaked, rather than barred, underparts, and has a pale bluish cere and orbital ring.
A study shows that their black malar stripe exists to reduce glare from solar radiation, allowing them to see better. Photos from The Macaulay Library and iNaturalist showed that the malar stripe is thicker where there is more solar radiation. That supports the solar glare hypothesis.
Taxonomy and systematics
Falco peregrinus was first described under its current binomial name by English ornithologist Marmaduke Tunstall in his 1771 work Ornithologia Britannica. The scientific name Falco peregrinus is a Medieval Latin phrase that was used by Albertus Magnus in 1225. Peregrinus is Latin, meaning "one from abroad" or "coming from foreign parts". It is likely the name was used as juvenile birds were taken while journeying to their breeding location (rather than from the nest), as falcon nests are often difficult to get at. The Latin term for falcon, , is related to , meaning "sickle", in reference to the silhouette of the falcon's long, pointed wings in flight.
The peregrine falcon belongs to a genus whose lineage includes the hierofalcons and the prairie falcon (F. mexicanus). This lineage probably diverged from other falcons towards the end of the Late Miocene or in the Late Pliocene, about 3–8 million years ago (mya). As the peregrine-hierofalcon group includes both Old World and North American species, it is likely that the lineage originated in western Eurasia or Africa. Its relationship to other falcons is not clear, as the issue is complicated by widespread hybridization confounding mtDNA sequence analyses. One genetic lineage of the saker falcon (F. cherrug) is known to have originated from a male saker ancestor producing fertile young with a female peregrine ancestor, and the descendants further breeding with sakers.
Subspecies
Numerous subspecies of Falco peregrinus have been described, with 18 accepted by the IOC World Bird List, and 19 accepted by the 1994 Handbook of the Birds of the World, which considers the Barbary falcon of the Canary Islands and coastal North Africa to be two subspecies (F. p. pelegrinoides and F. p. babylonicus) of Falco peregrinus, rather than a distinct species, F. pelegrinoides. The following map shows the general ranges of these 19 subspecies.
Falco peregrinus anatum, described by Bonaparte in 1838, is known as the American peregrine falcon or "duck hawk"; its scientific name means "duck peregrine falcon". At one time, it was partly included in F. p. leucogenys. It is mainly found in the Rocky Mountains. It was formerly common throughout North America between the tundra and northern Mexico, where current reintroduction efforts are being made to restore the population. Most mature F. p. anatum, except those that breed in more northern areas, winter in their breeding range. Most vagrants that reach western Europe seem to belong to the more northern and strongly migratory F. p. tundrius, only considered distinct since 1968. It is similar to the nominate subspecies but is slightly smaller; adults are somewhat paler and less patterned below, but juveniles are darker and more patterned below. Males weigh , while females weigh . It became regionally extinct in eastern North America in the mid 20th century, and populations there now are hybrids as a result of reintroductions of birds from elsewhere.
Falco peregrinus babylonicus, described by P.L. Sclater in 1861, is found in eastern Iran along the Hindu Kush and the Tian Shan to the Mongolian Altai ranges. A few birds winter in northern and northwestern India, mainly in dry semi-desert habitats. It is paler than F. p. pelegrinoides and similar to a small, pale lanner falcon (Falco biarmicus). Males weigh , while females weigh .
Falco peregrinus brookei, described by Sharpe in 1873, is also known as the Mediterranean peregrine falcon or the Maltese falcon. It includes F. p. caucasicus and most specimens of the proposed race F. p. punicus, though others may be F. p. pelegrinoides (Barbary falcons), or perhaps the rare hybrids between these two which might occur around Algeria. They occur from the Iberian Peninsula around the Mediterranean, except in arid regions, to the Caucasus. They are non-migratory. It is smaller than the nominate subspecies and the underside usually has a rusty hue. Males weigh around , while females weigh up to .
Falco peregrinus calidus, described by John Latham in 1790, was formerly called F. p. leucogenys and includes F. p. caeruleiceps. It breeds in the Arctic tundra of Eurasia from Murmansk Oblast to roughly the Yana and Indigirka Rivers, Siberia. It is completely migratory and travels south in winter as far as South Asia and sub-Saharan Africa. It is often seen around wetland habitats. It is paler than the nominate subspecies, especially on the crown. Males weigh , while females weigh .
Falco peregrinus cassini, described by Sharpe in 1873, is also known as the austral peregrine falcon. It includes F. p. kreyenborgi, the pallid falcon, a leucistic colour morph occurring in southernmost South America, which was long believed to be a distinct species. Its range includes South America from Ecuador through Bolivia, northern Argentina and Chile to Tierra del Fuego and the Falkland Islands. It is non-migratory. It is similar to the nominate subspecies, but slightly smaller with a black ear region. The pallid falcon morph F. p. kreyenborgi is medium grey above, has little barring below and has a head pattern like the saker falcon (Falco cherrug), but the ear region is white.
Falco peregrinus ernesti, described by Sharpe in 1894, is found from the Sunda Islands to the Philippines and south to eastern New Guinea and the nearby Bismarck Archipelago. Its geographical separation from F. p. nesiotes requires confirmation. It is non-migratory. It differs from the nominate subspecies in the very dark, dense barring on its underside and its black ear coverts.
Falco peregrinus furuitii, described by Momiyama in 1927, is found on the Izu and Ogasawara Islands south of Honshū, Japan. It is non-migratory. It is very rare and may only remain on a single island. It is a dark form, resembling F. p. pealei in colour, but darker, especially on the tail.
Falco peregrinus japonensis, described by Gmelin in 1788, includes F. p. kleinschmidti, F. p. pleskei, and F. p. harterti, and seems to refer to intergrades with F. p. calidus. It is found from northeast Siberia to Kamchatka (though it is possibly replaced by F. p. pealei on the coast there) and Japan. Northern populations are migratory, while those of Japan are resident. It is similar to the nominate subspecies, but the young are even darker than those of F. p. anatum.
Falco peregrinus macropus, described by Swainson in 1837, is the Australian peregrine falcon or "black-cheeked falcon". It is found in Australia in all regions except the southwest, where it is replaced by F. p. submelanogenys; some authorities treat the latter as a synonym of F. p. macropus. It is non-migratory. It is similar to F. p. brookei in appearance, but is slightly smaller and the ear region is entirely black. The feet are proportionally large.
Falco peregrinus madens, described by Ripley and Watson in 1963, is unusual in having some sexual dichromatism. If the Barbary falcon (see below) is considered a distinct species, it is sometimes placed therein. It is found in the Cape Verde Islands and is non-migratory; it is also endangered, with only six to eight pairs surviving. Males have a rufous wash on the crown, nape, ears and back; the underside is conspicuously washed pinkish-brown. Females are tinged rich brown overall, especially on the crown and nape.
Falco peregrinus minor, first described by Bonaparte in 1850. It was formerly often known as F. p. perconfusus. It is sparsely and patchily distributed throughout much of sub-Saharan Africa and widespread in Southern Africa. It apparently reaches north along the Atlantic coast as far as Morocco. It is non-migratory and dark-coloured. This is the smallest subspecies, with smaller males weighing as little as approximately .
Falco peregrinus nesiotes, described by Mayr in 1941, is found in Fiji and probably also Vanuatu and New Caledonia. It is non-migratory.
Falco peregrinus pealei, described by Ridgway in 1873, is Peale's falcon and includes F. p. rudolfi. It is found in the Pacific Northwest of North America, northwards from Puget Sound along the British Columbia coast (including the Haida Gwaii), along the Gulf of Alaska and the Aleutian Islands to the far eastern Bering Sea coast of Russia, and may also occur on the Kuril Islands and the coasts of Kamchatka. It is non-migratory. It is the largest subspecies and it looks like an oversized and darker tundrius or like a strongly barred and large F. p. anatum. The bill is very wide. Juveniles occasionally have pale crowns. Males weigh , while females weigh .
Falco peregrinus pelegrinoides, first described by Temminck in 1829, is found in the Canary Islands through North Africa and the Near East to Mesopotamia. It is most similar to F. p. brookei, but is markedly paler above, with a rusty neck, and is a light buff with reduced barring below. It is smaller than the nominate subspecies; females weigh around .
Falco peregrinus peregrinator, described by Sundevall in 1837, is known as the Indian peregrine falcon, black shaheen, Indian shaheen or shaheen falcon. It was formerly sometimes known as Falco atriceps or Falco shaheen. Its range includes South Asia from across the Indian subcontinent to Sri Lanka and southeastern China. In India, the shaheen falcon is reported from all states except Uttar Pradesh, mainly from rocky and hilly regions. The shaheen falcon is also reported from the Andaman and Nicobar Islands in the Bay of Bengal. It has a clutch size of 3 to 4 eggs, with the chicks fledging time of 48 days with an average nesting success of 1.32 chicks per nest. In India, apart from nesting on cliffs, it has also been recorded as nesting on man-made structures such as buildings and cellphone transmission towers. A population estimate of 40 breeding pairs in Sri Lanka was made in 1996. It is non-migratory and is small and dark, with rufous underparts. In Sri Lanka this species is found to favour the higher hills, while the migrant calidus is more often seen along the coast.
Falco peregrinus peregrinus, the nominate (first-named) subspecies, described by Tunstall in 1771, breeds over much of temperate Eurasia between the tundra in the north and the Pyrenees, Mediterranean region and Alpide belt in the south. It is mainly non-migratory in Europe, but migratory in Scandinavia and Asia. Males weigh , while females weigh . It includes F. p. brevirostris, F. p. germanicus, F. p. rhenanus and F. p. riphaeus.
Falco peregrinus radama, described by Hartlaub in 1861, is found in Madagascar and the Comoros. It is non-migratory.
Falco peregrinus submelanogenys, described by Mathews in 1912, is the Southwest Australian peregrine falcon. It is found in southwestern Australia and is non-migratory. Some authorities consider it a synonym of the widespread Australian subspecies F. p. macropus.
Falco peregrinus tundrius, described by C. M. White in 1968, was at one time included in F. p. leucogenys. It is found in the Arctic tundra of North America to Greenland, and migrates to wintering grounds in Central and South America. Most vagrants that reach western Europe belong to this subspecies, which was previously considered synonymous with F. p. anatum. It is the New World equivalent to F. p. calidus. It is smaller and paler than F. p. anatum; most have a conspicuous white forehead and white in the ear region, but the crown and "moustache" are very dark, unlike in F. p. calidus. Juveniles are browner and less grey than in F. p. calidus, and paler, sometimes almost sandy, than in F. p. anatum. Males weigh , while females weigh . Despite its current recognition as a valid subspecies, a population genetic study of both pre-decline (i.e., museum) and recovered contemporary populations failed to distinguish F. p. anatum and F. p. tundrius genetically.
Barbary falcon
The Barbary falcon is a subspecies of the peregrine falcon that inhabits parts of North Africa, from the Canary Islands to the Arabian Peninsula. There was discussion concerning the taxonomic status of the bird, with some considering it a subspecies of the peregrine falcon and others considering it a full species with two subspecies.
Compared to the other peregrine falcon subspecies, Barbary falcons have a slimmer body and a distinct plumage pattern. Despite numbers and range of these birds throughout the Canary Islands generally increasing, they are considered endangered, with human interference through falconry and shooting threatening their well-being. Falconry can further complicate the speciation and genetics of these Canary Islands falcons, as the practice promotes genetic mixing between individuals from outside the islands with those originating from the islands. Population density of the Barbary falcons on Tenerife, the biggest of the seven major Canary Islands, was found to be 1.27 pairs/100 km2, with the mean distance between pairs being 5869 ± 3338 m. The falcons were only observed near large and natural cliffs with a mean altitude of 697.6 m. Falcons show an affinity for tall cliffs away from human-mediated establishments and presence.
Barbary falcons have a red neck patch, but otherwise differ in appearance from the peregrine falcon proper merely according to Gloger's rule, relating pigmentation to environmental humidity. The Barbary falcon has a peculiar way of flying, beating only the outer part of its wings as fulmars sometimes do; this also occurs in the peregrine falcon, but less often and far less pronounced. The Barbary falcon's shoulder and pelvis bones are stout by comparison with the peregrine falcon and its feet are smaller. Barbary falcons breed at different times of year than neighboring peregrine falcon subspecies, but they are capable of interbreeding. There is a 0.6–0.7% genetic distance in the peregrine falcon-Barbary falcon ("peregrinoid") complex.
Ecology and behaviour
The peregrine falcon lives mostly along mountain ranges, river valleys, coastlines, and increasingly in cities. In mild-winter regions, it is usually a permanent resident, and some individuals, especially adult males, will remain on the breeding territory. Only populations that breed in Arctic climates typically migrate great distances during the northern winter.
The peregrine falcon reaches faster speeds than any other animal on the planet when performing the stoop, which involves soaring to a great height and then diving steeply at speeds of over , hitting one wing of its prey so as not to harm itself on impact. The air pressure from such a dive could possibly damage a bird's lungs, but small bony tubercles on a falcon's nostrils are theorized to guide the powerful airflow away from the nostrils, enabling the bird to breathe more easily while diving by reducing the change in air pressure. To protect their eyes, the falcons use their nictitating membranes (third eyelids) to spread tears and clear debris from their eyes while maintaining vision. The distinctive malar stripe or 'moustache', a dark area of feathers below the eyes, is thought to reduce solar glare and improve contrast sensitivity when targeting fast moving prey in bright light condition; the malar stripe has been found to be wider and more pronounced in regions of the world with greater solar radiation supporting this solar glare hypothesis. Peregrine falcons have a flicker fusion frequency of 129 Hz (cycles per second), very fast for a bird of its size, and much faster than mammals. A study testing the flight physics of an "ideal falcon" found a theoretical speed limit at for low-altitude flight and for high-altitude flight. In 2005, Ken Franklin recorded a falcon stooping at a top speed of .
The life span of peregrine falcons in the wild is up to 19 years 9 months. Mortality in the first year is 59–70%, declining to 25–32% annually in adults. Apart from such anthropogenic threats as collision with human-made objects, the peregrine may be killed by larger hawks and owls.
The peregrine falcon is host to a range of parasites and pathogens. It is a vector for Avipoxvirus, Newcastle disease virus, Falconid herpesvirus 1 (and possibly other Herpesviridae), and some mycoses and bacterial infections. Endoparasites include Plasmodium relictum (usually not causing malaria in the peregrine falcon), Strigeidae trematodes, Serratospiculum amaculata (nematode), and tapeworms. Known peregrine falcon ectoparasites are chewing lice, Ceratophyllus garei (a flea), and Hippoboscidae flies (Icosta nigra, Ornithoctona erythrocephala).
Feeding
The peregrine falcon's diet varies greatly and is adapted to available prey in different regions. However, it typically feeds on medium-sized birds such as pigeons and doves, waterfowl, gamebirds, songbirds, parrots, seabirds, and waders. Worldwide, it is estimated that between 1,500 and 2,000 bird species, or roughly a fifth of the world's bird species, are predated somewhere by these falcons. The peregrine falcon preys on the most diverse range of bird species of any raptor in North America, with over 300 species and including nearly 100 shorebirds. Its prey can range from hummingbirds (Selasphorus and Archilochus ssp.) to the sandhill crane, although most prey taken by peregrines weigh between (small passerines) and (ducks, geese, loons, gulls, capercaillies, ptarmigans and other grouse). Smaller hawks (such as sharp-shinned hawks) and owls are regularly predated, as well as smaller falcons such as the American kestrel, merlin and, rarely, other peregrines.
In urban areas, where it tends to nest on tall buildings or bridges, it subsists mostly on a variety of pigeons. Among pigeons, the rock dove or feral pigeon comprises 80% or more of the dietary intake of peregrines. Other common city birds are also taken regularly, including mourning doves, common wood pigeons, common swifts, northern flickers, eurasian collared doves, common starlings, American robins, common blackbirds, and corvids such as magpies, jays or carrion, house, and American crows. Coastal populations of the large subspecies pealei feed almost exclusively on seabirds. In the Brazilian mangrove swamp of Cubatão, a wintering falcon of the subspecies tundrius was observed successfully hunting a juvenile scarlet ibis.
Among mammalian prey species, bats in the genera Eptesicus, Myotis, Pipistrellus and Tadarida are the most common prey taken at night. Though peregrines generally do not prefer terrestrial mammalian prey, in Rankin Inlet, peregrines largely take northern collared lemmings (Dicrostonyx groenlandicus) along with a few Arctic ground squirrels (Urocitellus parryii). Other small mammals including shrews, mice, rats, voles, and squirrels are more seldom taken. Peregrines occasionally take rabbits, mainly young individuals and juvenile hares. Additionally, remains of red fox kits and adult female American marten were found among prey remains. Insects and reptiles such as small snakes make up a small proportion of the diet, and salmonid fish have been taken by peregrines.
The peregrine falcon hunts most often at dawn and dusk, when prey are most active, but also nocturnally in cities, particularly during migration periods when hunting at night may become prevalent. Nocturnal migrants taken by peregrines include species as diverse as yellow-billed cuckoo, black-necked grebe, virginia rail, and common quail. The peregrine requires open space in order to hunt, and therefore often hunts over open water, marshes, valleys, fields, and tundra, searching for prey either from a high perch or from the air. Large congregations of migrants, especially species that gather in the open like shorebirds, can be quite attractive to a hunting peregrine. Once prey is spotted, it begins its stoop, folding back the tail and wings, with feet tucked. Prey is typically struck and captured in mid-air; the peregrine falcon strikes its prey with a clenched foot, stunning or killing it with the impact, then turns to catch it in mid-air. If its prey is too heavy to carry, a peregrine will drop it to the ground and eat it there. If they miss the initial strike, peregrines will chase their prey in a twisting flight.
Although previously thought rare, several cases of peregrines contour-hunting, i.e., using natural contours to surprise and ambush prey on the ground, have been reported and even rare cases of prey being pursued on foot. In addition, peregrines have been documented preying on chicks in nests, from birds such as kittiwakes. Prey is plucked before consumption. A 2016 study showed that the presence of peregrines benefits non-preferred species while at the same time causing a decline in its preferred prey. As of 2018, the fastest recorded falcon was at 242 mph (nearly 390 km/h). Researchers at the University of Groningen in the Netherlands and at Oxford University used 3D computer simulations in 2018 to show that the high speed allows peregrines to gain better maneuverability and precision in strikes.
Reproduction
The peregrine falcon is sexually mature at one to three years of age, but in larger populations they breed after two to three years of age. A pair mates for life and returns to the same nesting spot annually. The courtship flight includes a mix of aerial acrobatics, precise spirals, and steep dives. The male passes prey it has caught to the female in mid-air. To make this possible, the female actually flies upside-down to receive the food from the male's talons.
During the breeding season, the peregrine falcon is territorial; nesting pairs are usually widely spaced, even in areas with large numbers of pairs. The distance between nests ensures sufficient food supply for pairs and their chicks. Within a breeding territory, a pair may have several nesting ledges; the number used by a pair can vary from one or two up to seven in a 16-year period.
The peregrine falcon nests in a scrape, normally on cliff edges. The female chooses a nest site, where she scrapes a shallow hollow in the loose soil, sand, gravel, or dead vegetation in which to lay eggs. No nest materials are added. Cliff nests are generally located under an overhang, on ledges with vegetation. South-facing sites are favoured. In some regions, as in parts of Australia and on the west coast of northern North America, large tree hollows are used for nesting. Before the demise of most European peregrines, a large population of peregrines in central and western Europe used the disused nests of other large birds. In remote, undisturbed areas such as the Arctic, steep slopes and even low rocks and mounds may be used as nest sites. In many parts of its range, peregrines now also nest regularly on tall buildings or bridges; these human-made structures used for breeding closely resemble the natural cliff ledges that the peregrine prefers for its nesting locations.
The pair defends the chosen nest site against other peregrines, and often against ravens, herons, and gulls, and if ground-nesting, also such mammals as foxes, wolverines, felids, bears, wolves, and mountain lions. Both nests and (less frequently) adults are predated by larger-bodied raptorial birds like eagles, large owls, or gyrfalcons. The most serious predators of peregrine nests in North America and Europe are the great horned owl and the Eurasian eagle-owl. When reintroductions have been attempted for peregrines, the most serious impediments were these two species of owls routinely picking off nestlings, fledglings and adults by night. Peregrines defending their nests have managed to kill raptors as large as golden eagles and bald eagles (both of which they normally avoid as potential predators) that have come too close to the nest by ambushing them in a full stoop. In one instance, when a snowy owl killed a newly fledged peregrine, the larger owl was in turn killed by a stooping peregrine parent.
The date of egg-laying varies according to locality, but is generally from February to March in the Northern Hemisphere, and from July to August in the Southern Hemisphere, although the Australian subspecies F. p. macropus may breed as late as November, and equatorial populations may nest anytime between June and December. If the eggs are lost early in the nesting season, the female usually lays another clutch, although this is extremely rare in the Arctic due to the short summer season. Generally three to four eggs, but sometimes as few as one or as many as five, are laid in the scrape. The eggs are white to buff with red or brown markings. They are incubated for 29 to 33 days, mainly by the female, with the male also helping with the incubation of the eggs during the day, but only the female incubating them at night. The average number of young found in nests is 2.5, and the average number that fledge is about 1.5, due to the occasional production of infertile eggs and various natural losses of nestlings.
After hatching, the chicks (called "eyases") are covered with creamy-white down and have disproportionately large feet. The male (called the "tiercel") and the female (simply called the "falcon") both leave the nest to gather prey to feed the young. The hunting territory of the parents can extend a considerable radius from the nest site. Chicks fledge 42 to 46 days after hatching, and remain dependent on their parents for up to two months.
Relationship with humans
Use in falconry
The peregrine falcon is a highly admired falconry bird, and has been used in falconry for more than 3,000 years, beginning with nomads in central Asia. Its advantages in falconry include not only its athleticism and eagerness to hunt, but an equable disposition that leads to it being one of the easier falcons to train. The peregrine falcon has the additional advantage of a natural flight style of circling above the falconer ("waiting on") for game to be flushed, and then performing an effective and exciting high-speed diving stoop to take the quarry. The speed of the stoop not only allows the falcon to catch fast flying birds, it also enhances the falcon's ability to execute maneuvers to catch highly agile prey, and allows the falcon to deliver a knockout blow with a fist-like clenched talon against game that may be much larger than itself.
Additionally the versatility of the species, with agility allowing capture of smaller birds and a strength and attacking style allowing capture of game much larger than themselves, combined with the wide size range of the many peregrine subspecies, means there is a subspecies suitable to almost any size and type of game bird. This size range, evolved to fit various environments and prey species, is from the larger females of the largest subspecies to the smaller males of the smallest subspecies, approximately five to one (approximately 1500 g to 300 g). The males of smaller and medium-sized subspecies, and the females of the smaller subspecies, excel in the taking of swift and agile small game birds such as dove, quail, and smaller ducks. The females of the larger subspecies are capable of taking large and powerful game birds such as the largest of duck species, pheasant, and grouse.
Peregrine falcons handled by falconers are also occasionally used to scare away birds at airports to reduce the risk of bird-plane strikes, improving air-traffic safety. They were also used to intercept homing pigeons during World War II.
Peregrine falcons have been successfully bred in captivity, both for falconry and for release into the wild. Until 2004 nearly all peregrines used for falconry in the US were captive-bred from the progeny of falcons taken before the US Endangered Species Act was enacted and from those few infusions of wild genes available from Canada and special circumstances. Peregrine falcons were removed from the United States' endangered species list in 1999. The successful recovery program was aided by the effort and knowledge of falconers – in collaboration with The Peregrine Fund and state and federal agencies – through a technique called hacking. Finally, after years of close work with the US Fish and Wildlife Service, a limited take of wild peregrines was allowed in 2004, the first wild peregrines taken specifically for falconry in over 30 years.
The development of captive breeding methods has led to peregrines being commercially available for falconry use, thus mostly eliminating the need to capture wild birds for support of falconry. The main reason for taking wild peregrines at this point is to maintain healthy genetic diversity in the breeding lines. Hybrids of peregrines and gyrfalcons are also available that can combine the best features of both species to create what many consider to be the ultimate falconry bird for the taking of larger game such as the sage-grouse. These hybrids combine the greater size, strength, and horizontal speed of the gyrfalcon with the natural propensity to stoop and greater warm weather tolerance of the peregrine.
Today, peregrines are regularly paired in captivity with other species such as the lanner falcon (F. biarmicus) to produce the "perilanner", a bird popular in falconry as it combines the peregrine's hunting skill with the lanner's hardiness, or the gyrfalcon to produce large, strikingly coloured birds for the use of falconers.
Decline due to pesticides
The peregrine falcon became an endangered species over much of its range because of the use of organochlorine pesticides, especially DDT, during the 1950s, '60s, and '70s. Pesticide biomagnification caused organochlorines to build up in the falcons' fat tissues, reducing the amount of calcium in their eggshells. With thinner shells, fewer falcon eggs survived until hatching. In addition, the PCB concentrations found in these falcons depend on the age of the bird: high levels are found even in young birds only a few months old, and concentrations increase further in mature and adult peregrine falcons. These pesticides also caused falcon prey, such as black petrels, to have thinner eggshells. In several parts of the world, such as the eastern United States and Belgium, this species became locally extinct as a result. An alternate point of view is that populations in eastern North America had vanished due to hunting and egg collection. Following the ban of organochlorine pesticides, the reproductive success of peregrines increased in Scotland in terms of territory occupancy and breeding success, although spatial variation in recovery rates indicates that in some areas peregrines were also affected by other factors such as persecution.
Recovery efforts
Peregrine falcon recovery teams breed the species in captivity. The chicks are usually fed through a chute or with a hand puppet mimicking a peregrine's head, so they cannot see and imprint on the human trainers. Then, when they are old enough, the rearing box is opened, allowing the bird to train its wings. As the fledgling gets stronger, feeding is reduced, forcing the bird to learn to hunt. This procedure is called hacking back to the wild. To release a captive-bred falcon, the bird is placed in a special cage at the top of a tower or cliff ledge for several days, allowing it to acclimate itself to its future environment.
Worldwide recovery efforts have been remarkably successful. The widespread restriction of DDT use eventually allowed released birds to breed successfully. The peregrine falcon was removed from the U.S. Endangered Species list on 25 August 1999.
Some controversy has existed over the origins of captive breeding stock used by the Peregrine Fund in the recovery of peregrine falcons throughout the contiguous United States. Several peregrine subspecies were included in the breeding stock, including birds of Eurasian origin. Due to the local extinction of the eastern population of Falco peregrinus anatum, its near-extinction in the Midwest, and the limited gene pool within North American breeding stock, the inclusion of non-native subspecies was justified to optimize the genetic diversity found within the species as a whole.
During the 1970s, peregrine falcons in Finland experienced a population bottleneck as a result of large declines associated with bio-accumulation of organochloride pesticides. However, the genetic diversity of peregrines in Finland is similar to other populations, indicating that high dispersal rates have maintained the genetic diversity of this species.
Since peregrine falcon eggs and chicks are still often targeted by illegal poachers, it is common practice not to publicise unprotected nest locations.
Current status
Populations of the peregrine falcon have bounced back in most parts of the world. In the United Kingdom, there has been a recovery of populations since the crash of the 1960s. This has been greatly assisted by conservation and protection work led by the Royal Society for the Protection of Birds. The RSPB estimated that there were 1,402 breeding pairs in the UK in 2011. In Canada, where peregrines were identified as endangered in 1978 (in the Yukon territory of northern Canada that year, only a single breeding pair was identified), the Committee on the Status of Endangered Wildlife in Canada declared the species no longer at risk in December 2017.
Peregrines now breed in many mountainous and coastal areas, especially in the west and north, and nest in some urban areas, capitalising on the urban feral pigeon populations for food. Additionally, falcons benefit from artificial illumination, which allows the raptors to extend their hunting periods into the dusk when natural illumination would otherwise be too low for them to pursue prey. In England, this has allowed them to prey on nocturnal migrants such as redwings, fieldfares, starlings, and woodcocks.
In many parts of the world peregrine falcons have adapted to urban habitats, nesting on cathedrals, skyscraper window ledges, tower blocks, and the towers of suspension bridges. Many of these nesting birds are encouraged, sometimes gathering media attention and often monitored by cameras.
In England, peregrine falcons have become increasingly urban in distribution, particularly in southern areas where inland cliffs suitable as nesting sites are scarce. The first recorded urban breeding pair was observed nesting on the Swansea Guildhall in the 1980s. In Southampton, a nest prevented restoration of mobile telephony services for several months in 2013, after Vodafone engineers despatched to repair a faulty transmitter mast discovered a nest in the mast, and were prevented by the Wildlife and Countryside Act – on pain of a possible prison sentence – from proceeding with repairs until the chicks fledged.
In Oregon, Portland houses ten percent of the state's peregrine nests, despite only covering around 0.1 percent of the state's land area.
Cultural significance
Due to its striking hunting technique, the peregrine has often been associated with aggression and martial prowess. The Ancient Egyptian solar deity Ra was often represented as a man with the head of a peregrine falcon adorned with the solar disk, although most Egyptologists agree that this depiction is more likely a lanner falcon. Native Americans of the Mississippian culture (c. 800–1500) used the peregrine, along with several other birds of prey, in imagery as a symbol of "aerial (celestial) power" and buried men of high status in costumes associated with the ferocity of raptorial birds. In the late Middle Ages, the Western European nobility that used peregrines for hunting considered the bird to be associated with princes in formal hierarchies of birds of prey, just below the gyrfalcon, which was associated with kings. It was considered "a royal bird, more armed by its courage than its claws". Falconry terminology also drew on an Old French term meaning "of noble birth; aristocratic", applied particularly to the peregrine.
Since 1927, the peregrine falcon has been the official mascot of Bowling Green State University in Bowling Green, Ohio. The 2007 U.S. Idaho state quarter features a peregrine falcon. The peregrine falcon has been designated the official city bird of Chicago.
The Peregrine, by J. A. Baker, is widely regarded as one of the best nature books in English written in the twentieth century. Admirers of the book include Robert Macfarlane, Mark Cocker, who regards the book as "one of the most outstanding books on nature in the twentieth century" and Werner Herzog, who called it "the one book I would ask you to read if you want to make films", and said elsewhere "it has prose of the calibre that we have not seen since Joseph Conrad". In the book, Baker recounts, in diary form, his detailed observations of peregrines (and their interaction with other birds) near his home in Chelmsford, Essex, over a single winter from October to April.
An episode of the hour-long TV series Starman in 1986 titled "Peregrine" was about an injured peregrine falcon and the endangered species program. It was filmed with the assistance of the University of California's peregrine falcon project in Santa Cruz.
| Biology and health sciences | Accipitriformes and Falconiformes | null |
157700 | https://en.wikipedia.org/wiki/Moment%20of%20inertia | Moment of inertia | The moment of inertia, otherwise known as the mass moment of inertia, angular/rotational mass, second moment of mass, or most accurately, rotational inertia, of a rigid body is defined relative to a rotational axis. It is the ratio between the torque applied and the resulting angular acceleration about that axis. It plays the same role in rotational motion as mass does in linear motion. A body's moment of inertia about a particular axis depends both on the mass and its distribution relative to the axis, increasing with mass and with distance from the axis.
It is an extensive (additive) property: for a point mass the moment of inertia is simply the mass times the square of the perpendicular distance to the axis of rotation. The moment of inertia of a rigid composite system is the sum of the moments of inertia of its component subsystems (all taken about the same axis). Its simplest definition is the second moment of mass with respect to distance from an axis.
For bodies constrained to rotate in a plane, only their moment of inertia about an axis perpendicular to the plane, a scalar value, matters. For bodies free to rotate in three dimensions, their moments can be described by a symmetric 3-by-3 matrix, with a set of mutually perpendicular principal axes for which this matrix is diagonal and torques around the axes act independently of each other.
In mechanical engineering, simply "inertia" is often used to refer to "inertial mass" or "moment of inertia".
Introduction
When a body is free to rotate around an axis, torque must be applied to change its angular momentum. The amount of torque needed to cause any given angular acceleration (the rate of change in angular velocity) is proportional to the moment of inertia of the body. Moments of inertia may be expressed in units of kilogram metre squared (kg·m²) in SI units and pound-foot-second squared (lbf·ft·s²) in imperial or US units.
The moment of inertia plays the role in rotational kinetics that mass (inertia) plays in linear kinetics—both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation, and will vary depending on the chosen axis. For a point-like mass, the moment of inertia about some axis is given by I = mr², where r is the distance of the point from the axis, and m is the mass. For an extended rigid body, the moment of inertia is just the sum of all the small pieces of mass multiplied by the square of their distances from the axis of rotation. For an extended body of a regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape and total mass of the object.
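As an illustrative sketch (not from the original article), the summation I = Σ m r² can be evaluated directly for a small collection of point masses; the masses and positions below are made-up example values.

# Hypothetical example: moment of inertia of three point masses about the z-axis.
# I = sum(m_i * r_i^2), where r_i is the perpendicular distance to the axis.
masses = [2.0, 1.5, 0.5]                          # kg (example values)
positions = [(0.3, 0.0), (0.0, 0.4), (0.2, 0.2)]  # (x, y) coordinates in metres
I = sum(m * (x**2 + y**2) for m, (x, y) in zip(masses, positions))
print(I)  # total moment of inertia in kg·m^2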
In 1673, Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term moment of inertia ("momentum inertiae" in Latin) was introduced by Leonhard Euler in his book Theoria motus corporum solidorum seu rigidorum in 1765, and it is incorporated into Euler's second law.
The natural frequency of oscillation of a compound pendulum is obtained from the ratio of the torque imposed by gravity on the mass of the pendulum to the resistance to acceleration defined by the moment of inertia. Comparison of this natural frequency to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for moment of inertia of an extended body.
The moment of inertia also appears in momentum, kinetic energy, and in Newton's laws of motion for a rigid body as a physical parameter that combines its shape and mass. There is an interesting difference in the way moment of inertia appears in planar and spatial movement. Planar movement has a single scalar that defines the moment of inertia, while for spatial movement the same calculations yield a 3 × 3 matrix of moments of inertia, called the inertia matrix or inertia tensor.
The moment of inertia of a rotating flywheel is used in a machine to resist variations in applied torque to smooth its rotational output. The moment of inertia of an airplane about its longitudinal, horizontal and vertical axes determines how steering forces on the control surfaces of its wings, elevators and rudder(s) affect the plane's motions in roll, pitch and yaw.
Definition
The moment of inertia is defined as the product of mass of section and the square of the distance between the reference axis and the centroid of the section.
The moment of inertia is also defined as the ratio of the net angular momentum L of a system to its angular velocity ω around a principal axis, that is I = L/ω.
If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase. This occurs when spinning figure skaters pull in their outstretched arms or divers curl their bodies into a tuck position during a dive, to spin faster.
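A minimal numerical sketch of this effect, with invented values: if the angular momentum L = Iω is held constant, halving the moment of inertia doubles the angular velocity.

# Hypothetical skater: arms out vs. arms pulled in (example numbers only).
I_out, omega_out = 4.0, 2.0        # kg·m^2, rad/s with arms outstretched
L = I_out * omega_out              # angular momentum, assumed conserved
I_in = 2.0                         # smaller moment of inertia with arms pulled in
omega_in = L / I_in                # 4.0 rad/s: the spin rate doubles
print(omega_in)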
If the shape of the body does not change, then its moment of inertia appears in Newton's law of motion as the ratio of an applied torque τ on a body to the angular acceleration α around a principal axis, that is τ = Iα.
For a simple pendulum, this definition yields a formula for the moment of inertia I in terms of the mass m of the pendulum and its distance r from the pivot point as I = mr².
Thus, the moment of inertia of the pendulum depends on both the mass of a body and its geometry, or shape, as defined by the distance to the axis of rotation.
This simple formula generalizes to define moment of inertia for an arbitrarily shaped body as the sum of all the elemental point masses each multiplied by the square of its perpendicular distance to an axis . An arbitrary object's moment of inertia thus depends on the spatial distribution of its mass.
In general, given an object of mass m, an effective radius k can be defined, dependent on a particular axis of rotation, with such a value that its moment of inertia around the axis is I = mk²,
where k is known as the radius of gyration around the axis.
Examples
Simple pendulum
Mathematically, the moment of inertia of a simple pendulum is the ratio of the torque due to gravity about the pivot of a pendulum to its angular acceleration about that pivot point. For a simple pendulum, this is found to be the product of the mass of the particle with the square of its distance to the pivot, that is I = mr².
This can be shown as follows:
The force of gravity on the mass of a simple pendulum generates a torque around the axis perpendicular to the plane of the pendulum movement. Here is the distance vector from the torque axis to the pendulum center of mass, and is the net force on the mass. Associated with this torque is an angular acceleration, , of the string and mass around this axis. Since the mass is constrained to a circle the tangential acceleration of the mass is . Since the torque equation becomes:
where is a unit vector perpendicular to the plane of the pendulum. (The second to last step uses the vector triple product expansion with the perpendicularity of and .) The quantity is the moment of inertia of this single mass around the pivot point.
The quantity also appears in the angular momentum of a simple pendulum, which is calculated from the velocity of the pendulum mass around the pivot, where is the angular velocity of the mass about the pivot point. This angular momentum is given by
using a similar derivation to the previous equation.
Similarly, the kinetic energy of the pendulum mass is defined by the velocity of the pendulum around the pivot to yield
This shows that the quantity is how mass combines with the shape of a body to define rotational inertia. The moment of inertia of an arbitrarily shaped body is the sum of the values for all of the elements of mass in the body.
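The relations this derivation relies on can be written compactly, under the standard simple-pendulum setup (a point mass m at distance r from the pivot); the following is a reconstruction in LaTeX, not the article's original typesetting:

\[
\tau = \mathbf{r}\times\mathbf{F} = \left(mr^{2}\right)\alpha\,\hat{\mathbf{k}},\qquad
\mathbf{L} = \mathbf{r}\times m\mathbf{v} = \left(mr^{2}\right)\omega\,\hat{\mathbf{k}},\qquad
E_{K} = \tfrac{1}{2}m\lvert\mathbf{v}\rvert^{2} = \tfrac{1}{2}\left(mr^{2}\right)\omega^{2},
\]

with the single quantity I = mr² appearing in the torque, angular momentum, and kinetic energy expressions.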
Compound pendulums
A compound pendulum is a body formed from an assembly of particles of continuous shape that rotates rigidly around a pivot. Its moment of inertia is the sum of the moments of inertia of each of the particles that it is composed of. The natural frequency ω_n of a compound pendulum depends on its moment of inertia about the pivot, I_P, through ω_n = √(mgr/I_P),
where m is the mass of the object, g is the local acceleration of gravity, and r is the distance from the pivot point to the center of mass of the object. Measuring this frequency of oscillation over small angular displacements provides an effective way of measuring moment of inertia of a body.
Thus, to determine the moment of inertia of the body, simply suspend it from a convenient pivot point so that it swings freely in a plane perpendicular to the direction of the desired moment of inertia, then measure its natural frequency or period of oscillation T, to obtain I_P = mgr/ω_n² = mgrT²/(4π²),
where T is the period (duration) of oscillation (usually averaged over multiple periods).
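A short sketch of this measurement, assuming the standard compound-pendulum relation ω_n = √(mgr/I_P) so that I_P = mgrT²/(4π²); the numerical values are invented.

import math

# Hypothetical measurement: infer the moment of inertia about the pivot from the period.
m = 1.2          # kg, measured mass (example value)
g = 9.81         # m/s^2, local acceleration of gravity
r = 0.25         # m, pivot-to-centre-of-mass distance (example value)
T = 1.1          # s, measured period of small oscillations (example value)
omega_n = 2 * math.pi / T
I_pivot = m * g * r / omega_n**2   # equals m*g*r*T**2 / (4*pi**2)
print(I_pivot)   # kg·m^2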
Center of oscillation
A simple pendulum that has the same natural frequency as a compound pendulum defines the length L from the pivot to a point called the center of oscillation of the compound pendulum. This point also corresponds to the center of percussion. The length L is determined from the formula L = I_P/(mr),
or equivalently L = g/ω_n².
The seconds pendulum, which provides the "tick" and "tock" of a grandfather clock, takes one second to swing from side-to-side. This is a period of two seconds, or a natural frequency of 0.5 Hz (ω_n = π rad/s) for the pendulum. In this case, the distance to the center of oscillation, L, can be computed from L = g/ω_n² to be about 0.99 m.
Notice that the distance to the center of oscillation of the seconds pendulum must be adjusted to accommodate different values for the local acceleration of gravity. Kater's pendulum is a compound pendulum that uses this property to measure the local acceleration of gravity, and is called a gravimeter.
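A quick numerical check of the seconds-pendulum figure quoted above, assuming a nominal g of 9.81 m/s²:

import math

T = 2.0                       # s, period of the seconds pendulum
omega_n = 2 * math.pi / T     # = pi rad/s
g = 9.81                      # m/s^2, nominal local gravity (assumption)
L = g / omega_n**2            # distance to the center of oscillation
print(L)                      # about 0.994 m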
Measuring moment of inertia
The moment of inertia of a complex system such as a vehicle or airplane around its vertical axis can be measured by suspending the system from three points to form a trifilar pendulum. A trifilar pendulum is a platform supported by three wires designed to oscillate in torsion around its vertical centroidal axis. The period of oscillation of the trifilar pendulum yields the moment of inertia of the system.
Moment of inertia of area
Moment of inertia of area is also known as the second moment of area and its physical meaning is completely different from the mass moment of inertia.
These calculations are commonly used in civil engineering for structural design of beams and columns. Cross-sectional areas are calculated for the vertical moment about the x-axis and the horizontal moment about the y-axis.
Height (h) and breadth (b) are the linear measures, except for circles, where the radius r (half the breadth) is used.
The sectional area moments are calculated as follows (see the sketch after this list):
Square: b⁴/12
Rectangular: bh³/12 about the x-axis and hb³/12 about the y-axis
Triangular: bh³/36
Circular: πr⁴/4
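These are the standard second-moment-of-area formulas about centroidal axes; the helper functions below simply tabulate them as a sketch, using the usual conventions (breadth b, height h, radius r).

import math

# Standard second moments of area about a centroidal axis (sketch, usual conventions).
def I_square(b):      return b**4 / 12          # square of side b
def I_rect_x(b, h):   return b * h**3 / 12      # rectangle, bending about the x-axis
def I_rect_y(b, h):   return h * b**3 / 12      # rectangle, bending about the y-axis
def I_triangle(b, h): return b * h**3 / 36      # triangle, about the centroidal x-axis
def I_circle(r):      return math.pi * r**4 / 4 # solid circular section

print(I_rect_x(0.05, 0.10), I_circle(0.03))     # example dimensions in metres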
Motion in a fixed plane
Point mass
The moment of inertia about an axis of a body is calculated by summing mr² for every particle in the body, where r is the perpendicular distance to the specified axis. To see how moment of inertia arises in the study of the movement of an extended body, it is convenient to consider a rigid assembly of point masses. (This equation can be used for axes that are not principal axes provided that it is understood that this does not fully describe the moment of inertia.)
Consider the kinetic energy of an assembly of N masses m_i that lie at distances r_i from the pivot point P, which is the nearest point on the axis of rotation. It is the sum of the kinetic energy of the individual masses, E_K = Σ ½ m_i (ω r_i)² = ½ (Σ m_i r_i²) ω².
This shows that the moment of inertia of the body is the sum of each of the mr² terms, that is I_P = Σ m_i r_i².
Thus, moment of inertia is a physical property that combines the mass and distribution of the particles around the rotation axis. Notice that rotation about different axes of the same body yield different moments of inertia.
The moment of inertia of a continuous body rotating about a specified axis is calculated in the same way, except with infinitely many point particles. Thus the limits of summation are removed, and the sum is written as follows:
Another expression replaces the summation with an integral,
Here, the function gives the mass density at each point , is a vector perpendicular to the axis of rotation and extending from a point on the rotation axis to a point in the solid, and the integration is evaluated over the volume of the body . The moment of inertia of a flat surface is similar with the mass density being replaced by its areal mass density with the integral evaluated over its area.
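Written out explicitly, the integral described above takes the familiar form (a LaTeX reconstruction consistent with the verbal definition, not the article's original markup):

\[
I_P = \iiint_{Q} \rho(\mathbf{r})\,\lVert \mathbf{r} \rVert^{2}\,\mathrm{d}V,
\]

where ρ(r) is the mass density and r is the perpendicular vector from the rotation axis to the volume element, as stated in the text.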
Note on second moment of area: The moment of inertia of a body moving in a plane and the second moment of area of a beam's cross-section are often confused. The moment of inertia of a body with the shape of the cross-section is the second moment of this area about the -axis perpendicular to the cross-section, weighted by its density. This is also called the polar moment of the area, and is the sum of the second moments about the - and -axes. The stresses in a beam are calculated using the second moment of the cross-sectional area around either the -axis or -axis depending on the load.
Examples
The moment of inertia of a compound pendulum constructed from a thin disc mounted at the end of a thin rod that oscillates around a pivot at the other end of the rod, begins with the calculation of the moment of inertia of the thin rod and thin disc about their respective centers of mass.
The moment of inertia of a thin rod with constant cross-section s and density ρ and with length ℓ about a perpendicular axis through its center of mass is determined by integration. Align the x-axis with the rod and locate the origin at its center of mass at the center of the rod; the integration then gives I = mℓ²/12, where m = ρsℓ is the mass of the rod.
The moment of inertia of a thin disc of constant thickness t, radius R, and density ρ about an axis through its center and perpendicular to its face (parallel to its axis of rotational symmetry) is determined by integration. Align the z-axis with the axis of the disc; the integration then gives I = mR²/2, where m = πR²tρ is its mass.
The moment of inertia of the compound pendulum is now obtained by adding the moment of inertia of the rod and the disc around the pivot point as, where is the length of the pendulum. Notice that the parallel axis theorem is used to shift the moment of inertia from the center of mass to the pivot point of the pendulum.
A list of moments of inertia formulas for standard body shapes provides a way to obtain the moment of inertia of a complex body as an assembly of simpler shaped bodies. The parallel axis theorem is used to shift the reference point of the individual bodies to the reference point of the assembly.
As one more example, consider the moment of inertia of a solid sphere of constant density about an axis through its center of mass. This is determined by summing the moments of inertia of the thin discs that form the sphere, whose centers lie along the axis chosen for consideration. If the surface of the sphere is defined by the equation x² + y² + z² = R²,
then the square of the radius of the disc at the cross-section z along the z-axis is R² − z².
Therefore, the moment of inertia of the sphere is the sum of the moments of inertia of the discs along the z-axis,
which evaluates to I = (2/5)mR², where m is the mass of the sphere.
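The closed-form results quoted in this subsection (rod mℓ²/12, disc mR²/2, sphere 2mR²/5 about central axes, all standard textbook values) can be spot-checked numerically; the Monte Carlo sketch below uses made-up dimensions and checks the sphere case.

import random

random.seed(0)
N = 100_000
R, M = 0.1, 1.0   # sphere radius (m) and mass (kg), example values

# Monte Carlo estimate of a uniform solid sphere's moment of inertia about the z-axis.
pts = []
while len(pts) < N:
    x, y, z = (random.uniform(-R, R) for _ in range(3))
    if x*x + y*y + z*z <= R*R:            # keep only points inside the sphere
        pts.append((x, y))
I_est = M * sum(x*x + y*y for x, y in pts) / N
print(I_est, 0.4 * M * R * R)             # estimate vs. exact (2/5) M R^2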
Rigid body
If a mechanical system is constrained to move parallel to a fixed plane, then the rotation of a body in the system occurs around an axis parallel to this plane. In this case, the moment of inertia of the mass in this system is a scalar known as the polar moment of inertia. The definition of the polar moment of inertia can be obtained by considering momentum, kinetic energy and Newton's laws for the planar movement of a rigid system of particles.
If a system of particles, , are assembled into a rigid body, then the momentum of the system can be written in terms of positions relative to a reference point , and absolute velocities :
where is the angular velocity of the system and is the velocity of .
For planar movement the angular velocity vector is directed along the unit vector which is perpendicular to the plane of movement. Introduce the unit vectors from the reference point to a point , and the unit vector , so
This defines the relative position vector and the velocity vector for the rigid system of the particles moving in a plane.
Note on the cross product: When a body moves parallel to a ground plane, the trajectories of all the points in the body lie in planes parallel to this ground plane. This means that any rotation that the body undergoes must be around an axis perpendicular to this plane. Planar movement is often presented as projected onto this ground plane so that the axis of rotation appears as a point. In this case, the angular velocity and angular acceleration of the body are scalars and the fact that they are vectors along the rotation axis is ignored. This is usually preferred for introductions to the topic. But in the case of moment of inertia, the combination of mass and geometry benefits from the geometric properties of the cross product. For this reason, in this section on planar movement the angular velocity and accelerations of the body are vectors perpendicular to the ground plane, and the cross product operations are the same as used for the study of spatial rigid body movement.
Angular momentum
The angular momentum vector for the planar movement of a rigid system of particles is given by
Use the center of mass as the reference point so
and define the moment of inertia relative to the center of mass as
then the equation for angular momentum simplifies to
The moment of inertia about an axis perpendicular to the movement of the rigid system and through the center of mass is known as the polar moment of inertia. Specifically, it is the second moment of mass with respect to the orthogonal distance from an axis (or pole).
For a given amount of angular momentum, a decrease in the moment of inertia results in an increase in the angular velocity. Figure skaters can change their moment of inertia by pulling in their arms. Thus, the angular velocity achieved by a skater with outstretched arms results in a greater angular velocity when the arms are pulled in, because of the reduced moment of inertia. A figure skater is not, however, a rigid body.
Kinetic energy
The kinetic energy of a rigid system of particles moving in the plane is given by
Let the reference point be the center of mass of the system so the second term becomes zero, and introduce the moment of inertia so the kinetic energy is given by
The moment of inertia is the polar moment of inertia of the body.
Newton's laws
Newton's laws for a rigid system of particles, , can be written in terms of a resultant force and torque at a reference point , to yield
where denotes the trajectory of each particle.
The kinematics of a rigid body yields the formula for the acceleration of the particle in terms of the position and acceleration of the reference particle as well as the angular velocity vector and angular acceleration vector of the rigid system of particles as,
For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along the axis perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the radial unit vectors from the reference point to each point and the corresponding tangential unit vectors, so
This yields the resultant torque on the system as
where , and is the unit vector perpendicular to the plane for all of the particles .
Use the center of mass as the reference point and define the moment of inertia relative to the center of mass , then the equation for the resultant torque simplifies to
Motion in space of a rigid body, and the inertia matrix
The scalar moments of inertia appear as elements in a matrix when a system of particles is assembled into a rigid body that moves in three-dimensional space. This inertia matrix appears in the calculation of the angular momentum, kinetic energy and resultant torque of the rigid system of particles.
Let the system of particles, be located at the coordinates with velocities relative to a fixed reference frame. For a (possibly moving) reference point , the relative positions are
and the (absolute) velocities are
where is the angular velocity of the system, and is the velocity of .
Angular momentum
Note that the cross product can be equivalently written as matrix multiplication by combining the first operand and the operator into a skew-symmetric matrix, , constructed from the components of :
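As a sketch (using numpy, with arbitrary example vectors), the skew-symmetric matrix [ω]× can be built from the components of ω and checked against the cross product it represents:

import numpy as np

def skew(v):
    # Skew-symmetric matrix [v]x such that skew(v) @ r == np.cross(v, r).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

omega = np.array([0.1, -0.3, 2.0])   # example angular velocity (rad/s)
r = np.array([0.5, 0.2, -0.1])       # example position vector (m)
print(np.allclose(skew(omega) @ r, np.cross(omega, r)))   # True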
The inertia matrix is constructed by considering the angular momentum, with the reference point of the body chosen to be the center of mass :
where the terms containing () sum to zero by the definition of center of mass.
Then, the skew-symmetric matrix obtained from the relative position vector , can be used to define,
where defined by
is the symmetric inertia matrix of the rigid system of particles measured relative to the center of mass .
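Following the construction just described, the inertia matrix about the centre of mass can be assembled as I_C = −Σ m_i [Δr_i][Δr_i], where [Δr_i] is the skew-symmetric matrix of particle i's position relative to the centre of mass. A sketch with invented particle data:

import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

masses = np.array([1.0, 2.0, 0.5])                        # kg (example values)
positions = np.array([[0.1, 0.0, 0.2],
                      [-0.2, 0.1, 0.0],
                      [0.0, -0.3, 0.1]])                  # m (example values)
com = (masses[:, None] * positions).sum(axis=0) / masses.sum()
I_C = sum(-m * skew(p - com) @ skew(p - com)
          for m, p in zip(masses, positions))
print(I_C)   # symmetric 3x3 inertia matrix about the centre of mass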
Kinetic energy
The kinetic energy of a rigid system of particles can be formulated in terms of the center of mass and a matrix of mass moments of inertia of the system. Let the system of particles be located at the coordinates with velocities , then the kinetic energy is
where is the position vector of a particle relative to the center of mass.
This equation expands to yield three terms
Since the center of mass is defined by
, the second term in this equation is zero. Introduce the skew-symmetric matrix so the kinetic energy becomes
Thus, the kinetic energy of the rigid system of particles is given by
where is the inertia matrix relative to the center of mass and is the total mass.
Resultant torque
The inertia matrix appears in the application of Newton's second law to a rigid assembly of particles. The resultant torque on this system is,
where is the acceleration of the particle . The kinematics of a rigid body yields the formula for the acceleration of the particle in terms of the position and acceleration of the reference point, as well as the angular velocity vector and angular acceleration vector of the rigid system as,
Use the center of mass as the reference point, and introduce the skew-symmetric matrix to represent the cross product , to obtain
The calculation uses the identity
obtained from the Jacobi identity for the triple cross product as shown in the proof below:
Thus, the resultant torque on the rigid system of particles is given by
where is the inertia matrix relative to the center of mass.
Parallel axis theorem
The inertia matrix of a body depends on the choice of the reference point. There is a useful relationship between the inertia matrix relative to the center of mass and the inertia matrix relative to another point . This relationship is called the parallel axis theorem.
Consider the inertia matrix obtained for a rigid system of particles measured relative to a reference point , given by
Let be the center of mass of the rigid system, then
where is the vector from the center of mass to the reference point . Use this equation to compute the inertia matrix,
Distribute over the cross product to obtain
The first term is the inertia matrix relative to the center of mass. The second and third terms are zero by definition of the center of mass . And the last term is the total mass of the system multiplied by the square of the skew-symmetric matrix constructed from .
The result is the parallel axis theorem,
where is the vector from the center of mass to the reference point .
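In matrix form the theorem reads I_R = I_C − M [d]², with d the vector between the centre of mass and the reference point R (its sign does not matter, since [d]² = [−d]²). A quick numerical check, building both sides from the definitions with arbitrary data:

import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def inertia(masses, positions, ref):
    # Inertia matrix about the point `ref`, built as -sum(m [dr][dr]).
    return sum(-m * skew(p - ref) @ skew(p - ref)
               for m, p in zip(masses, positions))

masses = np.array([1.0, 2.0, 0.5])                               # example data
positions = np.array([[0.1, 0.0, 0.2], [-0.2, 0.1, 0.0], [0.0, -0.3, 0.1]])
com = (masses[:, None] * positions).sum(axis=0) / masses.sum()
R = np.array([0.3, -0.1, 0.0])                                   # arbitrary reference point
d = R - com

lhs = inertia(masses, positions, R)
rhs = inertia(masses, positions, com) - masses.sum() * skew(d) @ skew(d)
print(np.allclose(lhs, rhs))   # True: parallel axis theorem holds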
Note on the minus sign: By using the skew-symmetric matrix of position vectors relative to the reference point, the inertia matrix of each particle has the form −m[r]², which is similar to the mr² that appears in planar movement. However, to make this work out correctly a minus sign is needed. This minus sign can be absorbed into the term m[r]², if desired, by using the skew-symmetry property of [r].
Scalar moment of inertia in a plane
The scalar moment of inertia, , of a body about a specified axis whose direction is specified by the unit vector and passes through the body at a point is as follows:
where is the moment of inertia matrix of the system relative to the reference point , and is the skew symmetric matrix obtained from the vector .
This is derived as follows. Let a rigid assembly of particles, , have coordinates . Choose as a reference point and compute the moment of inertia around a line L defined by the unit vector through the reference point , . The perpendicular vector from this line to the particle is obtained from by removing the component that projects onto .
where is the identity matrix, so as to avoid confusion with the inertia matrix, and is the outer product matrix formed from the unit vector along the line .
To relate this scalar moment of inertia to the inertia matrix of the body, introduce the skew-symmetric matrix such that , then we have the identity
noting that is a unit vector.
The magnitude squared of the perpendicular vector is
The simplification of this equation uses the triple scalar product identity
where the dot and the cross products have been interchanged. Exchanging products, and simplifying by noting that and are orthogonal:
Thus, the moment of inertia around the line through in the direction is obtained from the calculation
where is the moment of inertia matrix of the system relative to the reference point .
This shows that the inertia matrix can be used to calculate the moment of inertia of a body around any specified rotation axis in the body.
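Concretely, the scalar moment about a line through R with unit direction n can be obtained as I_L = nᵀ I_R n; the sketch below uses arbitrary values and checks the result against the direct sum of m times the squared perpendicular distance to the line.

import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

masses = np.array([1.0, 2.0, 0.5])                               # example data
positions = np.array([[0.1, 0.0, 0.2], [-0.2, 0.1, 0.0], [0.0, -0.3, 0.1]])
R = np.array([0.0, 0.0, 0.0])                                    # reference point on the line
n = np.array([0.0, 0.0, 1.0])                                    # unit vector along the line

I_R = sum(-m * skew(p - R) @ skew(p - R) for m, p in zip(masses, positions))
I_L = n @ I_R @ n                                                # scalar moment about the line

# Direct check: sum of m * (perpendicular distance to the line)^2
perp = (positions - R) - np.outer((positions - R) @ n, n)
I_direct = (masses * (perp**2).sum(axis=1)).sum()
print(np.isclose(I_L, I_direct))   # True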
Inertia tensor
For the same object, different axes of rotation will have different moments of inertia about those axes. In general, the moments of inertia are not equal unless the object is symmetric about all axes. The moment of inertia tensor is a convenient way to summarize all moments of inertia of an object with one quantity. It may be calculated with respect to any point in space, although for practical purposes the center of mass is most commonly used.
Definition
For a rigid object of point masses , the moment of inertia tensor is given by
Its components are defined as
where
the indices i and j each take the values 1, 2, or 3, referring to the x-, y-, and z-axes, respectively,
is the vector to the point mass from the point about which the tensor is calculated and
is the Kronecker delta.
Note that, by the definition, is a symmetric tensor.
The diagonal elements are more succinctly written as
while the off-diagonal elements, also called the products of inertia, are
Here I_xx denotes the moment of inertia around the x-axis when the objects are rotated around the x-axis, I_xy denotes the moment of inertia around the y-axis when the objects are rotated around the x-axis, and so on.
These quantities can be generalized to an object with distributed mass, described by a mass density function, in a similar fashion to the scalar moment of inertia. One then has
where is their outer product, E3 is the 3×3 identity matrix, and V is a region of space completely containing the object.
Alternatively it can also be written in terms of the angular momentum operator :
The inertia tensor can be used in the same way as the inertia matrix to compute the scalar moment of inertia about an arbitrary axis in the direction ,
where the dot product is taken with the corresponding elements in the component tensors. A product of inertia term such as is obtained by the computation
and can be interpreted as the moment of inertia around the -axis when the object rotates around the -axis.
The components of tensors of degree two can be assembled into a matrix. For the inertia tensor this matrix is given by,
It is common in rigid body mechanics to use notation that explicitly identifies the , , and -axes, such as and , for the components of the inertia tensor.
Alternate inertia convention
There are some CAD and CAE applications such as SolidWorks, Unigraphics NX/Siemens NX and MSC Adams that use an alternate convention for the products of inertia. According to this convention, the minus sign is removed from the product of inertia formulas and instead inserted in the inertia matrix:
Determine inertia convention (Principal axes method)
If one has the inertia data without knowing which inertia convention that has been used, it can be determined if one also has the principal axes. With the principal axes method, one makes inertia matrices from the following two assumptions:
The standard inertia convention has been used.
The alternate inertia convention has been used.
Next, one calculates the eigenvectors for the two matrices. The matrix whose eigenvectors are parallel to the principal axes corresponds to the inertia convention that has been used.
Derivation of the tensor components
The distance of a particle at from the axis of rotation passing through the origin in the direction is , where is unit vector. The moment of inertia on the axis is
Rewrite the equation using matrix transpose:
where E3 is the 3×3 identity matrix.
This leads to a tensor formula for the moment of inertia
For multiple particles, we need only recall that the moment of inertia is additive in order to see that this formula is correct.
Inertia tensor of translation
Let be the inertia tensor of a body calculated at its center of mass, and be the displacement vector of the body. The inertia tensor of the translated body with respect to its original center of mass is given by:
where is the body's mass, E3 is the 3 × 3 identity matrix, and is the outer product.
Inertia tensor of rotation
Let be the matrix that represents a body's rotation. The inertia tensor of the rotated body is given by:
Inertia matrix in different reference frames
The use of the inertia matrix in Newton's second law assumes its components are computed relative to axes parallel to the inertial frame and not relative to a body-fixed reference frame. This means that as the body moves the components of the inertia matrix change with time. In contrast, the components of the inertia matrix measured in a body-fixed frame are constant.
Body frame
Let the body frame inertia matrix relative to the center of mass be denoted , and define the orientation of the body frame relative to the inertial frame by the rotation matrix , such that,
where vectors in the body fixed coordinate frame have coordinates in the inertial frame. Then, the inertia matrix of the body measured in the inertial frame is given by
Notice that changes as the body moves, while remains constant.
Principal axes
Measured in the body frame, the inertia matrix is a constant real symmetric matrix. A real symmetric matrix has the eigendecomposition into the product of a rotation matrix and a diagonal matrix , given by
where
The columns of the rotation matrix define the directions of the principal axes of the body, and the constants , , and are called the principal moments of inertia. This result was first shown by J. J. Sylvester (1852), and is a form of Sylvester's law of inertia. The principal axis with the highest moment of inertia is sometimes called the figure axis or axis of figure.
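In practice the principal moments and axes can be extracted with a symmetric eigendecomposition; a sketch using numpy with an arbitrary, made-up body-frame inertia matrix:

import numpy as np

# Example body-frame inertia matrix (invented values, symmetric, in kg·m^2).
I_body = np.array([[0.40, -0.05, 0.00],
                   [-0.05, 0.55, 0.02],
                   [0.00, 0.02, 0.30]])

principal_moments, Q = np.linalg.eigh(I_body)  # eigenvalues in ascending order; columns of Q are the principal axes
print(principal_moments)
print(np.allclose(Q @ np.diag(principal_moments) @ Q.T, I_body))  # True: I = Q diag(I1,I2,I3) Q^T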
A toy top is an example of a rotating rigid body, and the word top is used in the names of types of rigid bodies. When all principal moments of inertia are distinct, the principal axes through center of mass are uniquely specified and the rigid body is called an asymmetric top. If two principal moments are the same, the rigid body is called a symmetric top and there is no unique choice for the two corresponding principal axes. If all three principal moments are the same, the rigid body is called a spherical top (although it need not be spherical) and any axis can be considered a principal axis, meaning that the moment of inertia is the same about any axis.
The principal axes are often aligned with the object's symmetry axes. If a rigid body has an axis of symmetry of order m, meaning it is symmetrical under rotations of 360°/m about the given axis, that axis is a principal axis. When m > 2, the rigid body is a symmetric top. If a rigid body has at least two symmetry axes that are not parallel or perpendicular to each other, it is a spherical top, for example, a cube or any other Platonic solid.
The motion of vehicles is often described in terms of yaw, pitch, and roll which usually correspond approximately to rotations about the three principal axes. If the vehicle has bilateral symmetry then one of the principal axes will correspond exactly to the transverse (pitch) axis.
A practical example of this mathematical phenomenon is the routine automotive task of balancing a tire, which basically means adjusting the distribution of mass of a car wheel such that its principal axis of inertia is aligned with the axle so the wheel does not wobble.
Rotating molecules are also classified as asymmetric, symmetric, or spherical tops, and the structure of their rotational spectra is different for each type.
Ellipsoid
The moment of inertia matrix in body-frame coordinates is a quadratic form that defines a surface in the body called Poinsot's ellipsoid. Let be the inertia matrix relative to the center of mass aligned with the principal axes, then the surface
or
defines an ellipsoid in the body frame. Write this equation in the form,
to see that the semi-principal diameters of this ellipsoid are given by
Let a point on this ellipsoid be defined in terms of its magnitude and direction, , where is a unit vector. Then the relationship presented above, between the inertia matrix and the scalar moment of inertia around an axis in the direction , yields
Thus, the magnitude of a point in the direction on the inertia ellipsoid is
| Physical sciences | Classical mechanics | null |
157736 | https://en.wikipedia.org/wiki/Hybrid%20vehicle | Hybrid vehicle | A hybrid vehicle is one that uses two or more distinct types of power, such as submarines that use diesel when surfaced and batteries when submerged. Other means to store energy include pressurized fluid in hydraulic hybrids.
Hybrid powertrains are designed to switch from one power source to another to maximize both fuel efficiency and energy efficiency. In hybrid electric vehicles, for instance, the electric motor is more efficient at producing torque, or turning power, while the combustion engine is better for maintaining high speed. Improved efficiency, lower emissions, and reduced running costs relative to non-hybrid vehicles are three primary benefits of hybridization.
Vehicle types
Two-wheeled and cycle-type vehicles
Mopeds, electric bicycles, and even electric kick scooters are a simple form of a hybrid, powered by an internal combustion engine or electric motor and the rider's muscles. Early prototype motorcycles in the late 19th century used the same principle.
In a parallel hybrid bicycle human and motor torques are mechanically coupled at the pedal or one of the wheels, e.g. using a hub motor, a roller pressing onto a tire, or a connection to a wheel using a transmission element. Most motorized bicycles and mopeds are of this type.
In a series hybrid bicycle (SHB) (a kind of chainless bicycle) the user pedals a generator, charging a battery or feeding the motor, which delivers all of the torque required. They are commercially available, being simple in theory and manufacturing.
The first published prototype of an SHB was by Augustus Kinzel (US Patent 3,884,317) in 1975. In 1994 Bernie Macdonalds conceived the Electrilite SHB with power electronics allowing regenerative braking and pedaling while stationary. In 1995 Thomas Muller designed and built a "Fahrrad mit elektromagnetischem Antrieb" (a bicycle with electromagnetic drive) for his 1995 diploma thesis. In 1996 Jürg Blatter and Andreas Fuchs of Berne University of Applied Sciences built an SHB and in 1998 modified a Leitra tricycle (European patent EP 1165188). Until 2005 they built several prototype SH tricycles and quadricycles. In 1999 Harald Kutzke described an "active bicycle": the aim is to approach the ideal bicycle, weighing nothing and having no drag, by electronic compensation.
A series hybrid electric–petroleum bicycle (SHEPB) is powered by pedals, batteries, a petrol generator, or plug-in charger—providing flexibility and range enhancements over electric-only bicycles.
A SHEPB prototype made by David Kitson in Australia in 2014 used a lightweight brushless DC electric motor from an aerial drone and small hand-tool sized internal combustion engine, and a 3D printed drive system and lightweight housing, altogether weighing less than 4.5 kg. Active cooling keeps plastic parts from softening. The prototype uses a regular electric bicycle charge port.
Heavy vehicle
Hybrid power trains use diesel–electric or turbo-electric to power railway locomotives, buses, heavy goods vehicles, mobile hydraulic machinery, and ships. A diesel/turbine engine drives an electric generator or hydraulic pump, which powers electric/hydraulic motors—strictly an electric/hydraulic transmission (not a hybrid), unless it can accept power from outside. With large vehicles, conversion losses decrease and the advantages in distributing power through wires or pipes rather than mechanical elements become more prominent, especially when powering multiple drives—e.g. driven wheels or propellers. Until recently most heavy vehicles had little secondary energy storage, e.g. batteries/hydraulic accumulators—excepting non-nuclear submarines, one of the oldest production hybrids, running on diesel while surfaced and batteries when submerged. Both series and parallel setups were used in World War II-era submarines.
Rail transport
Europe
The new Autorail à grande capacité (AGC or high-capacity railcar) built by the Canadian company Bombardier for service in France combines diesel engines with electric motors, and can run from either 1500 V or 25,000 V supplies on different rail systems. It was tested in Rotterdam, the Netherlands with Railfeeding, a Genesee & Wyoming company.
China
The First Hybrid Evaluating locomotive was designed by rail research center Matrai in 1999 and built in 2000. It was an EMD G12 locomotive upgraded with batteries, a 200 kW diesel generator, and four AC motors.
Japan
Japan's first hybrid train with significant energy storage is the KiHa E200, with roof-mounted lithium-ion batteries.
India
Indian Railways launched a one-of-its-kind CNG–diesel hybrid train in January 2015. The train has a 1400 hp engine which uses fumigation technology. The first of these trains is set to run on the 81 km long Rewari–Rohtak route. CNG is a less-polluting alternative to diesel and petrol and is popular as an alternative fuel in India; many transport vehicles such as auto-rickshaws and buses already run on CNG.
North America
In the US, General Electric made a locomotive with sodium–nickel chloride (Na–NiCl2) battery storage. They expect fuel savings of at least 10%.
Variant diesel–electric locomotives include the Green Goat (GG) and Green Kid (GK) switching/yard engines built by Canada's Railpower Technologies, with lead–acid batteries and 1000 to 2000 hp electric motors, and a new clean-burning ≈160 hp diesel generator. No fuel is wasted on idling, which accounts for roughly 60–85% of the operating time of these types of locomotives. It is unclear if regenerative braking is used, but in principle it is easily utilized.
Since these engines typically need extra weight for traction purposes anyway the battery pack's weight is a negligible penalty. The diesel generator and batteries are normally built on an existing "retired" "yard" locomotive's frame. The existing motors and running gear are all rebuilt and reused. Fuel savings of 40–60% and up to 80% pollution reductions are claimed over a "typical" older switching/yard engine. The advantages hybrid cars have for frequent starts and stops and idle periods apply to typical switching yard use. "Green Goat" locomotives have been purchased by Canadian Pacific, BNSF, Kansas City Southern Railway and Union Pacific among others.
Cranes
Railpower Technologies engineers working with TSI Terminal Systems are testing a hybrid diesel–electric power unit with battery storage for use in Rubber Tyred Gantry (RTG) cranes. RTG cranes are typically used for loading and unloading shipping containers onto trains or trucks in ports and container storage yards. The energy used to lift the containers can be partially regained when they are lowered. Diesel fuel and emission reductions of 50–70% are predicted by Railpower engineers. First systems are expected to be operational in 2007.
Road transport, commercial vehicles
Hybrid systems are regularly used in trucks, buses, and other heavy highway vehicles. Small fleet sizes and high installation costs are offset by fuel savings, helped by advances such as higher battery capacity and lower battery cost. Toyota, Ford, GM, and others have introduced hybrid pickups and SUVs, and Kenworth Truck Company introduced the Kenworth T270 Class 6 hybrid, which appears competitive for city use. FedEx and others are investing in hybrid delivery vehicles, particularly for city use, where hybrid technology may pay off first. FedEx is trialling two delivery trucks with Wrightspeed electric motors and diesel generators; the retrofit kits are claimed to pay for themselves in a few years. The diesel engines run at a constant RPM for peak efficiency.
Military off-road vehicles
Since 1985, the US military has been testing serial hybrid Humvees and has found them to deliver faster acceleration, a stealth mode with a low thermal signature, near-silent operation, and greater fuel economy.
Ships
Ships with both mast-mounted sails and steam engines were an early form of a hybrid vehicle. Another example is the diesel–electric submarine. This runs on batteries when submerged and the batteries can be recharged by the diesel engine when the craft is on the surface.
As of a recent count, there are 550 ships with an average of 1.6 MWh of batteries; the average was 500 kWh in 2016.
Newer hybrid ship-propulsion schemes include large towing kites manufactured by companies such as SkySails. Towing kites can fly at heights several times higher than the tallest ship masts, capturing stronger and steadier winds.
Aircraft
The Boeing Fuel Cell Demonstrator Airplane has a Proton-Exchange Membrane (PEM) fuel cell/lithium-ion battery hybrid system to power an electric motor, which is coupled to a conventional propeller. The fuel cell provides all power for the cruise phase of flight. During takeoff and climb, the flight segment that requires the most power, the system draws on lightweight lithium-ion batteries.
The demonstrator aircraft is a Dimona motor glider, built by Diamond Aircraft Industries of Austria, which also carried out structural modifications to the aircraft. With a wingspan of , the airplane will be able to cruise at about on power from the fuel cell.
Hybrid FanWings have been designed. Such a FanWing uses two engines and is capable of autorotating and landing like a helicopter.
Engine type
Hybrid electric-petroleum vehicles
When the term hybrid vehicle is used, it most often refers to a Hybrid electric vehicle. These encompass such vehicles as the Saturn Vue, Toyota Prius, Toyota Yaris, Toyota Camry Hybrid, Ford Escape Hybrid, Ford Fusion Hybrid, Toyota Highlander Hybrid, Honda Insight, Honda Civic Hybrid, Lexus RX 400h, and 450h, Hyundai Ioniq Hybrid, Hyundai Sonata Hybrid, Hyundai Elantra Hybrid, Kia Sportage Hybrid, Kia Niro Hybrid, Kia Sorento Hybrid and others. A petroleum-electric hybrid most commonly uses internal combustion engines (using a variety of fuels, generally gasoline or Diesel engines) and electric motors to power the vehicle. The energy is stored in the fuel of the internal combustion engine and an electric battery set. There are many types of petroleum-electric hybrid drivetrains, from Full hybrid to Mild hybrid, which offer varying advantages and disadvantages.
William H. Patton filed a patent application for a gasoline-electric hybrid rail-car propulsion system in early 1889, and for a similar hybrid boat propulsion system in mid 1889. There is no evidence that his hybrid boat met with any success, but he built a prototype hybrid tram and sold a small hybrid locomotive.
In 1899, Henri Pieper developed the world's first petro-electric hybrid automobile. In 1900, Ferdinand Porsche developed a series-hybrid using two motor-in-wheel-hub arrangements with an internal combustion generator set providing the electric power; Porsche's hybrid set two speed records.
While liquid fuel/electric hybrids date back to the late 19th century, the regenerative-braking hybrid was invented by David Arthurs, an electrical engineer from Springdale, Arkansas, in 1978–79. His home-converted Opel GT was reported to return as much as 75 mpg; plans for the original design are still sold, and a modified version is available on the Mother Earth News website.
The plug-in electric vehicle (PEV) is becoming more and more common. It has the range needed in locations where there are wide gaps with no services. The batteries can be plugged into house (mains) electricity for charging, as well as being charged while the engine is running.
Continuously outboard recharged electric vehicle
Some battery electric vehicles can be recharged while the user drives. Such a vehicle establishes contact with an electrified rail, plate, or overhead wires on the highway via an attached conducting wheel or other similar mechanisms (see conduit current collection). The vehicle's batteries are recharged by this process—on the highway—and can then be used normally on other roads until the battery is discharged. For example, some of the battery-electric locomotives used for maintenance trains on the London Underground are capable of this mode of operation.
Developing an infrastructure for battery electric vehicles would provide the advantage of virtually unrestricted highway range. Since many destinations are within 100 km of a major highway, this technology could reduce the need for expensive battery systems. However, private use of the existing electrical system is almost universally prohibited. Besides, the technology for such electrical infrastructure is largely outdated and, outside some cities, not widely distributed (see Conduit current collection, trams, electric rail, trolleys, third rail). Updating the required electrical and infrastructure costs could perhaps be funded by toll revenue or by dedicated transportation taxes.
Hybrid fuel (dual mode)
In addition to vehicles that use two or more different devices for propulsion, some also consider vehicles that use distinct energy sources or input types ("fuels") in the same engine to be hybrids, although, to avoid confusion with the hybrids described above and to use the terms correctly, these are perhaps better described as dual-mode vehicles:
Some trolleybuses can switch between an onboard diesel engine and overhead electrical power depending on conditions (see dual-mode bus). In principle, this could be combined with a battery subsystem to create a true plug-in hybrid trolleybus, although , no such design seems to have been announced.
Flexible-fuel vehicles can use a mixture of input fuels mixed in one tank—typically gasoline and ethanol, methanol, or biobutanol.
Bi-fuel vehicle: Liquified petroleum gas and natural gas are very different from petrol or diesel and cannot be stored in the same tanks, so it would be challenging to build an LPG or NG flexible-fuel system. Instead, vehicles are built with two parallel fuel systems feeding one engine. For example, some Chevrolet Silverado 2500 HDs can switch seamlessly between petroleum and natural gas, offering a range of over 1000 km (650 miles). While the duplicated tanks cost space in some applications, the increased range, decreased fuel cost, and flexibility where LPG or CNG infrastructure is incomplete may be a significant incentive to purchase. While the US natural gas infrastructure is partially incomplete, it is growing and in 2013 had 2600 CNG stations in place. Rising gasoline prices may push consumers to purchase these vehicles. In 2013 when gas prices traded around US, the price of gasoline was US, compared to natural gas's . On a per-unit-of-energy basis, this makes natural gas much cheaper than gasoline (a cost-per-energy comparison is sketched after this list).
Some vehicles have been modified to use another fuel source if it is available, such as cars modified to run on autogas (LPG) and diesels modified to run on waste vegetable oil that has not been processed into biodiesel.
Power-assist mechanisms for bicycles and other human-powered vehicles are also included (see Motorized bicycle).
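The per-unit-of-energy comparison mentioned in the bi-fuel item above can be illustrated with a short calculation. The following Python sketch is not taken from the article's sources; the pump prices and energy-content figures are assumed, illustrative values used only to show how fuel prices are normalised to cost per megajoule.

GASOLINE_ENERGY_MJ_PER_GALLON = 120.0   # approximate energy content of a US gallon of gasoline
CNG_ENERGY_MJ_PER_GGE = 120.0           # a gasoline-gallon equivalent (GGE) of CNG holds about the same energy

def cost_per_mj(price_usd, energy_mj):
    """Fuel cost in US dollars per megajoule of energy."""
    return price_usd / energy_mj

# Hypothetical pump prices (assumed for illustration, not sourced from the article):
gasoline_price_per_gallon = 3.50
cng_price_per_gge = 2.10

gasoline = cost_per_mj(gasoline_price_per_gallon, GASOLINE_ENERGY_MJ_PER_GALLON)
cng = cost_per_mj(cng_price_per_gge, CNG_ENERGY_MJ_PER_GGE)

print(f"Gasoline: ${gasoline:.4f}/MJ")
print(f"CNG:      ${cng:.4f}/MJ")
print(f"CNG costs about {cng / gasoline:.0%} as much as gasoline per unit of energy")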
Fluid power hybrid
Hydraulic hybrid and pneumatic hybrid vehicles use an engine or regenerative braking (or both) to charge a pressure accumulator to drive the wheels via hydraulic (liquid) or pneumatic (compressed gas) drive units. In most cases the engine is detached from the drivetrain, serving solely to charge the energy accumulator. The transmission is seamless. Regenerative braking can be used to recover some of the supplied drive energy back into the accumulator.
Petro-air hybrid
A French company, MDI, has designed and has running models of a petro-air hybrid engine car. The system does not use air motors to drive the vehicle directly; instead, the vehicle is driven by a hybrid engine that uses a mixture of compressed air and gasoline injected into the cylinders. A key aspect of the hybrid engine is the "active chamber", a compartment in which the air is heated by burning fuel, doubling the energy output. Tata Motors of India assessed the design for full production for the Indian market and moved on to "completing detailed development of the compressed air engine into specific vehicle and stationary applications".
Petro-hydraulic hybrid
Petro-hydraulic configurations have been common in trains and heavy vehicles for decades. The auto industry recently focused on this hybrid configuration as it now shows promise for introduction into smaller vehicles.
In petro-hydraulic hybrids, the energy recovery rate is high, and the system is therefore more efficient than battery-charged electric hybrids using current battery technology, demonstrating a 60% to 70% increase in energy economy in US Environmental Protection Agency (EPA) testing. The charging engine needs only to be sized for average usage, with acceleration bursts drawing on the energy stored in the hydraulic accumulator, which is charged during low-demand vehicle operation. The charging engine runs at optimum speed and load for efficiency and longevity. Under tests undertaken by the EPA, a hydraulic hybrid Ford Expedition returned 32 mpg (7.4 L/100 km) city and 22 mpg (11 L/100 km) highway. UPS currently has two trucks in service using this technology.
Although petro-hydraulic hybrid technology has been known for decades and used in trains as well as very large construction vehicles, the high cost of the equipment kept the systems out of lighter trucks and cars. An experiment proved the viability of small petro-hydraulic hybrid road vehicles in 1978, when a group of students at Minneapolis, Minnesota's Hennepin Vocational Technical Center converted a Volkswagen Beetle to run as a petro-hydraulic hybrid using off-the-shelf components. A car rated at 32 mpg was returning 75 mpg with the 60 hp engine replaced by a 16 hp engine, and the experimental car reached 70 mph.
In the 1990s, a team of engineers working at EPA's National Vehicle and Fuel Emissions Laboratory succeeded in developing a revolutionary type of petro-hydraulic hybrid powertrain that would propel a typical American sedan car. The test car achieved over 80 mpg on combined EPA city/highway driving cycles. Acceleration was 0-60 mph in 8 seconds, using a 1.9 L diesel engine. No lightweight materials were used. The EPA estimated that produced in high volumes the hydraulic components would add only $700 to the base cost of the vehicle.
The petro-hydraulic hybrid system has a faster and more efficient charge/discharge cycling than petro-electric hybrids and is also cheaper to build. The accumulator vessel size dictates total energy storage capacity and may require more space than an electric battery set. Any vehicle space consumed by a larger size of accumulator vessel may be offset by the need for a smaller sized charging engine, in HP and physical size.
Research is underway in both large corporations and small companies, and the focus has now switched to smaller vehicles, where expensive system components and drive motors that were inefficient at part load had previously precluded installation in smaller trucks and cars. A British company, Artemis Intelligent Power, made a breakthrough by introducing an electronically controlled hydraulic motor/pump, the Digital Displacement motor/pump, which is highly efficient across all speed ranges and loads, making small petro-hydraulic hybrid applications feasible. The company converted a BMW car as a test bed to prove viability: the BMW 530i gave double the mpg in city driving compared with the standard car. This test used the standard 3,000 cc engine; with a smaller engine the figures would have been more impressive. Designing petro-hydraulic hybrids with well-sized accumulators allows the engine to be downsized to average power usage rather than peak power usage, with peak power provided by the energy stored in the accumulator. A smaller, more efficient constant-speed engine reduces weight and frees space for a larger accumulator.
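The engine-downsizing logic described above can be made concrete with a rough energy balance. The sketch below is a hypothetical illustration, not the Artemis or EPA design: the drive-cycle power figures are invented, the accumulator is treated as an ideal, lossless store, and the 10% sizing margin is an arbitrary assumption.

# Toy drive-cycle power demand at the wheels, sampled once per second (kW).
demand_kw = [5, 5, 40, 60, 60, 20, 0, 0, 30, 10, 5, 0, 55, 55, 15, 5]

avg_kw = sum(demand_kw) / len(demand_kw)
peak_kw = max(demand_kw)
print(f"Average demand: {avg_kw:.1f} kW, peak demand: {peak_kw} kW")

# Size a constant-speed charging engine for the average demand plus a small margin.
engine_kw = avg_kw * 1.1

# Track accumulator state of charge: it absorbs surplus engine output and
# covers the shortfall whenever demand exceeds engine output.
accumulator_kj = 0.0
lowest_kj = 0.0
for p in demand_kw:
    accumulator_kj += (engine_kw - p) * 1.0   # 1-second step: kW x s = kJ
    lowest_kj = min(lowest_kj, accumulator_kj)

# The most negative excursion is the usable energy the accumulator must hold
# at the start of the cycle to cover the acceleration bursts.
print(f"Engine sized at {engine_kw:.1f} kW instead of {peak_kw} kW")
print(f"Accumulator must supply at least {-lowest_kj:.0f} kJ over this cycle")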
Current vehicle bodies are designed around the mechanicals of existing engine/transmission setups, so it is restrictive and far from ideal to install petro-hydraulic mechanicals into existing bodies not designed for them. One research project aims to create a clean-sheet design for a new car that maximizes the packaging of petro-hydraulic hybrid components, with all bulky hydraulic components integrated into the chassis. One design has been claimed to return 130 mpg in tests by using a large hydraulic accumulator that also forms the structural chassis of the car. Small hydraulic driving motors are incorporated within the wheel hubs, driving the wheels and reversing to recover kinetic braking energy. The hub motors eliminate the need for friction brakes, mechanical transmissions, driveshafts, and U-joints, reducing cost and weight; hydrostatic drive with no friction brakes is already used in industrial vehicles. The aim is 170 mpg in average driving conditions. Energy from the shock absorbers and kinetic braking energy that would normally be wasted assists in charging the accumulator, and a small fossil-fuelled piston engine sized for average power use charges the accumulator, which is sized to run the car for 15 minutes when fully charged. The goal is for a fully charged accumulator to deliver 0-60 mph acceleration in under 5 seconds using four-wheel drive.
In January 2011 industry giant Chrysler announced a partnership with the US Environmental Protection Agency (EPA) to design and develop an experimental petro-hydraulic hybrid powertrain suitable for use in large passenger cars. In 2012 an existing production minivan was adapted to the new hydraulic powertrain for assessment.
PSA Peugeot Citroën exhibited an experimental "Hybrid Air" engine at the 2013 Geneva Motor Show. The vehicle uses nitrogen gas, compressed by energy harvested from braking or deceleration, to power a hydraulic drive which supplements power from its conventional gasoline engine. The hydraulic and electronic components were supplied by Robert Bosch GmbH. Mileage was estimated to be about on the Euro test cycle if installed in a Citroën C3 type of body. Although the car was ready for production and proved feasible, delivering the claimed results, PSA Peugeot Citroën was unable to attract a major manufacturer to share the high development costs and shelved the project until a partnership could be arranged.
Electric-human power hybrid vehicle
Another form of hybrid vehicle is the human-powered electric vehicle. These include the Sinclair C5, Twike, electric bicycles, electric skateboards, and electric motorcycles and scooters.
Hybrid vehicle power train configurations
Parallel hybrid
In a parallel hybrid vehicle, an electric motor and an internal combustion engine are coupled such that they can power the vehicle either individually or together. Most commonly the internal combustion engine, the electric motor, and the gearbox are coupled by automatically controlled clutches. For electric driving, the clutch between the internal combustion engine and the electric motor is open while the clutch to the gearbox is engaged; in combustion mode the engine and motor run at the same speed.
The first mass-production parallel hybrid sold outside Japan was the 1st generation Honda Insight.
The Mercedes-Benz E 300 BlueTEC HYBRID, released in 2012 only in European markets, is a rare mass-produced diesel hybrid vehicle powered by a Mercedes-Benz OM651 engine developing , paired with an electric motor positioned between the engine and the gearbox, for a combined output of . The vehicle has a fuel consumption rate of .
Mild parallel hybrid
These types use a generally compact electric motor (usually under 20 kW) to provide auto stop/start, extra power assist during acceleration, and generation during deceleration (regenerative braking).
On-road examples include Honda Civic Hybrid, Honda Insight 2nd generation, Honda CR-Z, Honda Accord Hybrid, Mercedes Benz S400 BlueHYBRID, BMW 7 Series hybrids, General Motors BAS Hybrids, Suzuki S-Cross, Suzuki Wagon R and Smart fortwo with micro hybrid drive.
Power-split or series-parallel hybrid
In a power-split hybrid electric drive train there are two sources of motive power: an electric traction motor and an internal combustion engine. The power from these two sources can be shared to drive the wheels via a power-split device, which is a simple planetary gear set. The ratio can range from 100% internal combustion engine to 100% electric traction motor, or anything in between, and the combustion engine can also act as a generator charging the batteries.
Modern versions such as the Toyota Hybrid Synergy Drive have a second electric motor/generator connected to the planetary gear. In cooperation with the traction motor/generator and the power-split device, this provides a continuously variable transmission.
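The power-split device is a planetary (epicyclic) gear set whose member speeds are tied together by the gear tooth counts; holding the engine at an efficient speed while varying the generator speed is what produces the continuously variable transmission behaviour. The following Python sketch is a generic illustration of that kinematic relationship rather than any manufacturer's published design; the tooth counts and speeds are assumed round numbers.

# Kinematic constraint of a simple planetary gear used as a power-split device:
#   (S + R) * w_carrier = S * w_sun + R * w_ring
# In a typical arrangement the engine drives the planet carrier, a motor/generator
# sits on the sun gear, and the ring gear is coupled to the wheels (and second motor).
# Tooth counts below are assumed for illustration only.

SUN_TEETH = 30
RING_TEETH = 78

def ring_speed(engine_rpm, generator_rpm, s=SUN_TEETH, r=RING_TEETH):
    """Ring-gear (output) speed implied by engine (carrier) and generator (sun) speeds."""
    return ((s + r) * engine_rpm - s * generator_rpm) / r

# Holding the engine at 2000 rpm, varying the generator speed changes the
# output speed continuously -- the "electrical CVT" effect.
for gen_rpm in (-2000, 0, 2000, 6500):
    out = ring_speed(2000, gen_rpm)
    print(f"engine 2000 rpm, generator {gen_rpm:>6} rpm -> ring gear {out:7.0f} rpm")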
On the open road, the primary power source is the internal combustion engine. When maximum power is required, for example, to overtake, the traction electric motor is used to assist. This increases the available power for a short period, giving the effect of having a larger engine than actually installed. In most applications, the combustion engine is switched off when the car is slow or stationary thereby reducing curbside emissions.
Passenger car installations include Toyota Prius, Ford Escape and Fusion, as well as Lexus RX400h, RX450h, GS450h, LS600h, and CT200h.
Series hybrid
A series- or serial-hybrid vehicle is driven by an electric motor, functioning as an electric vehicle while the battery pack energy supply is sufficient, with an engine tuned for running as a generator when the battery pack is insufficient. There is typically no mechanical connection between the engine and the wheels, and the primary purpose of the range extender is to charge the battery. Series-hybrids have also been referred to as extended range electric vehicle, range-extended electric vehicle, or electric vehicle-extended range (EREV/REEV/EVER).
The BMW i3 with range extender is a production series-hybrid. It operates as an electric vehicle until the battery charge is low, and then activates an engine-powered generator to maintain power, and is also available without the range extender. The Fisker Karma was the first series-hybrid production vehicle.
When describing cars, the battery of a series-hybrid is usually charged by being plugged in—but a series-hybrid may also allow for a battery to only act as a buffer (and for regeneration purposes), and for the electric motor's power to be supplied constantly by a supporting engine. Series arrangements have been common in diesel-electric locomotives and ships. Ferdinand Porsche effectively invented this arrangement in speed-record-setting racing cars in the early 20th century, such as the Lohner–Porsche Mixte Hybrid. Porsche named his arrangement "System Mixt" and it was a wheel hub motor design, where each of the two front wheels was powered by a separate motor. This arrangement was sometimes referred to as an electric transmission, as the electric generator and driving motor replaced a mechanical transmission. The vehicle could not move unless the internal combustion engine was running.
In 1997 Toyota released the first series-hybrid bus sold in Japan. GM introduced the Chevy Volt series plug-in hybrid in 2010, aiming for an all-electric range of , though this car also has a mechanical connection between the engine and drivetrain. Supercapacitors combined with a lithium-ion battery bank have been used by AFS Trinity in a converted Saturn Vue SUV vehicle. Using supercapacitors they claim up to 150 mpg in a series-hybrid arrangement.
Nissan Note e-power is an example of a series hybrid technology since 2016 in Japan.
Plug-in hybrid electric vehicle
Another subtype of hybrid vehicles is the plug-in hybrid electric vehicle. The plug-in hybrid is usually a general fuel-electric (parallel or serial) hybrid with increased energy storage capacity, usually through a lithium-ion battery, which allows the vehicle to drive on all-electric mode a distance that depends on the battery size and its mechanical layout (series or parallel). It may be connected to mains electricity supply at the end of the journey to avoid charging using the on-board internal combustion engine.
This concept is attractive to those seeking to minimize on-road emissions by avoiding—or at least minimizing—the use of ICE during daily driving. As with pure electric vehicles, the total emissions saving, for example in CO2 terms, is dependent upon the energy source of the electricity generating company.
For some users, this type of vehicle may also be financially attractive so long as the electrical energy being used is cheaper than the petrol/diesel that they would have otherwise used. Current tax systems in many European countries use mineral oil taxation as a major income source. This is generally not the case for electricity, which is taxed uniformly for the domestic customer, however that person uses it. Some electricity suppliers also offer price benefits for off-peak night users, which may further increase the attractiveness of the plug-in option for commuters and urban motorists.
Road safety for cyclists, pedestrians
A 2009 National Highway Traffic Safety Administration report examined hybrid electric vehicle accidents that involved pedestrians and cyclists and compared them to accidents involving internal combustion engine vehicles (ICEV). The findings showed that, in certain road situations, HEVs are more dangerous for those on foot or bicycle. For accidents where a vehicle was slowing or stopping, backing up, entering, or leaving a parking space (when the sound difference between HEVs and ICEVs is most pronounced), HEVs were twice as likely to be involved in a pedestrian crash than ICEVs. For crashes involving cyclists or pedestrians, there was a higher incident rate for HEVs than ICEVs when a vehicle was turning a corner. However, there was no statistically significant difference between the types of vehicles when they were driving straight.
Several automakers developed electric vehicle warning sounds designed to alert pedestrians to the presence of electric drive vehicles such as hybrid electric vehicle, plug-in hybrid electric vehicles and all-electric vehicles (EVs) travelling at low speeds. Their purpose is to make pedestrians, cyclists, the blind, and others aware of the vehicle's presence while operating in all-electric mode.
Vehicles in the market with such safety devices include the Nissan Leaf, Chevrolet Volt, Fisker Karma, Honda FCX Clarity, Nissan Fuga Hybrid/Infiniti M35, Hyundai ix35 FCEV, Hyundai Sonata Hybrid, 2012 Honda Fit EV, the 2012 Toyota Camry Hybrid, 2012 Lexus CT200h, and all the Prius family of cars.
Environmental issues
Fuel consumption and emissions reductions
The hybrid vehicle typically achieves greater fuel economy and lower emissions than conventional internal combustion engine vehicles (ICEVs). These savings are primarily achieved by three elements of a typical hybrid design:
Relying on both the engine and the electric motors for peak power needs, allowing an engine sized more for average usage than for peak power. A smaller engine can have fewer internal losses and lower weight.
Having significant battery storage capacity to store and reuse recaptured energy, especially in stop-and-go traffic typical of the city driving cycle.
Recapturing significant amounts of energy during braking that would normally be wasted as heat. This regenerative braking reduces vehicle speed by converting some of the vehicle's kinetic energy into electricity, depending upon the power rating of the motor/generator (a rough estimate of the recoverable energy per stop is sketched at the end of this subsection).
Other techniques that are not necessarily 'hybrid' features, but that are frequently found on hybrid vehicles include:
Using Atkinson cycle engines instead of Otto cycle engines for improved fuel economy.
Shutting down the engine during traffic stops or while coasting or during other idle periods.
Improving aerodynamics. Part of the reason that SUVs get poor fuel economy is aerodynamic drag: a box-shaped car or truck must exert more force to move through the air, making the engine work harder. Improving the shape and aerodynamics of a car helps fuel economy and can also improve vehicle handling.
Using low rolling resistance tires. Tires were often designed for a quiet, smooth ride and high grip, with efficiency a lower priority; tires cause mechanical drag, again making the engine work harder and consume more fuel. Hybrid cars may use tires that are inflated to higher pressure and are stiffer, or that through the choice of carcass structure and rubber compound have lower rolling resistance while retaining acceptable grip, improving fuel economy whatever the power source.
Powering the a/c, power steering, and other auxiliary pumps electrically as and when needed; this reduces mechanical losses when compared with driving them continuously with traditional engine belts.
These features make a hybrid vehicle particularly efficient for city traffic where there are frequent stops, coasting, and idling periods. In addition noise emissions are reduced, particularly at idling and low operating speeds, in comparison to conventional engine vehicles. For continuous high-speed highway use, these features are much less useful in reducing emissions.
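As a rough illustration of the regenerative-braking element listed above, the following sketch estimates how much energy might be recaptured during a single stop. It is not drawn from any particular vehicle: the mass, speed, and round-trip efficiency are assumed values chosen only to show the order of magnitude.

# Back-of-the-envelope estimate of recoverable braking energy for one stop.
# All figures below are illustrative assumptions.

mass_kg = 1500.0        # assumed vehicle mass
speed_kmh = 50.0        # assumed speed just before braking
round_trip_eff = 0.6    # assumed fraction of kinetic energy usefully returned
                        # after generator, battery and motor losses

speed_ms = speed_kmh / 3.6
kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
recovered_j = kinetic_energy_j * round_trip_eff

print(f"Kinetic energy at {speed_kmh:.0f} km/h: {kinetic_energy_j / 1000:.0f} kJ")
print(f"Usefully recovered at {round_trip_eff:.0%} round-trip efficiency: "
      f"{recovered_j / 1000:.0f} kJ (about {recovered_j / 3600:.0f} Wh)")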
Hybrid vehicle emissions
Hybrid vehicle emissions today are approaching or even falling below the recommended level set by the EPA (Environmental Protection Agency), which suggests that a typical passenger vehicle should emit the equivalent of 5.5 metric tons of carbon dioxide. The three most popular hybrid vehicles, the Honda Civic, Honda Insight, and Toyota Prius, set the standard even higher by producing 4.1, 3.5, and 3.5 tons respectively, showing a major improvement in carbon dioxide emissions. Hybrid vehicles can reduce air emissions of smog-forming pollutants by up to 90% and cut carbon dioxide emissions in half.
More fossil-fuel energy is needed to build a hybrid vehicle than a conventional car, but this increase is more than offset by reduced emissions during the vehicle's use.
Hybrid emissions have been understated when certification cycles are compared with real-world driving. In one study using real-world driving data, hybrids were shown to emit on average 120 g of CO2 per km instead of the 44 g per km measured in official tests.
Toyota states that three hybrid vehicles have the same emissions-reduction effect as one battery electric vehicle from a carbon-neutrality viewpoint, which means reducing emissions to zero throughout the entire life cycle of a product, from procurement of raw materials through manufacturing and transportation to use, recycling, and disposal.
Environmental impact of hybrid car battery
Though hybrid cars consume less fuel than conventional cars, there is still an issue regarding the environmental damage of the hybrid car battery. Today, most hybrid car batteries are Lithium-ion, which has higher energy density than nickel–metal hydride batteries and is more environmentally friendly than lead-based batteries which constitute the bulk of petrol car starter batteries today.
There are many types of batteries. Some are far more toxic than others. Lithium-ion is the least toxic of the batteries mentioned above.
The toxicity levels and environmental impact of nickel–metal hydride batteries—the type previously used in hybrids—are much lower than those of batteries like lead–acid or nickel–cadmium, according to one source. Another source claims nickel–metal hydride batteries are much more toxic than lead batteries, and that recycling and disposing of them safely is difficult. In general, various soluble and insoluble nickel compounds, such as nickel chloride and nickel oxide, have known carcinogenic effects in chick embryos and rats. The main nickel compound in NiMH batteries is nickel oxyhydroxide (NiOOH), which is used as the positive electrode. However, nickel–metal hydride batteries have fallen out of favour in hybrid vehicles as various lithium-ion chemistries have matured in the market.
The lithium-ion battery has become a market leader in this segment due to its high energy density, stability, and cost compared with other technologies. A market leader in this area is Panasonic, through its partnership with Tesla.
Lithium-ion batteries are appealing because they have the highest energy density of any rechargeable battery and can produce a voltage more than three times that of a nickel–metal hydride cell while storing large quantities of electricity. The batteries also provide higher output (boosting vehicle power), higher efficiency (avoiding wasteful use of electricity), and excellent durability, with the life of the battery roughly equivalent to the life of the vehicle. Additionally, the use of lithium-ion batteries reduces the overall weight of the vehicle and achieves fuel economy up to 30% better than petrol-powered vehicles, with a consequent reduction in CO2 emissions, helping to mitigate global warming.
Lithium-ion batteries are also safer to recycle, with Volkswagen Group pioneering processes to recycle them; this is also being pursued by various other large companies, such as BMW, Audi, Mercedes-Benz, and Tesla. A main goal within many of these companies is to combat disinformation about the nature of lithium batteries—primarily the claim that they are not recyclable—which stems largely from articles discussing the difficulties of recycling.
Charging
There are two different levels of charging in plug-in hybrids. Level 1 charging is the slower method, using a 120 V/15 A single-phase grounded outlet. Level 2 is faster; existing Level 2 equipment offers charging from 208 V or 240 V (at up to 80 A, 19.2 kW) and may require dedicated equipment and a connection installation for home or public units. The optimum charging window for lithium-ion cells is 3–4.2 V. Recharging with a 120-volt household outlet takes several hours, a 240-volt charger takes 1–4 hours, and a quick charge takes approximately 30 minutes to reach 80% charge. Three important factors for users are distance per charge, cost of charging, and time to charge.
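The relationship between charger power and charging time implied above can be estimated by dividing the energy needed by the charging power. The sketch below is a simplified illustration: it ignores the charging-curve taper near full charge, and the pack size, charger efficiency, and Level 2 power are assumed example values rather than specifications.

def hours_to_charge(battery_kwh, charger_kw, start_soc=0.0, end_soc=1.0, efficiency=0.9):
    """Rough charging time in hours: energy needed divided by effective charger power."""
    energy_needed_kwh = battery_kwh * (end_soc - start_soc)
    return energy_needed_kwh / (charger_kw * efficiency)

battery_kwh = 8.8   # assumed plug-in hybrid pack size

# Level 1: a 120 V / 15 A circuit, derated to ~80% continuous loading.
level1_kw = 120 * 15 * 0.8 / 1000
# Level 2: a common home-charger rate (assumed); the equipment allows up to 19.2 kW.
level2_kw = 7.2

print(f"Level 1 (~{level1_kw:.1f} kW): about {hours_to_charge(battery_kwh, level1_kw):.1f} hours")
print(f"Level 2 (~{level2_kw:.1f} kW): about {hours_to_charge(battery_kwh, level2_kw):.1f} hours")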
For a hybrid to run on electrical power, the battery must first be charged, for example through regenerative braking; the stored electricity is then discharged most effectively when the car accelerates or climbs an incline.
As of 2014, battery packs capable of powering a car solely on electricity for 70–130 miles (110–210 km) on a single charge were available, with capacities ranging from 4.4 kWh up to 85 kWh on a fully electric car. On a hybrid car, battery packs currently range from 0.6 kWh to 2.4 kWh, representing a large difference in the use of electricity in hybrid cars.
Raw materials increasing costs
There is an impending increase in the costs of many rare materials used in the manufacture of hybrid cars. For example, the rare-earth element dysprosium is required to fabricate many of the advanced electric motors and battery systems in hybrid propulsion systems. Neodymium is another rare earth metal which is a crucial ingredient in high-strength magnets that are found in permanent magnet electric motors.
Nearly all the rare-earth elements in the world come from China, and many analysts believed that an overall increase in Chinese electronics manufacturing would consume this entire supply by 2012. In addition, export quotas on Chinese rare-earth elements have created uncertainty about the available supply.
A few non-Chinese sources such as the advanced Hoidas Lake project in northern Canada as well as Mount Weld in Australia are currently under development; however, the barriers to entry are high and require years to go online.
How hybrid-electric vehicles work
Hybrid-electric vehicles (HEVs) combine the advantage of gasoline engines and electric motors. The key areas for efficiency or performance gains are regenerative braking, dual power sources, and less idling.
Regenerative braking. The electric motor normally converts electricity into physical motion. Used in reverse as a generator, it can also convert physical motion into electricity. This both slows the car (braking) and recharges (regenerates) the batteries.
Dual power. Power can come from either the engine, motor, or both depending on driving circumstances. Additional power to assist the engine in accelerating or climbing might be provided by the electric motor. Or more commonly, a smaller electric motor provides all of the power for low-speed driving conditions and is augmented by the engine at higher speeds.
Automatic start/shutoff. It automatically shuts off the engine when the vehicle comes to a stop and restarts it when the accelerator is pressed down. This automation is much simpler with an electric motor. Also, see dual power above.
Alternative green vehicles
Other types of green vehicles include vehicles that run fully or partly on energy sources other than fossil fuel. Another option is to use alternative fuel compositions (i.e. biofuels) in conventional fossil fuel-based vehicles, making them run partly on renewable energy sources.
Other approaches include personal rapid transit, a public transportation concept that offers automated on-demand non-stop transportation, on a network of specially built guideways.
Marketing
Adoption
Automakers spend around US$8 million each year marketing hybrid vehicles. With combined effort from many car companies, the hybrid industry has sold millions of hybrids.
Hybrid car companies such as Toyota, Honda, Ford, and BMW have pulled together to create a movement of hybrid vehicle sales, pushed by Washington lobbyists, to lower the world's emissions and become less reliant on petroleum.
In 2005, sales went beyond 200,000 hybrids, but in retrospect that only reduced global gasoline consumption by 200,000 gallons per day—a tiny fraction of the 360 million gallons used per day. According to Bradley Berman, author of Driving Change—One Hybrid at a Time, "cold economics shows that in real dollars, except for a brief spike in the 1970s, gas prices have remained remarkably steady and cheap. Fuel continues to represent a small part of the overall cost of owning and operating a personal vehicle". Other marketing tactics include greenwashing, the "unjustified appropriation of environmental virtue."
As Temma Ehrenfeld explained in a Newsweek article, hybrids may be more efficient than many other gasoline vehicles as far as gasoline consumption is concerned, but describing them as green and good for the environment is inaccurate.
Hybrid car companies have a long way to go if they expect to really go green. Harvard business professor Theodore Levitt, writing on "managing products" and "meeting customers' needs", states that "you must adapt to consumer expectations and anticipation of future desires." This means people buy what they want: if they want a fuel-efficient car, they buy a hybrid without thinking about the actual efficiency of the product. This "green myopia", as Ottman calls it, fails because marketers focus on the greenness of the product and not on its actual effectiveness.
Researchers and analysts say people are drawn to the new technology, as well as the convenience of fewer fill-ups. Secondly, people find it rewarding to own the better, newer, flashier, and so-called greener car.
Misleading advertising
In 2019 the term self-charging hybrid became prevalent in advertising, though cars referred to by this name do not offer any different functionality than a standard hybrid electric vehicle provides. The only self-charging effect is in energy recovery via regenerative braking, which is also true of plug-in hybrids, fuel cell electric vehicles and battery electric vehicles.
In January 2020, use of this term was prohibited in Norway as misleading advertising by Toyota and Lexus. The companies defended the term: "Our claim is based on the fact that customers never have to charge the battery of their vehicle, as it is recharged during the vehicle use. There is no intention to mislead customers, on the contrary: the point is to clearly explain the difference with plug-in hybrid vehicles."
Adoption rate
While the adoption rate for hybrids in the US is small today (2.2% of new car sales in 2011), this compares with a 17.1% share of new car sales in Japan in 2011, and it has the potential to be very large over time as more models are offered and incremental costs decline due to learning and scale benefits. However, forecasts vary widely. For instance, Bob Lutz, a long-time skeptic of hybrids, indicated he expects hybrids "will never comprise more than 10% of the US auto market." Other sources also expect hybrid penetration rates in the US will remain under 10% for many years.
More optimistic views as of 2006 include predictions that hybrids would dominate new car sales in the US and elsewhere over the next 10 to 20 years. Another approach, taken by Saurin Shah, examines the penetration rates (or S-curves) of four analogs (historical and current) to hybrid and electrical vehicles in an attempt to gauge how quickly the vehicle stock could be hybridized and/or electrified in the United States. The analogs are (1) the electric motors in US factories in the early 20th century, (2) diesel-electric locomotives on US railways in the 1920–1945 period, (3) a range of new automotive features/technologies introduced in the US over the past fifty years, and (4) e-bike purchases in China over the past few years. These analogs collectively suggest it would take at least 30 years for hybrid and electric vehicles to capture 80% of the US passenger vehicle stock.
The EPA expects the combined market share of new gasoline hybrid light-duty vehicles to reach 13.6% for the 2023 model year from 10.2% in the 2022 model year.
European Union 2020 regulation standards
The European Parliament, Council, and European Commission have reached an agreement which is aimed at reducing the average CO2 passenger car emissions to 95 g/km by 2020, according to a European Commission press release.
According to the release, the key details of the agreement are as follows:
Emissions target: The agreement will reduce average CO2 emissions from new cars to 95 g/km from 2020, as proposed by the commission. This is a 40% reduction from the mandatory 2015 target of 130 g/km. The target is an average for each manufacturer's new car fleet; it allows OEMs to build some vehicles that emit less than the average and some that emit more.
2025 target: The commission is required to propose a further emissions reduction target by the end of 2015, to take effect in 2025. This target will be in line with the EU's long-term climate goals.
Super credits for low-emission vehicles: The Regulation will give manufacturers additional incentives to produce cars with CO2 emissions of 50 g/km or less (which will be electric or plug-in hybrid cars). Each of these vehicles will be counted as two vehicles in 2020, 1.67 in 2021, 1.33 in 2022, and then as one vehicle from 2023 onwards. These super credits will help manufacturers further reduce the average emissions of their new car fleet. However, to prevent the scheme from undermining the environmental integrity of the legislation, there will be a 2.5 g/km cap per manufacturer on the contribution that super credits can make to their target in any year.
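The super-credit mechanism described above amounts to a weighted average with a cap on the benefit it can confer. The sketch below illustrates one simplified reading of that arithmetic on a made-up fleet: the multipliers and the 2.5 g/km cap come from the text above, but the vehicle mix and emission figures are assumptions, and the real regulation contains further details not modelled here.

# Simplified illustration of fleet-average CO2 with super credits.
# Multipliers: 2.0 (2020), 1.67 (2021), 1.33 (2022), 1.0 from 2023 onwards.
MULTIPLIER = {2020: 2.0, 2021: 1.67, 2022: 1.33}
CAP_G_PER_KM = 2.5          # maximum benefit super credits may confer per manufacturer
LOW_EMISSION_LIMIT = 50     # g/km threshold for earning super credits

def fleet_average(fleet, year):
    """fleet is a list of (co2_g_per_km, cars_sold). Returns (plain, with_super_credits)."""
    plain = sum(g * n for g, n in fleet) / sum(n for _, n in fleet)

    m = MULTIPLIER.get(year, 1.0)
    # Each low-emission car is counted m times when forming the average.
    num = sum(g * n * (m if g <= LOW_EMISSION_LIMIT else 1.0) for g, n in fleet)
    den = sum(n * (m if g <= LOW_EMISSION_LIMIT else 1.0) for g, n in fleet)
    credited = num / den

    # The benefit is capped at 2.5 g/km relative to the uncredited average.
    return plain, max(credited, plain - CAP_G_PER_KM)

# Hypothetical manufacturer fleet: (g CO2/km, number of cars sold)
fleet = [(0, 5_000), (45, 10_000), (95, 60_000), (130, 25_000)]

for year in (2020, 2023):
    plain, credited = fleet_average(fleet, year)
    print(f"{year}: fleet average {plain:.1f} g/km, with super credits {credited:.1f} g/km")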
| Technology | Basics_7 | null |
157774 | https://en.wikipedia.org/wiki/Permafrost | Permafrost | Permafrost () is soil or underwater sediment which continuously remains below 0 °C (32 °F) for two years or more: the oldest permafrost has been continuously frozen for around 700,000 years. Whilst the shallowest permafrost has a vertical extent of below a meter (3 ft), the deepest is greater than . Similarly, the area of individual permafrost zones may be limited to narrow mountain summits or extend across vast Arctic regions. The ground beneath glaciers and ice sheets is not usually defined as permafrost. On land, permafrost is generally located beneath a so-called active layer of soil which freezes and thaws depending on the season.
Around 15% of the Northern Hemisphere or 11% of the global surface is underlain by permafrost, covering a total area of around . This includes large areas of Alaska, Canada, Greenland, and Siberia. It is also located in high mountain regions, with the Tibetan Plateau being a prominent example. Only a minority of permafrost exists in the Southern Hemisphere, where it is confined to mountain slopes such as in the Andes of Patagonia, the Southern Alps of New Zealand, or the highest mountains of Antarctica.
Permafrost contains large amounts of dead biomass that have accumulated throughout millennia without having had the chance to fully decompose and release their carbon, making tundra soil a carbon sink. As global warming heats the ecosystem, frozen soil thaws and becomes warm enough for decomposition to start anew, accelerating the permafrost carbon cycle. Depending on conditions at the time of thaw, decomposition can release either carbon dioxide or methane, and these greenhouse gas emissions act as a climate change feedback. The emissions from thawing permafrost will have an impact on the climate sufficient to affect global carbon budgets. It is difficult to accurately predict how much greenhouse gas thawing permafrost will release because the different thaw processes are still uncertain. There is widespread agreement that the emissions will be smaller than human-caused emissions and not large enough to result in runaway warming. Instead, the annual permafrost emissions are likely comparable with global emissions from deforestation, or with the annual emissions of large countries such as Russia, the United States, or China.
Apart from its climate impact, permafrost thaw brings more risks. Formerly frozen ground often contains enough ice that when it thaws, hydraulic saturation is suddenly exceeded, so the ground shifts substantially and may even collapse outright. Many buildings and other infrastructure were built on permafrost when it was frozen and stable, and so are vulnerable to collapse if it thaws. Estimates suggest nearly 70% of such infrastructure is at risk by 2050, and that the associated costs could rise to tens of billions of dollars in the second half of the century. Furthermore, between 13,000 and 20,000 sites contaminated with toxic waste are present in the permafrost, as well as natural mercury deposits, all of which are liable to leak and pollute the environment as the warming progresses. Lastly, concerns have been raised about the potential for pathogenic microorganisms surviving the thaw and contributing to future pandemics. However, this is considered unlikely, and a scientific review on the subject describes the risks as "generally low".
Classification and extent
Permafrost is soil, rock or sediment that is frozen for more than two consecutive years. In practice, this means that permafrost occurs at a mean annual temperature of 0 °C (32 °F) or below. In the coldest regions, the depth of continuous permafrost can exceed . It typically exists beneath the so-called active layer, which freezes and thaws annually and so can support plant growth, as roots can only take hold in soil that has thawed. Active layer thickness is measured at its maximum extent at the end of summer: as of 2018, the average thickness in the Northern Hemisphere is ~, but there are significant regional differences. Northeastern Siberia, Alaska and Greenland have the most solid permafrost with the lowest extent of active layer (less than on average, and sometimes only ), while southern Norway and the Mongolian Plateau are the only areas where the average active layer is deeper than , with the record of . The border between the active layer and the permafrost itself is sometimes called the permafrost table.
Around 15% of Northern Hemisphere land that is not completely covered by ice is directly underlain by permafrost; 22% is defined as part of a permafrost zone or region. This is because only slightly more than half of this area is defined as a continuous permafrost zone, where 90%–100% of the land is underlain by permafrost. Around 20% is instead defined as discontinuous permafrost, where the coverage is between 50% and 90%. Finally, the remaining <30% of permafrost regions consists of areas with 10%–50% coverage, which are defined as sporadic permafrost zones, and some areas that have isolated patches of permafrost covering 10% or less of their area. Most of this area is found in Siberia, northern Canada, Alaska and Greenland. Beneath the active layer annual temperature swings of permafrost become smaller with depth. The greatest depth of permafrost occurs right before the point where geothermal heat maintains a temperature above freezing. Above that bottom limit there may be permafrost with a consistent annual temperature—"isothermal permafrost".
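The zone definitions above reduce to a simple classification by the fraction of land underlain by permafrost. The following sketch encodes those thresholds; the function name and the example coverage values are illustrative additions, not part of any standard dataset or classification code.

def permafrost_zone(coverage_percent):
    """Classify an area by the percentage of its land underlain by permafrost."""
    if coverage_percent >= 90:
        return "continuous permafrost zone"
    if coverage_percent >= 50:
        return "discontinuous permafrost zone"
    if coverage_percent >= 10:
        return "sporadic permafrost zone"
    if coverage_percent > 0:
        return "isolated patches of permafrost"
    return "no permafrost"

# Illustrative coverage values (assumed):
for pct in (95, 70, 25, 5, 0):
    print(f"{pct:>3}% coverage -> {permafrost_zone(pct)}")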
Continuity of coverage
Permafrost typically forms in any climate where the mean annual air temperature is lower than the freezing point of water. Exceptions are found in humid boreal forests, such as in Northern Scandinavia and the North-Eastern part of European Russia west of the Urals, where snow acts as an insulating blanket. Glaciated areas may also be exceptions. Since all glaciers are warmed at their base by geothermal heat, temperate glaciers, which are near the pressure melting point throughout, may have liquid water at the interface with the ground and are therefore free of underlying permafrost. "Fossil" cold anomalies in the geothermal gradient in areas where deep permafrost developed during the Pleistocene persist down to several hundred metres. This is evident from temperature measurements in boreholes in North America and Europe.
Discontinuous permafrost
The below-ground temperature varies less from season to season than the air temperature, with mean annual temperatures tending to increase with depth due to the geothermal crustal gradient. Thus, if the mean annual air temperature is only slightly below , permafrost will form only in spots that are sheltered (usually with a northern or southern aspect, in the north and south hemispheres respectively) creating discontinuous permafrost. Usually, permafrost will remain discontinuous in a climate where the mean annual soil surface temperature is between . In the moist-wintered areas mentioned before, there may not even be discontinuous permafrost down to . Discontinuous permafrost is often further divided into extensive discontinuous permafrost, where permafrost covers between 50 and 90 percent of the landscape and is usually found in areas with mean annual temperatures between , and sporadic permafrost, where permafrost cover is less than 50 percent of the landscape and typically occurs at mean annual temperatures between .
In soil science, the sporadic permafrost zone is abbreviated SPZ and the extensive discontinuous permafrost zone DPZ. Exceptions occur in un-glaciated Siberia and Alaska where the present depth of permafrost is a relic of climatic conditions during glacial ages where winters were up to colder than those of today.
Continuous permafrost
At mean annual soil surface temperatures below the influence of aspect can never be sufficient to thaw permafrost and a zone of continuous permafrost (abbreviated to CPZ) forms. A line of continuous permafrost in the Northern Hemisphere represents the most southern border where land is covered by continuous permafrost or glacial ice. The line of continuous permafrost varies around the world northward or southward due to regional climatic changes. In the southern hemisphere, most of the equivalent line would fall within the Southern Ocean if there were land there. Most of the Antarctic continent is overlain by glaciers, under which much of the terrain is subject to basal melting. The exposed land of Antarctica is substantially underlain with permafrost, some of which is subject to warming and thawing along the coastline.
Alpine permafrost
A range of elevations in both the Northern and Southern Hemisphere are cold enough to support perennially frozen ground: some of the best-known examples include the Canadian Rockies, the European Alps, Himalaya and the Tien Shan. In general, it has been found that extensive alpine permafrost requires mean annual air temperature of , though this can vary depending on local topography, and some mountain areas are known to support permafrost at . It is also possible for subsurface alpine permafrost to be covered by warmer, vegetation-supporting soil.
Alpine permafrost is particularly difficult to study, and systematic research efforts did not begin until the 1970s. Consequently, there remain uncertainties about its geography. As recently as 2009, permafrost was discovered in a new area – Africa's highest peak, Mount Kilimanjaro ( above sea level and approximately 3° south of the equator). In 2014, a collection of regional estimates of alpine permafrost extent had established a global extent of . Yet, by 2014, alpine permafrost in the Andes had not been fully mapped, although its extent has been modeled to assess the amount of water bound up in these areas.
Subsea permafrost
Subsea permafrost occurs beneath the seabed and exists in the continental shelves of the polar regions. These areas formed during the last Ice Age, when a larger portion of Earth's water was bound up in ice sheets on land and when sea levels were low. As the ice sheets melted to again become seawater during the Holocene glacial retreat, coastal permafrost became submerged shelves under relatively warm and salty boundary conditions, compared to surface permafrost. Since then, these conditions led to the gradual and ongoing decline of subsea permafrost extent. Nevertheless, its presence remains an important consideration for the "design, construction, and operation of coastal facilities, structures founded on the seabed, artificial islands, sub-sea pipelines, and wells drilled for exploration and production". Subsea permafrost can also overlay deposits of methane clathrate, which were once speculated to be a major climate tipping point in what was known as a clathrate gun hypothesis, but are now no longer believed to play any role in projected climate change.
Past extent of permafrost
At the Last Glacial Maximum, continuous permafrost covered a much greater area than it does today, covering all of ice-free Europe south to about Szeged (southeastern Hungary) and the Sea of Azov (then dry land) and East Asia south to present-day Changchun and Abashiri. In North America, only an extremely narrow belt of permafrost existed south of the ice sheet at about the latitude of New Jersey through southern Iowa and northern Missouri, but permafrost was more extensive in the drier western regions, where it extended to the southern border of Idaho and Oregon. In the Southern Hemisphere, there is some evidence for former permafrost from this period in central Otago and Argentine Patagonia, but it was probably discontinuous and related to the tundra. Alpine permafrost also occurred in the Drakensberg during glacial maxima above about .
Manifestations
Base depth
Permafrost extends to a base depth where geothermal heat from the Earth and the mean annual temperature at the surface achieve an equilibrium temperature of . This base depth of permafrost can vary widely – it is less than a meter (3 ft) in the areas where it is shallowest, yet reaches in the northern Lena and Yana River basins in Siberia. Calculations indicate that the formation time of permafrost slows greatly past the first several metres. For instance, over half a million years was required to form the deep permafrost underlying Prudhoe Bay, Alaska, a time period extending over several glacial and interglacial cycles of the Pleistocene.
Base depth is affected by the underlying geology, and particularly by thermal conductivity, which is lower for permafrost in soil than in bedrock. Lower conductivity leaves permafrost less affected by the geothermal gradient, which is the rate of increasing temperature with respect to increasing depth in the Earth's interior. The gradient arises because the Earth's internal thermal energy, generated by radioactive decay of unstable isotopes, flows to the surface by conduction at a rate of ~47 terawatts (TW). Away from tectonic plate boundaries, this corresponds to an average geothermal gradient of 25–30 °C/km (roughly 72–87 °F/mi) near the surface.
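The equilibrium described above, in which surface cold is balanced against geothermal heat, yields a simple first-order estimate of how deep permafrost can extend. The sketch below assumes a steady state and a constant gradient, and ignores latent heat, ice content, and past climate; the surface temperatures used are illustrative values, not site measurements.

# First-order estimate of permafrost base depth: in steady state, permafrost
# extends down to the depth where the geothermal gradient warms the ground back to 0 °C.
#   base depth ≈ |mean annual surface temperature| / geothermal gradient

GRADIENT_C_PER_KM = 27.5   # mid-range of the ~25-30 °C/km figure quoted above

def base_depth_m(mean_surface_temp_c, gradient_c_per_km=GRADIENT_C_PER_KM):
    """Depth in metres at which ground temperature returns to 0 °C for a sub-zero surface."""
    if mean_surface_temp_c >= 0:
        return 0.0
    return abs(mean_surface_temp_c) / gradient_c_per_km * 1000.0

# Illustrative mean annual ground-surface temperatures (assumed values):
for t in (-2, -6, -12):
    print(f"Surface at {t:>4} °C -> permafrost base roughly {base_depth_m(t):.0f} m deep")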
Massive ground ice
When the ice content of permafrost exceeds 250 percent (ice to dry soil by mass) it is classified as massive ice. Massive ice bodies can range in composition, in every conceivable gradation from icy mud to pure ice. Massive icy beds have a minimum thickness of at least 2 m and a short diameter of at least 10 m. The first recorded North American observations of this phenomenon were by European scientists at Canning River (Alaska) in 1919. Russian literature provides earlier dates of 1735 and 1739, during the Great North Expedition, by P. Lassinius and Khariton Laptev respectively. Investigators including I.A. Lopatin, B. Khegbomov, S. Taber and G. Beskow also formulated the original theories for ice inclusion in freezing soils.
While there are four categories of ice in permafrost – pore ice, ice wedges (also known as vein ice), buried surface ice and intrasedimental (sometimes also called constitutional) ice – only the last two tend to be large enough to qualify as massive ground ice. These two types usually occur separately, but may be found together, like on the coast of Tuktoyaktuk in western Arctic Canada, where the remains of Laurentide Ice Sheet are located.
Buried surface ice may derive from snow, frozen lake or sea ice, aufeis (stranded river ice) and even buried glacial ice from the former Pleistocene ice sheets. The latter hold enormous value for paleoglaciological research, yet even as of 2022, the total extent and volume of such buried ancient ice is unknown. Notable sites with known ancient ice deposits include Yenisei River valley in Siberia, Russia as well as Banks and Bylot Island in Canada's Nunavut and Northwest Territories. Some of the buried ice sheet remnants are known to host thermokarst lakes.
Intrasedimental or constitutional ice has been widely observed and studied across Canada. It forms when subterranean waters freeze in place, and is subdivided into intrusive, injection and segregational ice. The latter is the dominant type, formed after crystallizational differentiation in wet sediments, which occurs when water migrates to the freezing front under the influence of van der Waals forces. This is a slow process, which primarily occurs in silts with salinity less than 20% of seawater: silt sediments with higher salinity and clay sediments instead have water movement prior to ice formation dominated by rheological processes. Consequently, it takes between 1 and 1000 years to form intrasedimental ice in the top 2.5 meters of clay sediments, yet it takes between 10 and 10,000 years for peat sediments and between 1,000 and 1,000,000 years for silt sediments.
Landforms
Permafrost processes such as thermal contraction, which generates cracks that eventually become ice wedges, and solifluction – the gradual downslope movement of soil as it repeatedly freezes and thaws – often lead to the formation of ground polygons, rings, steps and other forms of patterned ground found in arctic, periglacial and alpine areas. In ice-rich permafrost areas, melting of ground ice initiates thermokarst landforms such as thermokarst lakes, thaw slumps, thermal-erosion gullies, and active layer detachments. Notably, unusually deep permafrost in Arctic moorlands and bogs often attracts meltwater in warmer seasons, which pools and freezes to form ice lenses that push the overlying ground upward. This can eventually produce large-scale landforms around the frozen core, such as palsas – elongated, low peat mounds – and the even larger pingos, which can reach tens of metres in height and several hundred metres in diameter.
Ecology
Only plants with shallow roots can survive in the presence of permafrost. Black spruce tolerates limited rooting zones, and dominates flora where permafrost is extensive. Likewise, animal species which live in dens and burrows have their habitat constrained by the permafrost, and these constraints also have a secondary impact on interactions between species within the ecosystem.
While permafrost soil is frozen, it is not completely inhospitable to microorganisms, though their numbers can vary widely, typically from 1 to 1000 million per gram of soil.
The permafrost carbon cycle (Arctic Carbon Cycle) deals with the transfer of carbon from permafrost soils to terrestrial vegetation and microbes, to the atmosphere, back to vegetation, and finally back to permafrost soils through burial and sedimentation due to cryogenic processes. Some of this carbon is transferred to the ocean and other portions of the globe through the global carbon cycle. The cycle includes the exchange of carbon dioxide and methane between terrestrial components and the atmosphere, as well as the transfer of carbon between land and water as methane, dissolved organic carbon, dissolved inorganic carbon, particulate inorganic carbon and particulate organic carbon.
Most of the bacteria and fungi found in permafrost cannot be cultured in the laboratory, but the identity of the microorganisms can be revealed by DNA-based techniques. For instance, analysis of 16S rRNA genes from late Pleistocene permafrost samples in eastern Siberia's Kolyma Lowland revealed eight phylotypes, which belonged to the phyla Actinomycetota and Pseudomonadota. "Muot-da-Barba-Peider", an alpine permafrost site in eastern Switzerland, was found to host a diverse microbial community in 2016. Prominent bacteria groups included phylum Acidobacteriota, Actinomycetota, AD3, Bacteroidota, Chloroflexota, Gemmatimonadota, OD1, Nitrospirota, Planctomycetota, Pseudomonadota, and Verrucomicrobiota, in addition to eukaryotic fungi like Ascomycota, Basidiomycota, and Zygomycota. In the presently living species, scientists observed a variety of adaptations for sub-zero conditions, including reduced and anaerobic metabolic processes.
Construction on permafrost
There are only two large cities in the world built in areas of continuous permafrost (where the frozen soil forms an unbroken, below-freezing layer) and both are in Russia – Norilsk in Krasnoyarsk Krai and Yakutsk in the Sakha Republic. Building on permafrost is difficult because the heat of the building (or pipeline) can spread into the soil and thaw it. As ice turns to water, the ground's ability to provide structural support is weakened until the building is destabilized. For instance, during the construction of the Trans-Siberian Railway, a steam engine factory complex built in 1901 began to crumble within a month of operation for these reasons. Additionally, there is no groundwater available in an area underlain by permafrost, so any substantial settlement or installation needs to make alternative arrangements to obtain water.
A common solution is placing foundations on wood piles, a technique pioneered by Soviet engineer Mikhail Kim in Norilsk. However, warming-induced changes in friction on the piles can still cause movement through creep, even while the soil remains frozen. The Melnikov Permafrost Institute in Yakutsk found that pile foundations should extend down to the depth at which the ground temperature no longer changes with the seasons, in order to avoid the risk of buildings sinking.
Two other approaches are building on an extensive gravel pad (usually one to two metres thick), or using anhydrous ammonia heat pipes. The Trans-Alaska Pipeline System uses heat pipes built into vertical supports to prevent the pipeline from sinking, and the Qingzang railway in Tibet employs a variety of methods to keep the ground cool, both in areas with frost-susceptible soil. Permafrost may necessitate special enclosures for buried utilities, called "utilidors".
Impacts of climate change
Increasing active layer thickness
Globally, permafrost warmed by about 0.3 °C between 2007 and 2016, with stronger warming observed in the continuous permafrost zone than in the discontinuous zone. Warming was strongest in parts of Northern Alaska (early 1980s to mid-2000s) and in parts of the Russian European North (1970–2020). This warming inevitably causes permafrost to thaw: active layer thickness has increased in the European and Russian Arctic across the 21st century and at high-elevation areas in Europe and Asia since the 1990s.
Between 2000 and 2018, the average active layer thickness increased steadily year on year.
In Yukon, the zone of continuous permafrost might have moved about 100 km (62 mi) poleward since 1899, but accurate records only go back 30 years. The extent of subsea permafrost is decreasing as well; as of 2019, ~97% of the permafrost beneath the Arctic's shallow shelf seas is becoming warmer and thinner.
Based on high agreement across model projections, fundamental process understanding, and paleoclimate evidence, it is virtually certain that permafrost extent and volume will continue to shrink as the global climate warms, with the extent of the losses determined by the magnitude of warming.
Permafrost thaw is associated with a wide range of issues, and the International Permafrost Association (IPA) exists to help address them. It convenes International Permafrost Conferences and maintains the Global Terrestrial Network for Permafrost, which undertakes special projects such as preparing databases, maps, bibliographies, and glossaries, and coordinates international field programmes and networks.
Climate change feedback
As recent warming deepens the active layer subject to thaw, formerly frozen carbon is exposed to biogenic processes that facilitate its entry into the atmosphere as carbon dioxide and methane. Because carbon emissions from permafrost thaw contribute to the same warming which drives the thaw, it is a well-known example of a positive climate change feedback. Permafrost thaw is sometimes included among the major tipping points in the climate system because it exhibits local thresholds and is effectively irreversible. However, while self-perpetuating processes do operate at the local or regional scale, it is debated whether permafrost thaw meets the strict definition of a global tipping point, since in aggregate it proceeds gradually as warming increases.
In the northern circumpolar region, permafrost contains organic matter equivalent to 1400–1650 billion tons of pure carbon, built up over thousands of years. This amount equals almost half of all organic material in all soils, is about twice the carbon content of the atmosphere, and is around four times larger than the carbon emitted by humans between the start of the Industrial Revolution and 2011. Further, most of this carbon (~1,035 billion tons) is stored in what is defined as near-surface permafrost, no deeper than 3 m (10 ft) below the surface. However, only a fraction of this stored carbon is expected to enter the atmosphere. In general, the volume of permafrost in the upper 3 m of ground is expected to decrease by about 25% per 1 °C (1.8 °F) of global warming, yet even under the RCP8.5 scenario, associated with over 4 °C (7.2 °F) of global warming by the end of the 21st century, only about 5% to 15% of permafrost carbon is expected to be lost "over decades and centuries".
The exact amount of carbon that will be released due to warming in a given permafrost area depends on depth of thaw, carbon content within the thawed soil, physical changes to the environment, and microbial and vegetation activity in the soil. Notably, estimates of carbon release alone do not fully represent the impact of permafrost thaw on climate change. This is because carbon can be released through either aerobic or anaerobic respiration, which results in carbon dioxide (CO2) or methane (CH4) emissions, respectively. While methane lasts less than 12 years in the atmosphere, its global warming potential is around 80 times larger than that of CO2 over a 20-year period and about 28 times larger over a 100-year period. While only a small fraction of permafrost carbon will enter the atmosphere as methane, those emissions will cause 40–70% of the total warming caused by permafrost thaw during the 21st century. Much of the uncertainty about the eventual extent of permafrost methane emissions is caused by the difficulty of accounting for the recently discovered abrupt thaw processes, which often increase the fraction of methane emitted over carbon dioxide in comparison to the usual gradual thaw processes.
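One way to make the methane-versus-carbon-dioxide comparison concrete is to express a mixed emission as carbon dioxide equivalents, multiplying the methane mass by its global warming potential over a chosen time horizon. The sketch below simply applies the GWP values quoted above; the emission quantities in the example are invented for illustration.

```python
# Express a mix of CO2 and CH4 emissions as CO2-equivalents, using the global
# warming potentials quoted above (~80x over 20 years, ~28x over 100 years).

GWP_CH4 = {"20yr": 80.0, "100yr": 28.0}

def co2_equivalent_gt(co2_gt: float, ch4_gt: float, horizon: str) -> float:
    return co2_gt + ch4_gt * GWP_CH4[horizon]

# Invented example: 1.0 Gt of CO2 plus 0.02 Gt of CH4 released in a given year.
print(co2_equivalent_gt(1.0, 0.02, "20yr"))   # 2.6  - methane dominates on short horizons
print(co2_equivalent_gt(1.0, 0.02, "100yr"))  # 1.56 - its influence fades on longer ones
```

This is why the time horizon chosen matters so much when weighing methane-heavy abrupt thaw against gradual, carbon dioxide-dominated thaw.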
Another factor which complicates projections of permafrost carbon emissions is the ongoing "greening" of the Arctic. As climate change warms the air and the soil, the region becomes more hospitable to plants, including larger shrubs and trees which could not survive there before. Thus, the Arctic is losing more and more of its tundra biomes, yet it gains more plants, which proceed to absorb more carbon. Some of the emissions caused by permafrost thaw will be offset by this increased plant growth, but the exact proportion is uncertain. It is considered very unlikely that this greening could offset all of the emissions from permafrost thaw during the 21st century, and even less likely that it could continue to keep pace with those emissions after the 21st century. Further, climate change also increases the risk of wildfires in the Arctic, which can substantially accelerate emissions of permafrost carbon.
Impact on global temperatures
Altogether, cumulative greenhouse gas emissions from permafrost thaw are expected to be smaller than cumulative anthropogenic emissions, yet still substantial on a global scale, with some experts comparing them to the emissions caused by deforestation. The IPCC Sixth Assessment Report estimates that carbon dioxide and methane released from permafrost could amount to the equivalent of 14–175 billion tonnes of carbon dioxide per 1 °C (1.8 °F) of warming. For comparison, by 2019, annual anthropogenic emissions of carbon dioxide alone stood at around 40 billion tonnes. A major review published in 2022 concluded that if the goal of keeping warming below 2 °C (3.6 °F) was realized, then average annual permafrost emissions throughout the 21st century would be equivalent to Russia's annual emissions in 2019. Under RCP4.5, a scenario considered close to the current trajectory in which warming stays slightly below 3 °C (5.4 °F), annual permafrost emissions would be comparable to the 2019 emissions of Western Europe or the United States, while under a scenario of high global warming and the worst-case permafrost feedback response, they would approach China's 2019 emissions.
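To put the per-degree range quoted above in perspective, it can be divided by the annual anthropogenic figure given in the same paragraph; the snippet below only restates that arithmetic.

```python
# Compare the quoted 14-175 billion tonne CO2-equivalent release per degree of
# warming with the ~40 billion tonnes of CO2 emitted annually by human activity.

PERMAFROST_GT_CO2E_PER_DEGREE = (14.0, 175.0)  # low and high ends of the quoted range
ANNUAL_HUMAN_GT_CO2 = 40.0                     # approximate 2019 figure from the text

for estimate in PERMAFROST_GT_CO2E_PER_DEGREE:
    years = estimate / ANNUAL_HUMAN_GT_CO2
    print(f"{estimate} Gt per degree is roughly {years:.1f} years of 2019-level human emissions")
```

Even the high end amounts to a few years' worth of present-day human emissions per degree of warming, which is consistent with the comparison drawn above: substantial, but smaller than cumulative anthropogenic emissions.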
Fewer studies have attempted to describe the impact directly in terms of warming. A 2018 paper estimated that even if the Paris Agreement warming limits were met, gradual permafrost thaw would still add measurably to global temperatures by 2100, while a 2022 review concluded that each additional degree of global warming would drive further warming from abrupt thaw by the years 2100 and 2300. At sufficiently high levels of global warming, an abrupt (around 50 years) and widespread collapse of permafrost areas could occur, resulting in additional warming.
Thaw-induced ground instability
As permafrost thaws and the meltwater drains or evaporates, soil structure weakens and sometimes becomes viscous until it regains strength with decreasing moisture content. One visible sign of permafrost degradation is the random displacement of trees from their vertical orientation in permafrost areas. Global warming has been increasing permafrost slope disturbances and sediment supplies to fluvial systems, resulting in exceptional increases in river sediment. On the other hand, disturbance of formerly hard soil increases drainage of water reservoirs in northern wetlands. This can dry them out and compromise the survival of plants and animals adapted to the wetland ecosystem.
In high mountains, much of the structural stability can be attributed to glaciers and permafrost. As the climate warms, permafrost thaws, decreasing slope stability and increasing stress through the buildup of pore-water pressure, which may ultimately lead to slope failure and rockfalls. Over the past century, an increasing number of alpine rock slope failures in mountain ranges around the world have been recorded, and some have been attributed to permafrost thaw induced by climate change. The 1987 Val Pola landslide that killed 22 people in the Italian Alps is considered one such example. In 2002, massive rock and ice falls (up to 11.8 million m3), earthquakes (up to magnitude 3.9), floods (up to 7.8 million m3 of water), and rapid rock-ice flows travelling long distances (up to 7.5 km at 60 m/s) were attributed to slope instability in high mountain permafrost.
Permafrost thaw can also result in the formation of frozen debris lobes (FDLs), defined as "slow-moving landslides composed of soil, rocks, trees, and ice". This is a notable issue in Alaska's southern Brooks Range, where some FDLs had grown to considerable width, height, and length by 2012. As of December 2021, 43 frozen debris lobes had been identified in the southern Brooks Range, where they could potentially threaten both the Trans Alaska Pipeline System (TAPS) corridor and the Dalton Highway, the main transport link between Interior Alaska and the Alaska North Slope.
Infrastructure
As of 2021, there are 1162 settlements located directly atop Arctic permafrost, hosting an estimated 5 million people. By 2050, the permafrost layer beneath 42% of these settlements is expected to thaw, affecting all of their inhabitants (currently 3.3 million people). Consequently, a wide range of infrastructure in permafrost areas is threatened by thaw. By 2050, nearly 70% of global infrastructure located in permafrost areas is estimated to be at high risk of permafrost thaw, including 30–50% of "critical" infrastructure. The associated costs could reach tens of billions of dollars by the second half of the century. Reducing greenhouse gas emissions in line with the Paris Agreement is projected to stabilize the risk after mid-century; otherwise, it will continue to worsen.
In Alaska alone, damages to infrastructure by the end of the century would amount to $4.6 billion (in 2015 dollars) if RCP8.5, the high-emission climate change scenario, were realized. Over half of this stems from damage to buildings ($2.8 billion), but there is also damage to roads ($700 million), railroads ($620 million), airports ($360 million) and pipelines ($170 million). Similar estimates were made for RCP4.5, a less intense scenario which leads to less warming by 2100, a level similar to current projections. In that case, total damages from permafrost thaw are reduced to $3 billion, damages to roads and railroads are lessened by approximately two-thirds (from $700 million and $620 million to $190 million and $220 million, respectively), and damages to pipelines are reduced more than ten-fold, from $170 million to $16 million. Unlike other costs stemming from climate change in Alaska, such as damages from increased precipitation and flooding, climate change adaptation is not a viable way to reduce damages from permafrost thaw, as it would cost more than the damage incurred under either scenario.
In Canada, the Northwest Territories have a population of only 45,000 people in 33 communities, yet permafrost thaw is expected to cost them $1.3 billion over 75 years, or around $51 million a year. In 2006, the cost of adapting Inuvialuit homes to permafrost thaw was estimated at $208/m2 if they were built on pile foundations, and $1,000/m2 if they were not. At the time, the average area of a residential building in the territory was around 100 m2. Thaw-induced damage is also unlikely to be covered by home insurance; to address this, the territorial government currently funds the Contributing Assistance for Repairs and Enhancements (CARE) and Securing Assistance for Emergencies (SAFE) programs, which provide long- and short-term forgivable loans to help homeowners adapt. It is possible that in the future, mandatory relocation would instead take place as the cheaper option. However, this would effectively tear the local Inuit away from their ancestral homelands. At present, their average personal income is only half that of the median NWT resident, meaning that adaptation costs already fall disproportionately on them.
By 2022, up to 80% of buildings in some cities of Northern Russia had already experienced damage. By 2050, damage to residential infrastructure may reach $15 billion, while total public infrastructure damages could amount to $132 billion. This includes oil and gas extraction facilities, of which 45% are believed to be at risk.
Outside of the Arctic, the Qinghai–Tibet Plateau (sometimes known as "the Third Pole") also has an extensive permafrost area. It is warming at twice the global average rate, and 40% of it is already considered "warm" permafrost, making it particularly unstable. The Qinghai–Tibet Plateau has a population of over 10 million people – double the population of the Arctic's permafrost regions – and over 1 million m2 of buildings are located in its permafrost area, as well as 2,631 km of power lines and 580 km of railways. There are also 9,389 km of roads, around 30% of which are already sustaining damage from permafrost thaw. Estimates suggest that under the scenario most similar to today, SSP2-4.5, around 60% of the current infrastructure would be at high risk by 2090 and simply maintaining it would cost $6.31 billion, with adaptation reducing these costs by at most 20.9%. Holding global warming to 2 °C would reduce these costs to $5.65 billion, and fulfilling the optimistic Paris Agreement target of 1.5 °C would save a further $1.32 billion. In particular, fewer than 20% of railways would be at high risk by 2100 under 1.5 °C of warming, yet this rises to 60% at 2 °C, while under SSP5-8.5, that level of risk is reached by mid-century.
Release of toxic pollutants
For much of the 20th century, it was believed that permafrost would "indefinitely" preserve anything buried there, which made deep permafrost areas popular locations for hazardous waste disposal. In places like Alaska's Prudhoe Bay oil field, procedures were developed documenting the "appropriate" way to inject waste beneath the permafrost. As a result, as of 2023 there are ~4500 industrial facilities in Arctic permafrost areas which either actively process or store hazardous chemicals. Additionally, there are between 13,000 and 20,000 heavily contaminated sites, 70% of them in Russia, whose pollution is currently trapped in the permafrost.
About a fifth of both the industrial and the polluted sites (1000 and 2200–4800) are expected to start thawing in the future even if the warming does not increase from its 2020 levels. Only about 3% more sites would start thawing between now and 2050 under the climate change scenario consistent with the Paris Agreement goals, RCP2.6, but by 2100, about 1100 more industrial facilities and 3500 to 5200 contaminated sites are expected to start thawing even then. Under the very high emission scenario RCP8.5, 46% of industrial and contaminated sites would start thawing by 2050, and virtually all of them would be affected by the thaw by 2100.
Organochlorines and other persistent organic pollutants are of a particular concern, due to their potential to repeatedly reach local communities after their re-release through biomagnification in fish. At worst, future generations born in the Arctic would enter life with weakened immune systems due to pollutants accumulating across generations.
A notable example of the pollution risks associated with permafrost was the 2020 Norilsk oil spill, caused by the collapse of a diesel fuel storage tank at Norilsk-Taimyr Energy's thermal power plant No. 3. It spilled 6,000 tonnes of fuel onto the land and 15,000 tonnes into the water, polluting the Ambarnaya, Daldykan and many smaller rivers on the Taimyr Peninsula, and even reaching Lake Pyasino, a crucial water source in the area. A state of emergency was declared at the federal level. The event has been described as the second-largest oil spill in modern Russian history.
Another issue associated with permafrost thaw is the release of natural mercury deposits. An estimated 800,000 tons of mercury are frozen in permafrost soil. According to observations, around 70% of it is simply taken up by vegetation after the thaw. However, if warming continues under RCP8.5, then permafrost emissions of mercury into the atmosphere would match current global emissions from all human activities by 2200. Mercury-rich soils also pose a much greater threat to humans and the environment if they thaw near rivers. Under RCP8.5, enough mercury would enter the Yukon River basin by 2050 to make its fish unsafe to eat under EPA guidelines. By 2100, mercury concentrations in the river would double. By contrast, even if mitigation is limited to the RCP4.5 scenario, mercury levels would increase by only about 14% by 2100 and would not breach EPA guidelines even by 2300.
Revival of ancient organisms
Microorganisms
Bacteria are known for being able to remain dormant to survive adverse conditions, and viruses are not metabolically active outside of host cells in the first place. This has motivated concerns that permafrost thaw could free previously unknown microorganisms, which may be capable of infecting either humans or important livestock and crops, potentially resulting in damaging epidemics or pandemics. Further, some scientists argue that horizontal gene transfer could occur between the older, formerly frozen bacteria, and modern ones, and one outcome could be the introduction of novel antibiotic resistance genes into the genome of current pathogens, exacerbating what is already expected to become a difficult issue in the future.
At the same time, notable pathogens like influenza and smallpox appear unable to survive being thawed, and other scientists argue that the risk of ancient microorganisms being both able to survive the thaw and to threaten humans is not scientifically plausible. Likewise, some research suggests that antimicrobial resistance capabilities of ancient bacteria would be comparable to, or even inferior to modern ones.
Plants
In 2012, Russian researchers proved that permafrost can serve as a natural repository for ancient life forms by reviving a sample of Silene stenophylla from 30,000-year-old tissue found in an Ice Age squirrel burrow in the Siberian permafrost. This is the oldest plant tissue ever revived. The resultant plant was fertile, producing white flowers and viable seeds. The study demonstrated that living tissue can survive ice preservation for tens of thousands of years.
History of scientific research
Between the middle of the 19th century and the middle of the 20th century, most of the literature on basic permafrost science and the engineering aspects of permafrost was written in Russian. One of the earliest written reports describing the existence of permafrost dates to 1684, when well excavation efforts in Yakutsk were stumped by its presence. A significant role in the initial permafrost research was played by Alexander von Middendorff (1815–1894) and Karl Ernst von Baer, a Baltic German scientist at the University of Königsberg, and a member of the St Petersburg Academy of Sciences. Baer began publishing works on permafrost in 1838 and is often considered the "founder of scientific permafrost research." Baer laid the foundation for modern permafrost terminology by compiling and analyzing all available data on ground ice and permafrost.
Baer is also known to have composed the world's first permafrost textbook in 1843, "Materials for the Study of the Perennial Ground-Ice", written in his native German. However, it was not printed at the time, and a Russian translation was not ready until 1942. The original German text was believed to be lost until the 1843 typescript was discovered in the library archives of the University of Giessen. The 234-page text is now available online, with additional maps, a preface and comments. Notably, Baer's southern limit of permafrost in Eurasia, drawn in 1843, corresponds well with the actual southern limit verified by modern research.
Beginning in 1942, Siemon William Muller delved into the relevant Russian literature held by the Library of Congress and the U.S. Geological Survey Library, enabling him to furnish the government with an engineering field guide and a technical report about permafrost by 1943. That report coined the English term permafrost as a contraction of "permanently frozen ground", in what was considered a direct translation of the Russian term vechnaya merzlota. In 1953, this translation was criticized by another USGS researcher, Inna Poiré, as she believed the term had created unrealistic expectations about its stability; more recently, some researchers have argued that "perpetually refreezing" would be a more suitable translation. The report itself remained classified (as U.S. Army. Office of the Chief of Engineers, Strategic Engineering Study, no. 62, 1943) until a revised version was released in 1947, which is regarded as the first North American treatise on the subject.
Between 11 and 15 November 1963, the First International Conference on Permafrost took place on the grounds of Purdue University in the American town of West Lafayette, Indiana. It involved 285 participants (including "engineers, manufacturers and builders" who attended alongside the researchers) from a range of countries (Argentina, Austria, Canada, Germany, Great Britain, Japan, Norway, Poland, Sweden, Switzerland, the US and the USSR). This marked the beginning of modern scientific collaboration on the subject. Conferences continue to take place every five years. During the Fourth conference in 1983, a special meeting between the "Big Four" participant countries (US, USSR, China, and Canada) officially created the International Permafrost Association.
In recent decades, permafrost research has attracted more attention than ever due to its role in climate change. Consequently, there has been a massive acceleration in published scientific literature. Around 1990, almost no papers were released containing the words "permafrost" and "carbon": by 2020, around 400 such papers were published every year.
| Physical sciences | Glaciology | null |
157819 | https://en.wikipedia.org/wiki/Meteor%20shower | Meteor shower | A meteor shower is a celestial event in which a number of meteors are observed to radiate, or originate, from one point in the night sky. These meteors are caused by streams of cosmic debris called meteoroids entering Earth's atmosphere at extremely high speeds on parallel trajectories. Most meteors are smaller than a grain of sand, so almost all of them disintegrate and never hit the Earth's surface. Very intense or unusual meteor showers are known as meteor outbursts and meteor storms, which produce at least 1,000 meteors an hour, most notably from the Leonids. The Meteor Data Centre lists over 900 suspected meteor showers of which about 100 are well established. Several organizations point to viewing opportunities on the Internet. NASA maintains a daily map of active meteor showers.
Historical developments
A meteor shower in August 1583 was recorded in the Timbuktu manuscripts.
In the modern era, the first great meteor storm was the Leonid storm of November 1833. One estimate put the peak rate at over one hundred thousand meteors an hour; another, made as the storm abated, estimated more than two hundred thousand meteors during the 9 hours of the storm over the entire region of North America east of the Rocky Mountains. The American Denison Olmsted (1791–1859) explained the event most accurately. After spending the last weeks of 1833 collecting information, he presented his findings to the American Journal of Science and Arts in January 1834; they were published in the January–April 1834 and January 1836 issues. He noted that the shower was of short duration and was not seen in Europe, and that the meteors radiated from a point in the constellation of Leo. He speculated that the meteors had originated from a cloud of particles in space. Work continued, yet while the annual nature of showers came to be understood, the occurrence of storms continued to perplex researchers.
The actual nature of meteors was still debated during the 19th century. Meteors were conceived as an atmospheric phenomenon by many scientists (Alexander von Humboldt, Adolphe Quetelet, Julius Schmidt) until the Italian astronomer Giovanni Schiaparelli ascertained the relation between meteors and comets in his work " | Physical sciences | Planetary science | null |
157841 | https://en.wikipedia.org/wiki/Viola%20%28plant%29 | Viola (plant) | Viola is a genus of flowering plants in the violet family Violaceae. It is the largest genus in the family, containing over 680 species. Most species are found in the temperate Northern Hemisphere; however, some are also found in widely divergent areas such as Hawaii, Australasia, and the Andes.
Some Viola species are perennial plants, some are annual plants, and a few are small shrubs. Many species, varieties and cultivars are grown in gardens for their ornamental flowers. In horticulture, the term pansy is normally used for those multi-colored large-flowered cultivars which are raised annually or biennially from seed and used extensively in bedding. The terms viola and violet are normally reserved for small-flowered annuals or perennials, including the wild species.
Description
Annual or perennial caulescent or acaulescent (with or without a visible plant stem above the ground) herbs, shrubs or very rarely treelets. In acaulescent taxa the foliage and flowers appear to rise from the ground. The remainder have short stems with foliage and flowers produced in the axils of the leaves (axillary).
Viola typically have heart-shaped or reniform (kidney-shaped), scalloped leaves, though a number have linear or palmate leaves. The simple leaves of plants with either habit are arranged alternately; the acaulescent species produce basal rosettes. Plants always have leaves with stipules that are often leaf-like.
The flowers of the vast majority of the species are strongly zygomorphic with bilateral symmetry and solitary, but occasionally form cymes. The flowers are formed from five petals; four are upswept or fan-shaped with two per side, and there is one, broad, lobed lower petal pointing downward. This petal may be slightly or much shorter than the others and is weakly differentiated. The shape of the petals and placement defines many species, for example, some species have a "spur" on the end of each petal while most have a spur on the lower petal. The spur may vary from scarcely exserted (projecting) to very long, such as in Viola rostrata.
Solitary flowers are borne at the ends of long stalks with a pair of bracteoles. The flowers have five sepals that persist after blooming, and in some species the sepals enlarge after blooming. The corolla ranges from white to yellow, orange or various shades of blue and violet, or is multicolored, often blue and yellow, with or without a yellow throat.
The flowers have five free stamens with short free filaments that are appressed against the ovary, with a dorsal connective appendage that is large, entire and oblong to ovate. Only the lower two stamens are calcarate, possessing nectary spurs that are inserted on the lowest petal, into the spur or a pouch. The styles are filiform (threadlike) or clavate (club-shaped), thickened at their tip, being globose to rostellate (beaked). The stigmas are head-like, narrowed or often beaked. The flowers have a superior ovary with one cell, which has three placentae containing many ovules.
After flowering, fruit capsules are produced that are thick walled, with few to many seeds per carpel, and dehisce (split open) by way of three valves. On drying, the capsules may eject seeds with considerable force to distances of several meters. The nutlike seeds, which are obovoid to globose, are typically arillate (with a specialized outgrowth) and have straight embryos, flat cotyledons, and soft fleshy endosperm that is oily.
Phytochemistry
One characteristic of some Viola is the elusive scent of their flowers; along with terpenes, a major component of the scent is a ketone compound called ionone, which temporarily desensitizes the receptors of the nose, thus preventing any further scent being detected from the flower until the nerves recover.
Taxonomy
History
First formally described by Carl Linnaeus in 1753 with 19 species, the genus Viola bears his botanical authority, L. When Jussieu established the hierarchical system of families (1789), he placed Viola in the Cisti (rock roses), though by 1811 he suggested Viola be separated from these. However, in 1802 Batsch had already established a separate family, which he called Violariae based on Viola as the type genus, with seven other genera. Although Violariae continued to be used by some authors, such as Bentham and Hooker in 1862 (as Violarieae), most authors adopted the alternative name Violaceae, first proposed by de Lamarck and de Candolle in 1805, and Gingins (1823) and Saint-Hilaire (1824). However de Candolle also used Violarieae in his 1824 Prodromus.
Phylogeny
Viola is one of about 25 genera and about 600 species in the large eudicot family Violaceae, divided into subfamilies and tribes. While most genera are monotypic, Viola is a very large genus, variously circumscribed as having between 500 and 600 species. Historically it was placed in subfamily Violoideae, tribe Violeae. But these divisions have been shown to be artificial and not monophyletic. Molecular phylogenetic studies show that Viola occurs in Clade I of the family, as Viola, Schweiggeria, Noisettia and Allexis, in which Schweiggeria and Noisettia are monotypic and form a sister group to Viola.
Subdivision
Viola is a large genus that has traditionally been treated in sections. One of these treatments was that of Gingins (1823), based on stigma morphology, with five sections (Nomimium, Dischidium, Chamaemelanium, Melanium, Leptidium). The extensive taxonomic studies of Wilhelm Becker, culminating in his 1925 conspectus, resulted in 14 sections and many infrasectional groups, the largest and most diverse being section Viola, with 17 subsections. In addition to subsections, series were also described. Alternatively, some authors have preferred to subdivide the genus into subgenera. Subsequent treatments were by Gershoy (1934) and Clausen (1964), using subsections and series. These were all based on morphological characteristics. Later studies using molecular phylogenetic methods, such as that of Ballard et al. (1998), have shown that many of these traditional divisions are not monophyletic, the problem being related to a high degree of hybridization. In particular, section Nomimium was dismembered into several new sections, with part of it transferred to section Viola. Section Viola s. lat. is represented by four sections: Viola sensu stricto, Plagiostigma s. str., Nosphinium sensu lato and the V. spathulata group. In that analysis, the South American sections appear to be the basal groups, starting with Rubellium, then Leptidium. However, the exact phylogenetic relationships remain unresolved, and as a consequence many different taxonomic nomenclatures are in use, including groupings referred to as Grex. Marcussen et al. place the five South American sections – Andinium, Leptidium, Tridens, Rubellium and Chilenium – at the base of the phylogenetic tree, in that order. These are followed by the single Australian section, Erpetion, as sister group to Chilenium, then the northern hemisphere sections and finally the single African section, V. abyssinica. These sections are morphologically, chromosomally, and geographically distinct.
Sections
Seventeen sections are recognized, listed alphabetically (with approximate numbers of species):
Sect. Andinium W.Becker (113) S America
Sect. Chamaemelanium Ging. s.lat. (61) N America, northeast Asia (includes Dischidium, Orbiculares)
Subsect. Chamaemelanium
Subsect. Nudicaules
Subsect. Nuttalianae
Sect. Chilenium W.Becker (8) southern S America
Sect. Danxiaviola W. B. Liao et Q. Fan (1) China
Sect. Delphiniopsis W.Becker (3) western Eurasia: southern Spain; Balkans
Sect. Erpetion (Banks) W.Becker (11–18) eastern Australia; Tasmania
Sect. Leptidium Ging. (19) S America
Sect. Melanium Ging. (125) western Eurasia (pansies)
Sect. Nosphinium W.Becker s.lat. (31–50) N, C and northern S America; Beringia; Hawaii
Sect. nov. A (V. abyssinica group) (1–3) Africa: equatorial high mountains
Sect. nov. B (V. spathulata group) (7–9) western and central Asia: northern Iraq to Mongolia
Sect. Plagiostigma Godr. (120) northern hemisphere (includes Diffusae)
Grex Primulifolia
Sect. Rubellium W.Becker (3–6) S America: Chile
Sect. Sclerosium W.Becker (1–4) northeastern Africa to southwestern Asia
Sect. Tridens W.Becker (2) southern S America
Sect. Viola s.str. (Rostellatae nom. illeg.) (75) northern hemisphere (violets) (includes Repentes)
Subsect. Rostratae Kupffer (W.Becker)
Subsect. Viola
Sect. Xylinosium W.Becker (3–4) Mediterranean region
Species
The genus includes dog violets, a group of scentless species which are the most common Viola in many areas, sweet violet (Viola odorata) (named from its sweet scent), and many other species whose common name includes the word "violet". But not other "violets": Neither Streptocarpus sect. Saintpaulia ("African violets", Gesneriaceae) nor Erythronium dens-canis ("dogtooth violets", Liliaceae) are related to Viola.
List of selected species
Section Danxiaviola
Viola hybanthoides
Section Delphiniopsis
Viola cazorlensis
Viola delphinantha
Viola kosaninii
Section Erpetion
Viola banksii – Australian native violet, ivy-leaved violet
Viola hederacea – Australian native violet, ivy-leaved violet
Section Leptidium
Viola stipularis
Section Melanium (pansies)
Viola arvensis – field pansy
Viola bicolor
Viola pedunculata – yellow pansy, Pacific coast.
Viola bertolonii
Viola calcarata
Viola cheiranthifolia – Teide violet
Viola cornuta
Viola lutea
Viola tricolor – wild pansy, heartsease
Section Nosphinium
Viola pedata
Section A (V. abyssinica group)
Viola abyssinica
Section B (V. spathulata group)
Viola spathulata
Section Plagiostigma
Viola epipsila
Section Rubellium
Viola capillaris
Viola portalesia
Viola rubella
Section Sclerosium
Viola cinerea
Section Tridens
Viola tridentata – mountain violet
Section Viola (violets)
Viola canina – heath dog violet
Viola hirta – hairy violet
Viola labradorica – alpine violet
Viola odorata – sweet violet
Viola persicifolia – fen violet
Viola riviniana – common dog violet
Viola rostrata – long-spurred violet
Viola sororia – common blue violet, hooded violet
Section Xylinosium
Viola decumbens
Evolution and biogeography
One fossil seed of †Viola rimosa has been extracted from borehole samples of the Middle Miocene fresh water deposits in Nowy Sacz Basin, West Carpathians, Poland. The genus is thought to have arisen in S America, most likely the Andes.
Genetics
Habitat fragmentation has been shown to have minimal effect on the genetic diversity and gene flow of the North American woodland violet Viola pubescens. This may be partially attributed to the ability of Viola pubescens to persist within a largely agricultural matrix. This trend of unexpectedly high genetic diversity is also observed in Viola palmensis, a Canary Island endemic known only from a 15-square-kilometer range on the island of La Palma. High levels of genetic diversity within these species indicate that these plants are outcrossing, even though many violet species can produce numerous clonal offspring throughout the year via cleistogamous flowers. Plants that produce copious amounts of clonal seed from cleistogamous flowers often experience increased levels of inbreeding. These reportedly high rates of outcrossing and genetic diversity indicate that these violets are strong competitors for pollinators during the early spring when they are in bloom, and that those pollinators can travel considerable distances between often fragmented populations.
Distribution and habitat
The worldwide northern temperate distribution of the genus distinguishes it from the remaining largely tropical Violaceae genera, restricted to either Old World or New World species, while in the tropics the distribution is primarily in high mountainous areas. Centres of diversity occur mainly in the northern hemisphere, in mountainous regions of eastern Asia, Melanesia, and southern Europe, but also occur in the Andes and the southern Patagonian cone of South America. One of the highest species concentrations is in the former USSR. Australia is home to a number of Viola species, including Viola hederacea, Viola betonicifolia and Viola banksii, first collected by Joseph Banks and Daniel Solander on the Cook voyage to Botany Bay.
Ecology
Viola species are used as food plants by the larvae of some Lepidoptera species, including the giant leopard moth, large yellow underwing, lesser broad-bordered yellow underwing, high brown fritillary, small pearl-bordered fritillary, pearl-bordered fritillary, regal fritillary, cardinal, and Setaceous Hebrew character. The larvae of many fritillary butterfly species use violets as an obligate host plant, although these butterflies do not always oviposit directly onto violets. While the ecology of this genus is extremely diverse, violets are mainly pollinated by members of the orders Diptera and Hymenoptera. Showy flowers are produced in early spring, and clonal cleistogamous flowers are produced from late spring until the end of the growing season under favorable conditions. Cleistogamy allows plants to produce offspring year-round and have more chances for establishment. This system is especially important in violets, as these plants are often weak competitors for pollination due to their small size.
Many violet species exhibit two modes of seed dispersal. Once seed capsules have matured, seeds are expelled around the plant through explosive dehiscence. Viola pedata seeds have been reported to disperse up to 5 meters from the parent plant. Often, seeds are then further dispersed by ants through a process called myrmecochory. Violets whose seeds are dispersed this way have specialized structures on the exterior of the seeds called elaiosomes, which attract the ants. This interaction allows violet seeds to germinate and establish in a protected, stable environment.
Many violet seeds exhibit physiological dormancy and require some period of cold stratification to induce germination under ex situ conditions. Rates of germination are often quite poor, especially when seeds are stored for extended periods of time. In North American habitat restoration, native violets are in high demand due to their relationship with the aforementioned fritillary butterflies.
Violet species occupy a diverse array of habitats, from bogs (Viola lanceolata) to dry hill prairies (V. pedata) to woodland understories (V. labradorica). While many of these species are indicators of high quality habitat, some violets are capable of thriving in a human altered landscape. Two species of zinc violet (V. calaminaria and V. guestphalica) are capable of living in soils severely contaminated with heavy metals. Many violets form relationships with arbuscular mycorrhizal fungi, and in the case of the zinc violets, this allows them to tolerate such highly contaminated soils.
Flowering is often profuse, and may last for much of the spring and summer. Viola are most often spring-blooming with chasmogamous flowers that have well-developed petals pollinated by insects. Many species also produce self-pollinated cleistogamous flowers in summer and autumn that do not open and lack petals. In some species the showy chasmogamous flowers are infertile (e.g., Viola sororia).
Horticultural uses
The international registration authority for the genus is the American Violet Society, where growers register new Viola cultivars. A coding system is used for cultivar description of ten horticultural divisions, such as Violet (Vt) and Violetta (Vtta). Examples include Viola 'Little David' (Vtta) and Viola 'Königin Charlotte' (Vt).
In this system violets (Vt) are defined as "stoloniferous perennials with small, highly fragrant, self-coloured purple, blue or white flowers in late winter and early spring".
Species and cultivars
Many species, varieties and cultivars are grown in gardens for their ornamental flowers. In horticulture the term pansy is normally used for those multi-colored, large-flowered cultivars which are raised annually or biennially from seed and used extensively in bedding. The terms viola and violet are normally reserved for small-flowered annuals or perennials, including the wild species.
Cultivars of Viola cornuta, Viola cucullata, and Viola odorata, are commonly grown from seed. Other species often grown include Viola labradorica, Viola pedata, and Viola rotundifolia.
The modern garden pansy (V. × wittrockiana) is a plant of complex hybrid origin involving at least three species, V. tricolor (wild pansy or heartsease), V. altaica, and V. lutea (mountain pansy). The hybrid horned pansy (V. × williamsii) originates from hybridization involving garden pansy and Viola cornuta.
Bedding plants
In 2005 in the United States, Viola cultivars (including pansies) were one of the top three bedding plant crops and 111 million dollars worth of flats of Viola were produced for the bedding flower market. Pansies and violas used for bedding are generally raised from seed, and F1 hybrid seed strains have been developed which produce compact plants of reasonably consistent flower coloring and appearance. Bedding plants are usually discarded after one growing season.
Perennial cultivars
There are hundreds of perennial viola and violetta cultivars; many of these do not breed true from seed and therefore have to be propagated from cuttings. Violettas can be distinguished from violas by the lack of ray markings on their petals. The following cultivars, of mixed or uncertain parentage, have gained the Royal Horticultural Society's Award of Garden Merit:
'Aspasia'
'Clementina'
'Huntercombe Purple'
'Jackanapes'
'Molly Sanderson'
'Moonlight'
'Nellie Britton'
Other popular examples include:
'Ardross Gem' (viola)
'Blackjack'
'Buttercup' (violetta)
'Columbine' (viola)
'Dawn' (violetta)
'Etain' (viola)
'Irish Molly' (viola)
'Maggie Mott' (viola)
'Martin' (viola)
'Rebecca' (violetta)
'Vita' (viola)
'Zoe' (violetta)
Other uses
Culinary
When newly opened, Viola flowers may be used to decorate salads or in stuffings for poultry or fish. Soufflés, cream, and similar desserts can be flavoured with essence of Viola flowers. The young leaves are edible raw or cooked as a mild-tasting leaf vegetable. The flowers and leaves of the cultivar 'Rebecca', one of the Violetta violets, have a distinct vanilla flavor with hints of wintergreen. The pungent perfume of some varieties of V. odorata adds inimitable sweetness to desserts, fruit salads, and teas while the mild pea flavor of V. tricolor combines equally well with sweet or savory foods, like grilled meats and steamed vegetables. The heart-shaped leaves of V. odorata provide a free source of greens throughout a long growing season, while the petals are used for fragrant flavoring in milk puddings and ice cream or in salads and as garnishes.
A candied violet or crystallized violet is a flower, usually of Viola odorata, preserved by a coating of egg white and crystallised sugar. Alternatively, hot syrup is poured over the fresh flower (or the flower is immersed in the syrup) and stirred until the sugar recrystallizes and has dried. This method is still used for rose petals and was applied to orange flowers in the past (when almonds or orange peel are treated this way they are called pralines). Candied violets are still made commercially in Toulouse, France, where they are known as violettes de Toulouse. They are used for decorating cakes or trifles or are included in aromatic desserts.
The French are also known for their violet syrup, most commonly made from an extract of violets. In the United States, this French violet syrup is used to make violet scones and marshmallows. Viola essence flavours the liqueurs Creme Yvette, Creme de Violette, and Parfait d'Amour. It is also used in confectionery, such as Parma Violets and C. Howard's Violet candies.
Medicinal
Many Viola species contain antioxidants called anthocyanins. Fourteen anthocyanins from V. yedoensis and V. prionantha have been identified. Some anthocyanins show strong antioxidant activities. Most violas tested and many other plants of the family Violaceae contain cyclotides, which have a diverse range of in vitro biological activities when isolated from the plant, including uterotonic, anti-HIV, antimicrobial, and insecticidal activities. Viola canescens, a species from India, exhibited in vitro activity against Trypanosoma cruzi.
Viola has been evaluated in different clinical indications in human studies. A double-blind clinical trial showed that the adjuvant use of Viola odorata syrup with short-acting β-agonists can improve cough suppression in children with asthma. In another study, intranasal administration of Viola odorata extract oil was shown to be effective in patients with insomnia. Topical use of an herbal formulation containing Viola tricolor extract has also shown promising effects in patients with mild-to-moderate atopic dermatitis.
Perfume
Viola odorata is used as a source for scents in the perfume industry. Violet is known to have a 'flirty' scent as its fragrance comes and goes. Ionone is present in the flowers, which turns off the ability for humans to smell the fragrant compound for moments at a time.
Cultural associations
Birth
Violet is the traditional birth flower for February in English tradition.
Geographical territories
In the United States, the common blue violet Viola sororia is the state flower of Illinois, Rhode Island, New Jersey and Wisconsin. In Canada, the Viola cucullata is the provincial flower of New Brunswick, adopted in 1936. In the United Kingdom, Viola riviniana is the county flower of Lincolnshire.
Lesbian and bisexual culture
Violets became symbolically associated with romantic love between women. This connection originates from fragments of a poem by Sappho about a lost love, in which she describes her as "Close by my side you put around yourself [many wreaths] of violets and roses." In another poem, Sappho describes her lost love as wearing "violet tiaras, braided rosebuds, dill and crocus twined around" her neck. In 1926, one of the first plays to involve a lesbian relationship, La Prisonnière by Édouard Bourdet, used a bouquet of violets to signify lesbian love.
Tributes
Violets, and badges depicting them, were sold in fund-raising efforts in Australia and New Zealand on and around Violet Day in commemoration of the lost soldiers of World War I.
| Biology and health sciences | Malpighiales | null |
157898 | https://en.wikipedia.org/wiki/Eye | Eye | An eye is a sensory organ that allows an organism to perceive visual information. It detects light and converts it into electro-chemical impulses in neurons (neurones). It is part of an organism's visual system.
In higher organisms, the eye is a complex optical system that collects light from the surrounding environment, regulates its intensity through a diaphragm, focuses it through an adjustable assembly of lenses to form an image, converts this image into a set of electrical signals, and transmits these signals to the brain through neural pathways that connect the eye via the optic nerve to the visual cortex and other areas of the brain.
Eyes with resolving power have come in ten fundamentally different forms, classified into compound eyes and non-compound eyes. Compound eyes are made up of multiple small visual units and are common in insects and crustaceans. Non-compound eyes have a single lens and focus light onto the retina to form a single image. This type of eye is common in mammals, including humans.
The simplest eyes are pit eyes. They are eye-spots which may be set into a pit to reduce the angle of light that enters and affects the eye-spot, to allow the organism to deduce the angle of incoming light.
Eyes enable several photo response functions that are independent of vision. In an organism that has more complex eyes, retinal photosensitive ganglion cells send signals along the retinohypothalamic tract to the suprachiasmatic nuclei to effect circadian adjustment and to the pretectal area to control the pupillary light reflex.
Overview
Complex eyes distinguish shapes and colours. The visual fields of many organisms, especially predators, involve large areas of binocular vision for depth perception. In other organisms, particularly prey animals, eyes are located to maximise the field of view, such as in rabbits and horses, which have monocular vision.
The first proto-eyes evolved among animals about the time of the Cambrian explosion. The last common ancestor of animals possessed the biochemical toolkit necessary for vision, and more advanced eyes have evolved in 96% of animal species in six of the ~35 main phyla. In most vertebrates and some molluscs, the eye allows light to enter and project onto a light-sensitive layer of cells known as the retina. The cone cells (for colour) and the rod cells (for low-light contrasts) in the retina detect and convert light into neural signals which are transmitted to the brain via the optic nerve to produce vision. Such eyes are typically spheroid, filled with the transparent gel-like vitreous humour, possess a focusing lens, and often an iris. Muscles around the iris change the size of the pupil, regulating the amount of light that enters the eye and reducing aberrations when there is enough light. The eyes of most cephalopods, fish, amphibians and snakes have fixed lens shapes, and focusing is achieved by telescoping the lens in a similar manner to that of a camera.
The compound eyes of the arthropods are composed of many simple facets which, depending on anatomical detail, may give either a single pixelated image or multiple images per eye. Each sensor has its own lens and photosensitive cell(s). Some eyes have up to 28,000 such sensors arranged hexagonally, which can give a full 360° field of vision. Compound eyes are very sensitive to motion. Some arthropods, including many Strepsiptera, have compound eyes of only a few facets, each with a retina capable of creating an image. With each eye producing a different image, a fused, high-resolution image is produced in the brain.
The mantis shrimp has the world's most complex colour vision system, with detailed hyperspectral colour vision.
Trilobites, now extinct, had unique compound eyes. Clear calcite crystals formed the lenses of their eyes. They differ in this from most other arthropods, which have soft eyes. The number of lenses in such an eye varied widely; some trilobites had only one while others had thousands of lenses per eye.
In contrast to compound eyes, simple eyes have a single lens. Jumping spiders have one pair of large simple eyes with a narrow field of view, augmented by an array of smaller eyes for peripheral vision. Some insect larvae, like caterpillars, have a type of simple eye (stemmata) which usually provides only a rough image, but (as in sawfly larvae) can possess resolving powers of 4 degrees of arc, be polarization-sensitive, and capable of increasing its absolute sensitivity at night by a factor of 1,000 or more. Ocelli, some of the simplest eyes, are found in animals such as some of the snails. They have photosensitive cells but no lens or other means of projecting an image onto those cells. They can distinguish between light and dark but no more, enabling them to avoid direct sunlight. In organisms dwelling near deep-sea vents, compound eyes are adapted to see the infra-red light produced by the hot vents, allowing the creatures to avoid being boiled alive.
Types
There are ten different eye layouts. Eye types can be categorised into "simple eyes", with one concave photoreceptive surface, and "compound eyes", which comprise a number of individual lenses laid out on a convex surface. "Simple" does not imply a reduced level of complexity or acuity. Indeed, any eye type can be adapted for almost any behaviour or environment. The only limitation specific to eye types is that of resolution: the physics of compound eyes prevents them from achieving a resolution better than 1°. Also, superposition eyes can achieve greater sensitivity than apposition eyes, so are better suited to dark-dwelling creatures.
Eyes also fall into two groups on the basis of their photoreceptor's cellular construction, with the photoreceptor cells either being ciliated (as in the vertebrates) or rhabdomeric. These two groups are not monophyletic; the Cnidaria also possess ciliated cells, and some gastropods and annelids possess both.
Some organisms have photosensitive cells that do nothing but detect whether the surroundings are light or dark, which is sufficient for the entrainment of circadian rhythms. These are not considered eyes because they lack enough structure to be considered an organ, and do not produce an image.
Every technological method of capturing an optical image that humans commonly use occurs in nature, with the exception of zoom and Fresnel lenses.
Non-compound eyes
Simple eyes are rather ubiquitous, and lens-bearing eyes have evolved at least seven times in vertebrates, cephalopods, annelids, crustaceans and Cubozoa.
Pit eyes
Pit eyes, also known as stemmata, are eye-spots which may be set into a pit to reduce the angles of light that enters and affects the eye-spot, to allow the organism to deduce the angle of incoming light. Found in about 85% of phyla, these basic forms were probably the precursors to more advanced types of "simple eyes". They are small, comprising up to about 100 cells covering about 100 μm. The directionality can be improved by reducing the size of the aperture, by incorporating a reflective layer behind the receptor cells, or by filling the pit with a refractile material.
Pit vipers have developed pits that function as eyes by sensing thermal infra-red radiation, in addition to their optical wavelength eyes like those of other vertebrates (see infrared sensing in snakes). However, pit organs are fitted with receptors rather different from photoreceptors, namely a specific transient receptor potential channel (TRP channel) called TRPV1. The main difference is that photoreceptors are G-protein coupled receptors but TRP channels are ion channels.
Spherical lens eye
The resolution of pit eyes can be greatly improved by incorporating a material with a higher refractive index to form a lens, which may greatly reduce the blur radius encountered—hence increasing the resolution obtainable. The most basic form, seen in some gastropods and annelids, consists of a lens of one refractive index. A far sharper image can be obtained using materials with a high refractive index, decreasing to the edges; this decreases the focal length and thus allows a sharp image to form on the retina. This also allows a larger aperture for a given sharpness of image, allowing more light to enter the lens; and a flatter lens, reducing spherical aberration. Such a non-homogeneous lens is necessary for the focal length to drop from about 4 times the lens radius, to 2.5 radii.
So-called under-focused lens eyes, found in gastropods and polychaete worms, are intermediate between lens-less cup eyes and real camera eyes. Box jellyfish also have eyes with a spherical lens, cornea and retina, but their vision is blurry.
Heterogeneous eyes have evolved at least nine times: four or more times in gastropods, once in the copepods, once in the annelids, once in the cephalopods, and once in the chitons, which have aragonite lenses. No extant aquatic organisms possess homogeneous lenses; presumably the evolutionary pressure for a heterogeneous lens is great enough for this stage to be quickly "outgrown".
This eye creates an image that is sharp enough that motion of the eye can cause significant blurring. To minimise the effect of eye motion while the animal moves, most such eyes have stabilising eye muscles.
The ocelli of insects bear a simple lens, but their focal point usually lies behind the retina; consequently, those can not form a sharp image. Ocelli (pit-type eyes of arthropods) blur the image across the whole retina, and are consequently excellent at responding to rapid changes in light intensity across the whole visual field; this fast response is further accelerated by the large nerve bundles which rush the information to the brain. Focusing the image would also cause the sun's image to be focused on a few receptors, with the possibility of damage under the intense light; shielding the receptors would block out some light and thus reduce their sensitivity. This fast response has led to suggestions that the ocelli of insects are used mainly in flight, because they can be used to detect sudden changes in which way is up (because light, especially UV light which is absorbed by vegetation, usually comes from above).
Multiple lenses
Some marine organisms bear more than one lens; for instance the copepod Pontella has three. The outer has a parabolic surface, countering the effects of spherical aberration while allowing a sharp image to be formed. Another copepod, Copilia, has two lenses in each eye, arranged like those in a telescope. Such arrangements are rare and poorly understood, but represent an alternative construction.
Multiple lenses are seen in some hunters such as eagles and jumping spiders, which have a refractive cornea: these have a negative lens, enlarging the observed image by up to 50% over the receptor cells, thus increasing their optical resolution.
Refractive cornea
In the eyes of most mammals, birds, reptiles, and most other terrestrial vertebrates (along with spiders and some insect larvae) the vitreous fluid has a higher refractive index than the air. In general, the lens is not spherical. Spherical lenses produce spherical aberration. In refractive corneas, the lens tissue is corrected with inhomogeneous lens material (see Luneburg lens), or with an aspheric shape. Flattening the lens has a disadvantage; the quality of vision is diminished away from the main line of focus. Thus, animals that have evolved with a wide field-of-view often have eyes that make use of an inhomogeneous lens.
As mentioned above, a refractive cornea is only useful out of water. In water, there is little difference in refractive index between the vitreous fluid and the surrounding water. Hence creatures that have returned to the water—penguins and seals, for example—lose their highly curved cornea and return to lens-based vision. An alternative solution, borne by some divers, is to have a very strongly focusing cornea.
A unique feature of most mammal eyes is the presence of eyelids which wipe the eye and spread tears across the cornea to prevent dehydration. These eyelids are also supplemented by the presence of eyelashes, multiple rows of highly innervated and sensitive hairs which grow from the eyelid margins to protect the eye from fine particles and small irritants such as insects.
Reflector eyes
An alternative to a lens is to line the inside of the eye with "mirrors", and reflect the image to focus at a central point. The nature of these eyes means that if one were to peer into the pupil of an eye, one would see the same image that the organism would see, reflected back out.
Many small organisms such as rotifers, copepods and flatworms use such organs, but these are too small to produce usable images. Some larger organisms, such as scallops, also use reflector eyes. The scallop Pecten has up to 100 millimetre-scale reflector eyes fringing the edge of its shell. It detects moving objects as they pass successive lenses.
There is at least one vertebrate, the spookfish, whose eyes include reflective optics for focusing of light. Each of the two eyes of a spookfish collects light from both above and below; the light coming from above is focused by a lens, while that coming from below, by a curved mirror composed of many layers of small reflective plates made of guanine crystals.
Compound eyes
A compound eye may consist of thousands of individual photoreceptor units or ommatidia (ommatidium, singular). The image perceived is a combination of inputs from the numerous ommatidia (individual "eye units"), which are located on a convex surface, thus pointing in slightly different directions. Compared with simple eyes, compound eyes possess a very large view angle, and can detect fast movement and, in some cases, the polarisation of light. Because the individual lenses are so small, the effects of diffraction impose a limit on the possible resolution that can be obtained (assuming that they do not function as phased arrays). This can only be countered by increasing lens size and number. To see with a resolution comparable to that of our simple eyes, humans would require impractically large compound eyes.
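As a rough illustration of the diffraction limit mentioned above, the short Python sketch below estimates the angular resolution of a single small lens from θ ≈ λ/D. The facet diameter and wavelength used are assumed, order-of-magnitude values, not figures taken from this article.

```python
import math

def diffraction_limit_deg(wavelength_m, aperture_m):
    """Approximate diffraction-limited angular resolution, in degrees,
    of a single lens of diameter aperture_m (theta ~ lambda / D)."""
    return math.degrees(wavelength_m / aperture_m)

# Assumed, illustrative values: green light (500 nm) and a ~25 micrometre facet.
print(diffraction_limit_deg(500e-9, 25e-6))  # ~1.1 degrees, close to the 1 degree limit
# A camera-type eye with a ~5 mm pupil and the same wavelength does far better:
print(diffraction_limit_deg(500e-9, 5e-3))   # ~0.006 degrees (about 0.3 arcminutes)
```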
Compound eyes fall into two groups: apposition eyes, which form multiple inverted images, and superposition eyes, which form a single erect image. Compound eyes are common in arthropods, annelids and some bivalved molluscs. Compound eyes in arthropods grow at their margins by the addition of new ommatidia.
Apposition eyes
Apposition eyes are the most common form of eyes and are presumably the ancestral form of compound eyes. They are found in all arthropod groups, although they may have evolved more than once within this phylum. Some annelids and bivalves also have apposition eyes. They are also possessed by Limulus, the horseshoe crab, and there are suggestions that other chelicerates developed their simple eyes by reduction from a compound starting point. (Some caterpillars appear to have evolved compound eyes from simple eyes in the opposite fashion.)
Apposition eyes work by gathering a number of images, one from each ommatidium, and combining them in the brain, with each ommatidium typically contributing a single point of information. The typical apposition eye has a lens focusing light from one direction on the rhabdom, while light from other directions is absorbed by the dark wall of the ommatidium.
Superposition eyes
The second type is named the superposition eye. The superposition eye is divided into three types:
refracting,
reflecting and
parabolic superposition
The refracting superposition eye has a gap between the lens and the rhabdom, and no side wall. Each lens takes light at an angle to its axis and refracts it so that it emerges at the same angle on the other side. The result is an image at half the radius of the eye, which is where the tips of the rhabdoms are. This type of compound eye, for which a minimal size exists below which effective superposition cannot occur, is normally found in nocturnal insects, because it can create images up to 1000 times brighter than equivalent apposition eyes, though at the cost of reduced resolution. In the parabolic superposition compound eye type, seen in arthropods such as mayflies, the parabolic surfaces of the inside of each facet focus light from a reflector to a sensor array. Long-bodied decapod crustaceans such as shrimp, prawns, crayfish and lobsters are alone in having reflecting superposition eyes, which also have a transparent gap but use corner mirrors instead of lenses.
Parabolic superposition
This eye type functions by refracting light, then using a parabolic mirror to focus the image; it combines features of superposition and apposition eyes.
Other
Another kind of compound eye, found in males of Order Strepsiptera, employs a series of simple eyes—eyes having one opening that provides light for an entire image-forming retina. Several of these eyelets together form the strepsipteran compound eye, which is similar to the 'schizochroal' compound eyes of some trilobites. Because each eyelet is a simple eye, it produces an inverted image; those images are combined in the brain to form one unified image. Because the aperture of an eyelet is larger than the facets of a compound eye, this arrangement allows vision under low light levels.
Good fliers such as flies or honey bees, or prey-catching insects such as praying mantis or dragonflies, have specialised zones of ommatidia organised into a fovea area which gives acute vision. In the acute zone, the eyes are flattened and the facets larger. The flattening allows more ommatidia to receive light from a spot and therefore higher resolution. The black spot that can be seen on the compound eyes of such insects, which always seems to look directly at the observer, is called a pseudopupil. This occurs because the ommatidia which one observes "head-on" (along their optical axes) absorb the incident light, while those to one side reflect it.
There are some exceptions from the types mentioned above. Some insects have a so-called single lens compound eye, a transitional type which is something between a superposition type of the multi-lens compound eye and the single lens eye found in animals with simple eyes. Then there is the mysid shrimp, Dioptromysis paucispinosa. The shrimp has an eye of the refracting superposition type; at the rear of each eye there is a single large facet, three times the diameter of the others, and behind it lies an enlarged crystalline cone. This projects an upright image on a specialised retina. The resulting eye is effectively a simple eye within a compound eye.
Another version is a compound eye often referred to as "pseudofaceted", as seen in Scutigera. This type of eye consists of a cluster of numerous ommatidia on each side of the head, organised in a way that resembles a true compound eye.
The body of Ophiocoma wendtii, a type of brittle star, is covered with ommatidia, turning its whole skin into a compound eye. The same is true of many chitons. The tube feet of sea urchins contain photoreceptor proteins, which together act as a compound eye; they lack screening pigments, but can detect the directionality of light by the shadow cast by the animal's opaque body.
Nutrients
The ciliary body is triangular in horizontal section and is coated by a double layer, the ciliary epithelium. The inner layer is transparent and covers the vitreous body, and is continuous from the neural tissue of the retina. The outer layer is highly pigmented, continuous with the retinal pigment epithelium, and constitutes the cells of the dilator muscle.
The vitreous is the transparent, colourless, gelatinous mass that fills the space between the lens of the eye and the retina lining the back of the eye. It is produced by certain retinal cells. It is of rather similar composition to the cornea, but contains very few cells (mostly phagocytes which remove unwanted cellular debris in the visual field, as well as the hyalocytes of Balazs of the surface of the vitreous, which reprocess the hyaluronic acid), no blood vessels, and 98–99% of its volume is water (as opposed to 75% in the cornea) with salts, sugars, vitrosin (a type of collagen), a network of collagen type II fibres with the mucopolysaccharide hyaluronic acid, and also a wide array of proteins in micro amounts. Amazingly, with so little solid matter, it tautly holds the eye.
Evolution
Photoreception is phylogenetically very old, with various theories of phylogenesis. The common origin (monophyly) of all animal eyes is now widely accepted as fact. This is based upon the shared genetic features of all eyes; that is, all modern eyes, varied as they are, have their origins in a proto-eye believed to have evolved some 650-600 million years ago, and the PAX6 gene is considered a key factor in this. The majority of the advancements in early eyes are believed to have taken only a few million years to develop, since the first predator to gain true imaging would have touched off an "arms race" among all species that did not flee the photopic environment. Prey animals and competing predators alike would be at a distinct disadvantage without such capabilities and would be less likely to survive and reproduce. Hence multiple eye types and subtypes developed in parallel (except those of groups, such as the vertebrates, that were only forced into the photopic environment at a late stage).
Eyes in various animals show adaptation to their requirements. For example, the eye of a bird of prey has much greater visual acuity than a human eye, and in some cases can detect ultraviolet radiation. The different forms of eye in, for example, vertebrates and molluscs are examples of parallel evolution, despite their distant common ancestry. Phenotypic convergence of the geometry of cephalopod and most vertebrate eyes creates the impression that the vertebrate eye evolved from an imaging cephalopod eye, but this is not the case, as the reversed roles of their respective ciliary and rhabdomeric opsin classes and different lens crystallins show.
The very earliest "eyes", called eye-spots, were simple patches of photoreceptor protein in unicellular animals. In multicellular beings, multicellular eyespots evolved, physically similar to the receptor patches for taste and smell. These eyespots could only sense ambient brightness: they could distinguish light and dark, but not the direction of the light source.
Through gradual change, the eye-spots of species living in well-lit environments depressed into a shallow "cup" shape. The ability to slightly discriminate directional brightness was achieved by using the angle at which the light hit certain cells to identify the source. The pit deepened over time, the opening diminished in size, and the number of photoreceptor cells increased, forming an effective pinhole camera that was capable of dimly distinguishing shapes. However, the ancestors of modern hagfish, thought to be the protovertebrate, were evidently pushed to very deep, dark waters, where they were less vulnerable to sighted predators, and where it is advantageous to have a convex eye-spot, which gathers more light than a flat or concave one. This would have led to a somewhat different evolutionary trajectory for the vertebrate eye than for other animal eyes.
The thin overgrowth of transparent cells over the eye's aperture, originally formed to prevent damage to the eyespot, allowed the segregated contents of the eye chamber to specialise into a transparent humour that optimised colour filtering, blocked harmful radiation, improved the eye's refractive index, and allowed functionality outside of water. The transparent protective cells eventually split into two layers, with circulatory fluid in between that allowed wider viewing angles and greater imaging resolution, and the thickness of the transparent layer gradually increased, in most species with the transparent crystallin protein.
The gap between tissue layers naturally formed a biconvex shape, an optimal structure for a normal refractive index. Independently, a transparent layer and a nontransparent layer split forward from the lens: the cornea and iris. Separation of the forward layer again formed a humour, the aqueous humour. This increased refractive power and again eased circulatory problems. Formation of a nontransparent ring allowed more blood vessels, more circulation, and larger eye sizes.
Relationship to life requirements
Eyes are generally adapted to the environment and life requirements of the organism which bears them. For instance, the distribution of photoreceptors tends to match the area in which the highest acuity is required, with horizon-scanning organisms, such as those that live on the African plains, having a horizontal line of high-density ganglia, while tree-dwelling creatures which require good all-round vision tend to have a symmetrical distribution of ganglia, with acuity decreasing outwards from the centre.
Of course, for most eye types, it is impossible to diverge from a spherical form, so only the density of optical receptors can be altered. In organisms with compound eyes, it is the number of ommatidia rather than ganglia that reflects the region of highest data acquisition. Optical superposition eyes are constrained to a spherical shape, but other forms of compound eyes may deform to a shape where more ommatidia are aligned to, say, the horizon, without altering the size or density of individual ommatidia. Eyes of horizon-scanning organisms have stalks so they can be easily aligned to the horizon when this is inclined, for example, if the animal is on a slope.
An extension of this concept is that the eyes of predators typically have a zone of very acute vision at their centre, to assist in the identification of prey. In deep water organisms, it may not be the centre of the eye that is enlarged. The hyperiid amphipods are deep water animals that feed on organisms above them. Their eyes are almost divided into two, with the upper region thought to be involved in detecting the silhouettes of potential prey—or predators—against the faint light of the sky above. Accordingly, deeper water hyperiids, where the light against which the silhouettes must be compared is dimmer, have larger "upper-eyes", and may lose the lower portion of their eyes altogether. In the giant Antarctic isopod Glyptonotus a small ventral compound eye is physically completely separated from the much larger dorsal compound eye. Depth perception can be enhanced by having eyes which are enlarged in one direction; distorting the eye slightly allows the distance to the object to be estimated with a high degree of accuracy.
Acuity is higher among male organisms that mate in mid-air, as they need to be able to spot and assess potential mates against a very large backdrop. On the other hand, the eyes of organisms which operate in low light levels, such as around dawn and dusk or in deep water, tend to be larger to increase the amount of light that can be captured.
It is not only the shape of the eye that may be affected by lifestyle. Eyes can be the most visible parts of organisms, and this can act as a pressure on organisms to have more transparent eyes at the cost of function.
Eyes may be mounted on stalks to provide better all-round vision, by lifting them above an organism's carapace; this also allows them to track predators or prey without moving the head.
Physiology
Visual acuity
Visual acuity, or resolving power, is "the ability to distinguish fine detail" and is the property of cone cells. It is often measured in cycles per degree (CPD), which measures an angular resolution, or how much an eye can differentiate one object from another in terms of visual angles. Resolution in CPD can be measured by bar charts of different numbers of white/black stripe cycles. For example, if each pattern is 1.75 cm wide and is placed at 1 m distance from the eye, it will subtend an angle of 1 degree, so the number of white/black bar pairs on the pattern will be a measure of the cycles per degree of that pattern. The highest such number that the eye can resolve as stripes, or distinguish from a grey block, is then the measurement of visual acuity of the eye.
For a human eye with excellent acuity, the maximum theoretical resolution is 50 CPD (1.2 arcminute per line pair, or a 0.35 mm line pair, at 1 m). A rat can resolve only about 1 to 2 CPD. A horse has higher acuity through most of the visual field of its eyes than a human has, but does not match the high acuity of the human eye's central fovea region.
Spherical aberration limits the resolution of a 7 mm pupil to about 3 arcminutes per line pair. At a pupil diameter of 3 mm, the spherical aberration is greatly reduced, resulting in an improved resolution of approximately 1.7 arcminutes per line pair. A resolution of 2 arcminutes per line pair, equivalent to a 1 arcminute gap in an optotype, corresponds to 20/20 (normal vision) in humans.
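The acuity arithmetic above can be made concrete with a small sketch. The helper functions below are illustrative and their names are not standard; they simply reproduce the cycles-per-degree and arcminute conversions described in the preceding paragraphs.

```python
import math

def cycles_per_degree(pattern_width_m, distance_m, n_line_pairs):
    """Cycles per degree tested by a bar pattern of the given width,
    viewed from the given distance, carrying n_line_pairs black/white pairs."""
    angle_deg = math.degrees(2 * math.atan(pattern_width_m / (2 * distance_m)))
    return n_line_pairs / angle_deg

# The worked example above: a 1.75 cm pattern at 1 m subtends about 1 degree,
# so the number of line pairs on it is (nearly) the CPD value it tests.
print(cycles_per_degree(0.0175, 1.0, 50))   # ~49.9 CPD

def arcmin_per_pair_to_cpd(arcmin_per_line_pair):
    """Convert arcminutes per line pair into cycles per degree."""
    return 60.0 / arcmin_per_line_pair

print(arcmin_per_pair_to_cpd(1.2))  # 50 CPD, the theoretical human maximum
print(arcmin_per_pair_to_cpd(2.0))  # 30 CPD, corresponding to 20/20 vision
```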
However, in the compound eye, the resolution is related to the size of individual ommatidia and the distance between neighbouring ommatidia. Physically these cannot be reduced in size to achieve the acuity seen with single lensed eyes as in mammals. Compound eyes have a much lower acuity than vertebrate eyes.
Colour perception
"Colour vision is the faculty of the organism to distinguish lights of different spectral qualities." All organisms are restricted to a small range of electromagnetic spectrum; this varies from creature to creature, but is mainly between wavelengths of 400 and 700 nm. This is a rather small section of the electromagnetic spectrum, probably reflecting the submarine evolution of the organ: water blocks out all but two small windows of the EM spectrum, and there has been no evolutionary pressure among land animals to broaden this range.
The most sensitive pigment, rhodopsin, has a peak response at 500 nm. Small changes to the genes coding for this protein can tweak the peak response by a few nm; pigments in the lens can also filter incoming light, changing the peak response. Many organisms are unable to discriminate between colours, seeing instead in shades of grey; colour vision necessitates a range of pigment cells which are primarily sensitive to smaller ranges of the spectrum. In primates, geckos, and other organisms, these take the form of cone cells, from which the more sensitive rod cells evolved. Even if organisms are physically capable of discriminating different colours, this does not necessarily mean that they can perceive the different colours; only with behavioural tests can this be deduced.
Most organisms with colour vision can detect ultraviolet light. This high energy light can be damaging to receptor cells. With a few exceptions (snakes, placental mammals), most organisms avoid these effects by having absorbent oil droplets around their cone cells. The alternative, developed by organisms that had lost these oil droplets in the course of evolution, is to make the lens impervious to UV light—this precludes the possibility of any UV light being detected, as it does not even reach the retina.
Rods and cones
The retina contains two major types of light-sensitive photoreceptor cells used for vision: the rods and the cones.
Rods cannot distinguish colours, but are responsible for low-light (scotopic) monochrome (black-and-white) vision; they work well in dim light as they contain a pigment, rhodopsin (visual purple), which is sensitive at low light intensity, but saturates at higher (photopic) intensities. Rods are distributed throughout the retina but there are none at the fovea and none at the blind spot. Rod density is greater in the peripheral retina than in the central retina.
Cones are responsible for colour vision. They require brighter light to function than rods require. In humans, there are three types of cones, maximally sensitive to long-wavelength, medium-wavelength, and short-wavelength light (often referred to as red, green, and blue, respectively, though the sensitivity peaks are not actually at these colours). The colour seen is the combined effect of stimuli to, and responses from, these three types of cone cells. Cones are mostly concentrated in and near the fovea. Only a few are present at the sides of the retina. Objects are seen most sharply in focus when their images fall on the fovea, as when one looks at an object directly. Cone cells and rods are connected through intermediate cells in the retina to nerve fibres of the optic nerve. When rods and cones are stimulated by light, they connect through adjoining cells within the retina to send an electrical signal to the optic nerve fibres. The optic nerves send off impulses through these fibres to the brain.
Pigmentation
The pigment molecules used in the eye are various, but can be used to define the evolutionary distance between different groups, and can also be an aid in determining which are closely related—although problems of convergence do exist.
Opsins are the pigments involved in photoreception. Other pigments, such as melanin, are used to shield the photoreceptor cells from light leaking in from the sides. The opsin protein group evolved long before the last common ancestor of animals, and has continued to diversify since.
There are two types of opsin involved in vision; c-opsins, which are associated with ciliary-type photoreceptor cells, and r-opsins, associated with rhabdomeric photoreceptor cells. The eyes of vertebrates usually contain ciliary cells with c-opsins, and (bilaterian) invertebrates have rhabdomeric cells in the eye with r-opsins. However, some ganglion cells of vertebrates express r-opsins, suggesting that their ancestors used this pigment in vision, and that remnants survive in the eyes. Likewise, c-opsins have been found to be expressed in the brain of some invertebrates. They may have been expressed in ciliary cells of larval eyes, which were subsequently resorbed into the brain on metamorphosis to the adult form. C-opsins are also found in some derived bilaterian-invertebrate eyes, such as the pallial eyes of the bivalve molluscs; however, the lateral eyes (which were presumably the ancestral type for this group, if eyes evolved once there) always use r-opsins. Cnidaria, which are an outgroup to the taxa mentioned above, express c-opsins—but r-opsins are yet to be found in this group. Incidentally, the melanin produced in the cnidaria is produced in the same fashion as that in vertebrates, suggesting the common descent of this pigment.
| Biology and health sciences | Biology | null |
158005 | https://en.wikipedia.org/wiki/Genetic%20recombination | Genetic recombination | Genetic recombination (also known as genetic reshuffling) is the exchange of genetic material between different organisms which leads to production of offspring with combinations of traits that differ from those found in either parent. In eukaryotes, genetic recombination during meiosis can lead to a novel set of genetic information that can be further passed on from parents to offspring. Most recombination occurs naturally and can be classified into two types: (1) interchromosomal recombination, occurring through independent assortment of alleles whose loci are on different but homologous chromosomes (random orientation of pairs of homologous chromosomes in meiosis I); & (2) intrachromosomal recombination, occurring through crossing over.
During meiosis in eukaryotes, genetic recombination involves the pairing of homologous chromosomes. This may be followed by information transfer between the chromosomes. The information transfer may occur without physical exchange (a section of genetic material is copied from one chromosome to another, without the donating chromosome being changed) (see SDSA – Synthesis Dependent Strand Annealing pathway in Figure); or by the breaking and rejoining of DNA strands, which forms new molecules of DNA (see DHJ pathway in Figure).
Recombination may also occur during mitosis in eukaryotes where it ordinarily involves the two sister chromatids formed after chromosomal replication. In this case, new combinations of alleles are not produced since the sister chromatids are usually identical. In meiosis and mitosis, recombination occurs between similar molecules of DNA (homologous sequences). In meiosis, non-sister homologous chromosomes pair with each other so that recombination characteristically occurs between non-sister homologues. In both meiotic and mitotic cells, recombination between homologous chromosomes is a common mechanism used in DNA repair.
Gene conversion – the process during which homologous sequences are made identical – also falls under genetic recombination.
Genetic recombination and recombinational DNA repair also occurs in bacteria and archaea, which use asexual reproduction.
Recombination can be artificially induced in laboratory (in vitro) settings, producing recombinant DNA for purposes including vaccine development.
V(D)J recombination in organisms with an adaptive immune system is a type of site-specific genetic recombination that helps immune cells rapidly diversify to recognize and adapt to new pathogens.
Synapsis
During meiosis, synapsis (the pairing of homologous chromosomes) ordinarily precedes genetic recombination.
Mechanism
Genetic recombination is catalyzed by many different enzymes. Recombinases are key enzymes that catalyse the strand transfer step during recombination. RecA, the chief recombinase found in Escherichia coli, is responsible for the repair of DNA double strand breaks (DSBs). In yeast and other eukaryotic organisms there are two recombinases required for repairing DSBs. The RAD51 protein is required for mitotic and meiotic recombination, whereas the DNA repair protein, DMC1, is specific to meiotic recombination. In the archaea, the ortholog of the bacterial RecA protein is RadA.
Bacterial recombination
Bacteria regularly undergo genetic recombination in three main ways:
Transformation, the uptake of exogenous DNA from the surrounding environment.
Transduction, the virus-mediated transfer of DNA between bacteria.
Conjugation, the transfer of DNA from one bacterium to another via cell-to-cell contact.
Sometimes a strand of DNA is transferred into the target cell but fails to be copied as the target divides. This is called an abortive transfer.
Chromosomal crossover
In eukaryotes, recombination during meiosis is facilitated by chromosomal crossover. The crossover process leads to offspring having different combinations of genes from those of their parents, and can occasionally produce new chimeric alleles. The shuffling of genes brought about by genetic recombination produces increased genetic variation. It also allows sexually reproducing organisms to avoid Muller's ratchet, in which the genomes of an asexual population tend to accumulate more deleterious mutations over time than beneficial or reversing mutations.
Chromosomal crossover involves recombination between the paired chromosomes inherited from each of one's parents, generally occurring during meiosis. During prophase I (pachytene stage) the four available chromatids are in tight formation with one another. While in this formation, homologous sites on two chromatids can closely pair with one another, and may exchange genetic information.
Because there is a small probability of recombination at any location along a chromosome, the frequency of recombination between two locations depends on the distance separating them. Therefore, for genes sufficiently distant on the same chromosome, the amount of crossover is high enough to destroy the correlation between alleles.
Tracking the movement of genes resulting from crossovers has proven quite useful to geneticists. Because two genes that are close together are less likely to become separated than genes that are farther apart, geneticists can deduce roughly how far apart two genes are on a chromosome if they know the frequency of the crossovers. Geneticists can also use this method to infer the presence of certain genes. Genes that typically stay together during recombination are said to be linked. One gene in a linked pair can sometimes be used as a marker to deduce the presence of the other gene. This is typically used to detect the presence of a disease-causing gene.
The recombination frequency observed between two loci is the crossing-over value. It is the frequency of crossing over between two linked gene loci (markers), and depends on the distance between the genetic loci observed. For any fixed set of genetic and environmental conditions, recombination in a particular region of a linkage structure (chromosome) tends to be constant, and the same is then true for the crossing-over value, which is used in the production of genetic maps.
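As an illustration of how a crossing-over value is turned into a map distance, the following sketch uses hypothetical offspring counts; the 1 cM ≈ 1% recombination rule it applies is a standard approximation that holds only for closely linked loci.

```python
def recombination_frequency(recombinant, parental):
    """Crossing-over value: fraction of offspring that are recombinant."""
    return recombinant / (recombinant + parental)

def map_distance_cM(recombinant, parental):
    """Map distance in centimorgans, using 1 cM ~ 1% recombination
    (a reasonable approximation only for closely linked loci)."""
    return 100 * recombination_frequency(recombinant, parental)

# Hypothetical test cross: 85 parental-type and 15 recombinant offspring.
print(recombination_frequency(15, 85))  # 0.15
print(map_distance_cM(15, 85))          # ~15 cM between the two markers
```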
Gene conversion
In gene conversion, a section of genetic material is copied from one chromosome to another, without the donating chromosome being changed. Gene conversion occurs at high frequency at the actual site of the recombination event during meiosis. It is a process by which a DNA sequence is copied from one DNA helix (which remains unchanged) to another DNA helix, whose sequence is altered. Gene conversion has often been studied in fungal crosses where the 4 products of individual meioses can be conveniently observed. Gene conversion events can be distinguished as deviations in an individual meiosis from the normal 2:2 segregation pattern (e.g. a 3:1 pattern).
Nonhomologous recombination
Recombination can occur between DNA sequences that contain no sequence homology. This can cause chromosomal translocations, sometimes leading to cancer.
In B cells
B cells of the immune system perform genetic recombination, called immunoglobulin class switching. It is a biological mechanism that changes an antibody from one class to another, for example, from an isotype called IgM to an isotype called IgG.
Genetic engineering
In genetic engineering, recombination can also refer to artificial and deliberate recombination of disparate pieces of DNA, often from different organisms, creating what is called recombinant DNA. A prime example of such a use of genetic recombination is gene targeting, which can be used to add, delete or otherwise change an organism's genes. This technique is important to biomedical researchers as it allows them to study the effects of specific genes. Techniques based on genetic recombination are also applied in protein engineering to develop new proteins of biological interest.
Examples include restriction enzyme-mediated integration, Gibson assembly and Golden Gate cloning.
Recombinational repair
DNA damages caused by a variety of exogenous agents (e.g. UV light, X-rays, chemical cross-linking agents) can be repaired by homologous recombinational repair (HRR). These findings suggest that DNA damages arising from natural processes, such as exposure to reactive oxygen species that are byproducts of normal metabolism, are also repaired by HRR. In humans, deficiencies in the gene products necessary for HRR during meiosis likely cause infertility. Deficiencies in gene products necessary for HRR, such as BRCA1 and BRCA2, increase the risk of cancer (see DNA repair-deficiency disorder).
In bacteria, transformation is a process of gene transfer that ordinarily occurs between individual cells of the same bacterial species. Transformation involves integration of donor DNA into the recipient chromosome by recombination. This process appears to be an adaptation for repairing DNA damages in the recipient chromosome by HRR. Transformation may provide a benefit to pathogenic bacteria by allowing repair of DNA damage, particularly damages that occur in the inflammatory, oxidizing environment associated with infection of a host.
When two or more viruses, each containing lethal genomic damages, infect the same host cell, the virus genomes can often pair with each other and undergo HRR to produce viable progeny. This process, referred to as multiplicity reactivation, has been studied in lambda and T4 bacteriophages, as well as in several pathogenic viruses. In the case of pathogenic viruses, multiplicity reactivation may be an adaptive benefit to the virus since it allows the repair of DNA damages caused by exposure to the oxidizing environment produced during host infection. | Biology and health sciences | Genetics | Biology |
158011 | https://en.wikipedia.org/wiki/Lipid%20bilayer | Lipid bilayer | The lipid bilayer (or phospholipid bilayer) is a thin polar membrane made of two layers of lipid molecules. These membranes form a continuous barrier around all cells. The cell membranes of almost all organisms and many viruses are made of a lipid bilayer, as are the nuclear membrane surrounding the cell nucleus, and membranes of the membrane-bound organelles in the cell. The lipid bilayer is the barrier that keeps ions, proteins and other molecules where they are needed and prevents them from diffusing into areas where they should not be. Lipid bilayers are ideally suited to this role, even though they are only a few nanometers in width, because they are impermeable to most water-soluble (hydrophilic) molecules. Bilayers are particularly impermeable to ions, which allows cells to regulate salt concentrations and pH by transporting ions across their membranes using proteins called ion pumps.
Biological bilayers are usually composed of amphiphilic phospholipids that have a hydrophilic phosphate head and a hydrophobic tail consisting of two fatty acid chains. Phospholipids with certain head groups can alter the surface chemistry of a bilayer and can, for example, serve as signals as well as "anchors" for other molecules in the membranes of cells. Just like the heads, the tails of lipids can also affect membrane properties, for instance by determining the phase of the bilayer. The bilayer can adopt a solid gel phase state at lower temperatures but undergo phase transition to a fluid state at higher temperatures, and the chemical properties of the lipids' tails influence at which temperature this happens. The packing of lipids within the bilayer also affects its mechanical properties, including its resistance to stretching and bending. Many of these properties have been studied with the use of artificial "model" bilayers produced in a lab. Vesicles made by model bilayers have also been used clinically to deliver drugs.
The structure of biological membranes typically includes several types of molecules in addition to the phospholipids comprising the bilayer. A particularly important example in animal cells is cholesterol, which helps strengthen the bilayer and decrease its permeability. Cholesterol also helps regulate the activity of certain integral membrane proteins. Integral membrane proteins function when incorporated into a lipid bilayer, and they are held tightly to the lipid bilayer with the help of an annular lipid shell. Because bilayers define the boundaries of the cell and its compartments, these membrane proteins are involved in many intra- and inter-cellular signaling processes. Certain kinds of membrane proteins are involved in the process of fusing two bilayers together. This fusion allows the joining of two distinct structures as in the acrosome reaction during fertilization of an egg by a sperm, or the entry of a virus into a cell. Because lipid bilayers are fragile and invisible in a traditional microscope, they are a challenge to study. Experiments on bilayers often require advanced techniques like electron microscopy and atomic force microscopy.
Structure and organization
When phospholipids are exposed to water, they self-assemble into a two-layered sheet with the hydrophobic tails pointing toward the center of the sheet. This arrangement results in two 'leaflets' that are each a single molecular layer. The center of this bilayer contains almost no water and excludes molecules like sugars or salts that dissolve in water. The assembly process and maintenance are driven by aggregation of hydrophobic molecules (also called the hydrophobic effect). This complex process includes non-covalent interactions such as van der Waals forces, electrostatic and hydrogen bonds.
Cross-section analysis
The lipid bilayer is very thin compared to its lateral dimensions. If a typical mammalian cell (diameter ~10 micrometers) were magnified to the size of a watermelon (~1 ft/30 cm), the lipid bilayer making up the plasma membrane would be about as thick as a piece of office paper. Despite being only a few nanometers thick, the bilayer is composed of several distinct chemical regions across its cross-section. These regions and their interactions with the surrounding water have been characterized over the past several decades with x-ray reflectometry, neutron scattering, and nuclear magnetic resonance techniques.
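A quick back-of-the-envelope check of this analogy, with an assumed bilayer thickness of about 5 nm and office paper of about 0.1 mm (neither figure is stated above):

```python
# Assumed values: ~5 nm bilayer, ~0.1 mm office paper; neither figure appears in the text.
cell_diameter_m     = 10e-6   # typical mammalian cell, ~10 micrometres
watermelon_m        = 0.30    # ~30 cm watermelon
bilayer_thickness_m = 5e-9    # typical lipid bilayer thickness (assumed)

scale = watermelon_m / cell_diameter_m           # magnification factor: 30,000x
scaled_bilayer_m = bilayer_thickness_m * scale   # 1.5e-4 m, i.e. 0.15 mm
print(scale, scaled_bilayer_m)                   # comparable to ~0.1 mm office paper
```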
The first region on either side of the bilayer is the hydrophilic headgroup. This portion of the membrane is completely hydrated and is typically around 0.8-0.9 nm thick. In phospholipid bilayers the phosphate group is located within this hydrated region, approximately 0.5 nm outside the hydrophobic core. In some cases, the hydrated region can extend much further, for instance in lipids with a large protein or long sugar chain grafted to the head. One common example of such a modification in nature is the lipopolysaccharide coat on a bacterial outer membrane.
Next to the hydrated region is an intermediate region that is only partially hydrated. This boundary layer is approximately 0.3 nm thick. Within this short distance, the water concentration drops from 2M on the headgroup side to nearly zero on the tail (core) side. The hydrophobic core of the bilayer is typically 3-4 nm thick, but this value varies with chain length and chemistry. Core thickness also varies significantly with temperature, in particular near a phase transition.
Asymmetry
In many naturally occurring bilayers, the compositions of the inner and outer membrane leaflets are different. In human red blood cells, the inner (cytoplasmic) leaflet is composed mostly of phosphatidylethanolamine, phosphatidylserine and phosphatidylinositol and its phosphorylated derivatives. By contrast, the outer (extracellular) leaflet is based on phosphatidylcholine, sphingomyelin and a variety of glycolipids. In some cases, this asymmetry is based on where the lipids are made in the cell and reflects their initial orientation. The biological functions of lipid asymmetry are imperfectly understood, although it is clear that it is used in several different situations. For example, when a cell undergoes apoptosis, the phosphatidylserine — normally localised to the cytoplasmic leaflet — is transferred to the outer surface: There, it is recognised by a macrophage that then actively scavenges the dying cell.
Lipid asymmetry arises, at least in part, from the fact that most phospholipids are synthesised and initially inserted into the inner monolayer: those that constitute the outer monolayer are then transported from the inner monolayer by a class of enzymes called flippases. Other lipids, such as sphingomyelin, appear to be synthesised at the external leaflet. Flippases are members of a larger family of lipid transport molecules that also includes floppases, which transfer lipids in the opposite direction, and scramblases, which randomize lipid distribution across lipid bilayers (as in apoptotic cells). In any case, once lipid asymmetry is established, it does not normally dissipate quickly because spontaneous flip-flop of lipids between leaflets is extremely slow.
It is possible to mimic this asymmetry in the laboratory in model bilayer systems. Certain types of very small artificial vesicle will automatically make themselves slightly asymmetric, although the mechanism by which this asymmetry is generated is very different from that in cells. By utilizing two different monolayers in Langmuir-Blodgett deposition or a combination of Langmuir-Blodgett and vesicle rupture deposition it is also possible to synthesize an asymmetric planar bilayer. This asymmetry may be lost over time as lipids in supported bilayers can be prone to flip-flop. However, it has been reported that lipid flip-flop is slow compared to that of cholesterol and other smaller molecules.
It has been reported that the organization and dynamics of the lipid monolayers in a bilayer are coupled. For example, introduction of obstructions in one monolayer can slow down the lateral diffusion in both monolayers. In addition, phase separation in one monolayer can also induce phase separation in the other monolayer, even when the other monolayer cannot phase separate by itself.
Phases and phase transitions
At a given temperature a lipid bilayer can exist in either a liquid or a gel (solid) phase. All lipids have a characteristic temperature at which they transition (melt) from the gel to liquid phase. In both phases the lipid molecules are prevented from flip-flopping across the bilayer, but in liquid phase bilayers a given lipid will exchange locations with its neighbor millions of times a second. This random walk exchange allows lipids to diffuse and thus wander across the surface of the membrane. In contrast, the lipids in a gel phase bilayer have much less mobility.
The phase behavior of lipid bilayers is determined largely by the strength of the attractive van der Waals interactions between adjacent lipid molecules. Longer-tailed lipids have more area over which to interact, increasing the strength of this interaction and, as a consequence, decreasing the lipid mobility. Thus, at a given temperature, a short-tailed lipid will be more fluid than an otherwise identical long-tailed lipid. Transition temperature can also be affected by the degree of unsaturation of the lipid tails. An unsaturated double bond can produce a kink in the alkane chain, disrupting the lipid packing. This disruption creates extra free space within the bilayer that allows additional flexibility in the adjacent chains. An example of this effect can be noted in everyday life as butter, which has a large percentage of saturated fats, is solid at room temperature while vegetable oil, which is mostly unsaturated, is liquid.
Most natural membranes are a complex mixture of different lipid molecules. If some of the components are liquid at a given temperature while others are in the gel phase, the two phases can coexist in spatially separated regions, rather like an iceberg floating in the ocean. This phase separation plays a critical role in biochemical phenomena because membrane components such as proteins can partition into one or the other phase and thus be locally concentrated or activated. One particularly important component of many mixed phase systems is cholesterol, which modulates bilayer permeability, mechanical strength, and biochemical interactions.
Surface chemistry
While lipid tails primarily modulate bilayer phase behavior, it is the headgroup that determines the bilayer surface chemistry. Most natural bilayers are composed primarily of phospholipids, but sphingolipids and sterols such as cholesterol are also important components. Of the phospholipids, the most common headgroup is phosphatidylcholine (PC), accounting for about half the phospholipids in most mammalian cells. PC is a zwitterionic headgroup, as it has a negative charge on the phosphate group and a positive charge on the amine but, because these local charges balance, no net charge.
Other headgroups are also present to varying degrees and can include phosphatidylserine (PS), phosphatidylethanolamine (PE) and phosphatidylglycerol (PG). These alternate headgroups often confer specific biological functionality that is highly context-dependent. For instance, PS presence on the extracellular membrane face of erythrocytes is a marker of cell apoptosis, whereas PS in growth plate vesicles is necessary for the nucleation of hydroxyapatite crystals and subsequent bone mineralization. Unlike PC, some of the other headgroups carry a net charge, which can alter the electrostatic interactions of small molecules with the bilayer.
Biological roles
Containment and separation
The primary role of the lipid bilayer in biology is to separate aqueous compartments from their surroundings. Without some form of barrier delineating “self” from “non-self”, it is difficult to even define the concept of an organism or of life. This barrier takes the form of a lipid bilayer in all known life forms except for a few species of archaea that utilize a specially adapted lipid monolayer. It has even been proposed that the very first form of life may have been a simple lipid vesicle with virtually its sole biosynthetic capability being the production of more phospholipids. The partitioning ability of the lipid bilayer is based on the fact that hydrophilic molecules cannot easily cross the hydrophobic bilayer core, as discussed in Transport across the bilayer below. The nucleus, mitochondria and chloroplasts have two lipid bilayers, while other sub-cellular structures are surrounded by a single lipid bilayer (such as the plasma membrane, endoplasmic reticula, Golgi apparatus and lysosomes). See Organelle.
Prokaryotes have only one lipid bilayer - the cell membrane (also known as the plasma membrane). Many prokaryotes also have a cell wall, but the cell wall is composed of proteins or long chain carbohydrates, not lipids. In contrast, eukaryotes have a range of organelles including the nucleus, mitochondria, lysosomes and endoplasmic reticulum. All of these sub-cellular compartments are surrounded by one or more lipid bilayers and, together, typically comprise the majority of the bilayer area present in the cell. In liver hepatocytes for example, the plasma membrane accounts for only two percent of the total bilayer area of the cell, whereas the endoplasmic reticulum contains more than fifty percent and the mitochondria a further thirty percent.
Signaling
The most familiar form of cellular signaling is likely synaptic transmission, whereby a nerve impulse that has reached the end of one neuron is conveyed to an adjacent neuron via the release of neurotransmitters. This transmission is made possible by the action of synaptic vesicles which are, inside the cell, loaded with the neurotransmitters to be released later. These loaded vesicles fuse with the cell membrane at the pre-synaptic terminal and their contents are released into the space outside the cell. The contents then diffuse across the synapse to the post-synaptic terminal.
Lipid bilayers are also involved in signal transduction through their role as the home of integral membrane proteins. This is an extremely broad and important class of biomolecule. It is estimated that up to a third of the human proteome are membrane proteins. Some of these proteins are linked to the exterior of the cell membrane. An example of this is the CD59 protein, which identifies cells as “self” and thus inhibits their destruction by the immune system. The HIV virus evades the immune system in part by grafting these proteins from the host membrane onto its own surface. Alternatively, some membrane proteins penetrate all the way through the bilayer and serve to relay individual signal events from the outside to the inside of the cell. The most common class of this type of protein is the G protein-coupled receptor (GPCR). GPCRs are responsible for much of the cell's ability to sense its surroundings and, because of this important role, approximately 40% of all modern drugs are targeted at GPCRs.
In addition to protein- and solution-mediated processes, it is also possible for lipid bilayers to participate directly in signaling. A classic example of this is phosphatidylserine-triggered phagocytosis. Normally, phosphatidylserine is asymmetrically distributed in the cell membrane and is present only on the interior side. During programmed cell death a protein called a scramblase equilibrates this distribution, displaying phosphatidylserine on the extracellular bilayer face. The presence of phosphatidylserine then triggers phagocytosis to remove the dead or dying cell.
Characterization methods
The lipid bilayer is a difficult structure to study because it is so thin and fragile. To overcome these limitations, techniques have been developed to allow investigations of its structure and function.
Electrical measurements
Electrical measurements are a straightforward way to characterize an important function of a bilayer: its ability to segregate and prevent the flow of ions in solution. By applying a voltage across the bilayer and measuring the resulting current, the resistance of the bilayer is determined. This resistance is typically quite high (10⁸ Ω·cm² or more) since the hydrophobic core is impermeable to charged species. The presence of even a few nanometer-scale holes results in a dramatic increase in current. The sensitivity of this system is such that even the activity of single ion channels can be resolved.
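As a minimal sketch of the measurement just described, the function below converts an applied voltage, the measured current and the membrane area into an area-normalised resistance; the numbers in the example are hypothetical.

```python
def specific_resistance_ohm_cm2(voltage_V, current_A, area_cm2):
    """Area-normalised membrane resistance from an applied voltage,
    the measured current and the bilayer area (R_specific = (V / I) * A)."""
    return (voltage_V / current_A) * area_cm2

# Hypothetical numbers: 100 mV across a 0.01 cm2 planar bilayer driving 10 pA.
print(specific_resistance_ohm_cm2(0.1, 10e-12, 0.01))  # 1e8 Ohm cm2
```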
Fluorescence microscopy
A lipid bilayer cannot be seen with a traditional microscope because it is too thin, so researchers often use fluorescence microscopy. A sample is excited with one wavelength of light and observed in another, so that only fluorescent molecules with a matching excitation and emission profile will be seen. A natural lipid bilayer is not fluorescent, so at least one fluorescent dye needs to be attached to some of the molecules in the bilayer. Resolution is usually limited to a few hundred nanometers, which is unfortunately much larger than the thickness of a lipid bilayer.
Electron microscopy
Electron microscopy offers a higher resolution image. In an electron microscope, a beam of focused electrons interacts with the sample rather than a beam of light as in traditional microscopy. In conjunction with rapid freezing techniques, electron microscopy has also been used to study the mechanisms of inter- and intracellular transport, for instance in demonstrating that exocytotic vesicles are the means of chemical release at synapses.
Nuclear magnetic resonance spectroscopy
31P-Nuclear magnetic resonance spectroscopy is widely used for studies of phospholipid bilayers and biological membranes in native conditions. The analysis of 31P-NMR spectra of lipids can provide a wide range of information about lipid bilayer packing, phase transitions (gel phase, physiological liquid crystal phase, ripple phases, non-bilayer phases), lipid head group orientation/dynamics, and the elastic properties of pure lipid bilayers, as well as the changes in these properties that result from the binding of proteins and other biomolecules.
Atomic force microscopy
Atomic force microscopy (AFM) is a newer method for studying lipid bilayers. Rather than using a beam of light or particles, a very small sharpened tip scans the surface by making physical contact with the bilayer and moving across it, like a record player needle. AFM is a promising technique because it has the potential to image with nanometer resolution at room temperature and even under water or physiological buffer, conditions necessary for natural bilayer behavior. Utilizing this capability, AFM has been used to examine dynamic bilayer behavior including the formation of transmembrane pores (holes) and phase transitions in supported bilayers. Another advantage is that AFM does not require fluorescent or isotopic labeling of the lipids, since the probe tip interacts mechanically with the bilayer surface. Because of this, the same scan can image both lipids and associated proteins, sometimes even with single-molecule resolution. AFM can also probe the mechanical nature of lipid bilayers.
Dual polarisation interferometry
Lipid bilayers exhibit high levels of birefringence where the refractive index in the plane of the bilayer differs from that perpendicular by as much as 0.1 refractive index units. This has been used to characterise the degree of order and disruption in bilayers using dual polarisation interferometry to understand mechanisms of protein interaction.
Quantum chemical calculations
Lipid bilayers are complicated molecular systems with many degrees of freedom. Thus, atomistic simulation of membranes, and in particular ab initio calculation of their properties, is difficult and computationally expensive. Quantum chemical calculations have recently been performed successfully to estimate the dipole and quadrupole moments of lipid membranes.
Transport across the bilayer
Passive diffusion
Most polar molecules have low solubility in the hydrocarbon core of a lipid bilayer and, as a consequence, have low permeability coefficients across the bilayer. This effect is particularly pronounced for charged species, which have even lower permeability coefficients than neutral polar molecules. Anions typically have a higher rate of diffusion through bilayers than cations. Compared to ions, water molecules actually have a relatively large permeability through the bilayer, as evidenced by osmotic swelling. When a cell or vesicle with a high interior salt concentration is placed in a solution with a low salt concentration, it will swell and eventually burst. Such a result would not be observed unless water was able to pass through the bilayer with relative ease. The anomalously large permeability of water through bilayers is still not completely understood and continues to be the subject of active debate. Small uncharged apolar molecules diffuse through lipid bilayers many orders of magnitude faster than ions or water. This applies both to fats and organic solvents like chloroform and ether. Regardless of their polar character, larger molecules diffuse more slowly across lipid bilayers than small molecules.
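For readers who want to see the permeability coefficient in use, the sketch below applies the standard solubility-diffusion relation for passive flux, J = P·ΔC; the permeability values are rough, illustrative orders of magnitude and are not taken from the text.

```python
# Minimal sketch of passive flux across a bilayer using the standard
# solubility-diffusion relation J = P * (C_out - C_in).
# Permeability coefficients below are rough, illustrative orders of magnitude,
# not values taken from the text.

def flux_mol_per_cm2_s(P_cm_per_s: float, c_out: float, c_in: float) -> float:
    """Net inward flux (mol cm^-2 s^-1); concentrations in mol/cm^3."""
    return P_cm_per_s * (c_out - c_in)

delta_c = 1e-4  # mol/cm^3 concentration difference, purely illustrative
for species, P in [("water", 1e-3), ("urea (polar)", 1e-6), ("K+ (ion)", 1e-12)]:
    print(species, flux_mol_per_cm2_s(P, delta_c, 0.0))
```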
Ion pumps and channels
Two special classes of protein deal with the ionic gradients found across cellular and sub-cellular membranes in nature: ion channels and ion pumps. Both pumps and channels are integral membrane proteins that pass through the bilayer, but their roles are quite different. Ion pumps are the proteins that build and maintain the chemical gradients by utilizing an external energy source to move ions against the concentration gradient to an area of higher chemical potential. The energy source can be ATP, as is the case for the Na+-K+ ATPase. Alternatively, the energy source can be another chemical gradient already in place, as in the Ca2+/Na+ antiporter. It is through the action of ion pumps that cells are able to regulate pH via the pumping of protons.
In contrast to ion pumps, ion channels do not build chemical gradients but rather dissipate them in order to perform work or send a signal. Probably the most familiar and best studied example is the voltage-gated Na+ channel, which allows conduction of an action potential along neurons. All ion channels have some sort of trigger or “gating” mechanism. In the previous example it was electrical bias, but other channels can be activated by binding a molecular agonist or through a conformational change in another nearby protein.
Endocytosis and exocytosis
Some molecules or particles are too large or too hydrophilic to pass through a lipid bilayer. Other molecules could pass through the bilayer but must be transported rapidly in such large numbers that channel-type transport is impractical. In both cases, these types of cargo can be moved across the cell membrane through fusion or budding of vesicles. When a vesicle is produced inside the cell and fuses with the plasma membrane to release its contents into the extracellular space, this process is known as exocytosis. In the reverse process, a region of the cell membrane will dimple inwards and eventually pinch off, enclosing a portion of the extracellular fluid to transport it into the cell. Endocytosis and exocytosis rely on very different molecular machinery to function, but the two processes are intimately linked and could not work without each other. The primary mechanism of this interdependence is the large amount of lipid material involved. In a typical cell, an area of bilayer equivalent to the entire plasma membrane travels through the endocytosis/exocytosis cycle in about half an hour.
Exocytosis in prokaryotes: Membrane vesicular exocytosis, popularly known as membrane vesicle trafficking and recognized with a Nobel Prize in 2013, was traditionally regarded as a prerogative of eukaryotic cells. This view was overturned by the discovery that nanovesicles, popularly known as bacterial outer membrane vesicles, released by gram-negative microbes translocate bacterial signal molecules to host or target cells to carry out multiple processes in favour of the secreting microbe, for example in host cell invasion and in microbe-environment interactions generally.
Electroporation
Electroporation is the rapid increase in bilayer permeability induced by the application of a large artificial electric field across the membrane. Experimentally, electroporation is used to introduce hydrophilic molecules into cells. It is a particularly useful technique for large highly charged molecules such as DNA, which would never passively diffuse across the hydrophobic bilayer core. Because of this, electroporation is one of the key methods of transfection as well as bacterial transformation. It has even been proposed that electroporation resulting from lightning strikes could be a mechanism of natural horizontal gene transfer.
Mechanics
Lipid bilayers are large enough structures to have some of the mechanical properties of liquids or solids. The area compression modulus Ka, the bending modulus Kb, and the edge energy can be used to describe them. Solid lipid bilayers also have a shear modulus, but like any liquid, the shear modulus is zero for fluid bilayers. These mechanical properties affect how the membrane functions. Ka and Kb affect the ability of proteins and small molecules to insert into the bilayer, and bilayer mechanical properties have been shown to alter the function of mechanically activated ion channels. Bilayer mechanical properties also govern what types of stress a cell can withstand without tearing. Although lipid bilayers can easily bend, most cannot stretch more than a few percent before rupturing.
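A minimal sketch of how these quantities are used follows, assuming the commonly quoted linear relation between membrane tension and fractional area expansion (tension = Ka·ΔA/A0); the Ka value and rupture strain below are illustrative assumptions, not figures from the text.

```python
# Minimal sketch, assuming the commonly used linear relation between
# membrane tension and area strain: tension = Ka * (dA / A0).
# The Ka value and rupture strain are illustrative, not taken from the text.

KA_N_PER_M = 0.2        # illustrative area compression modulus (N/m)
rupture_strain = 0.03   # "a few percent" areal strain before rupture

def tension(area_strain: float, ka: float = KA_N_PER_M) -> float:
    """Membrane tension (N/m) for a given fractional area expansion."""
    return ka * area_strain

print(f"approximate lysis tension: {tension(rupture_strain) * 1000:.1f} mN/m")
```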
As discussed in the Structure and organization section, the hydrophobic attraction of lipid tails in water is the primary force holding lipid bilayers together. Thus, the elastic modulus of the bilayer is primarily determined by how much extra area is exposed to water when the lipid molecules are stretched apart. It is not surprising given this understanding of the forces involved that studies have shown that Ka varies strongly with osmotic pressure but only weakly with tail length and unsaturation. Because the forces involved are so small, it is difficult to experimentally determine Ka. Most techniques require sophisticated microscopy and very sensitive measurement equipment.
In contrast to Ka, which is a measure of how much energy is needed to stretch the bilayer, Kb is a measure of how much energy is needed to bend or flex the bilayer. Formally, bending modulus is defined as the energy required to deform a membrane from its intrinsic curvature to some other curvature. Intrinsic curvature is defined by the ratio of the diameter of the head group to that of the tail group. For two-tailed PC lipids, this ratio is nearly one so the intrinsic curvature is nearly zero. If a particular lipid has too large a deviation from zero intrinsic curvature it will not form a bilayer and will instead form other phases such as micelles or inverted micelles. Addition of small hydrophilic molecules like sucrose into mixed lipid lamellar liposomes made from galactolipid-rich thylakoid membranes destabilises bilayers into the micellar phase.
The edge energy is a measure of how much energy it takes to expose a bilayer edge to water by tearing the bilayer or creating a hole in it. The origin of this energy is the fact that creating such an interface exposes some of the lipid tails to water, but the exact orientation of these border lipids is unknown. There is some evidence that both hydrophobic (tails straight) and hydrophilic (heads curved around) pores can coexist.
Fusion
Fusion is the process by which two lipid bilayers merge, resulting in one connected structure. If this fusion proceeds completely through both leaflets of both bilayers, a water-filled bridge is formed and the solutions contained by the bilayers can mix. Alternatively, if only one leaflet from each bilayer is involved in the fusion process, the bilayers are said to be hemifused. Fusion is involved in many cellular processes, in particular in eukaryotes, since the eukaryotic cell is extensively sub-divided by lipid bilayer membranes. Exocytosis, fertilization of an egg by sperm activation, and transport of waste products to the lysosome are a few of the many eukaryotic processes that rely on some form of fusion. Even the entry of pathogens can be governed by fusion, as many bilayer-coated viruses have dedicated fusion proteins to gain entry into the host cell.
There are four fundamental steps in the fusion process. First, the involved membranes must aggregate, approaching each other to within several nanometers. Second, the two bilayers must come into very close contact (within a few angstroms). To achieve this close contact, the two surfaces must become at least partially dehydrated, as the bound surface water normally present causes bilayers to strongly repel. The presence of ions, in particular divalent cations like magnesium and calcium, strongly affects this step. One of the critical roles of calcium in the body is regulating membrane fusion. Third, a destabilization must form at one point between the two bilayers, locally distorting their structures. The exact nature of this distortion is not known. One theory is that a highly curved "stalk" must form between the two bilayers. Proponents of this theory believe that it explains why phosphatidylethanolamine, a highly curved lipid, promotes fusion. Finally, in the last step of fusion, this point defect grows and the components of the two bilayers mix and diffuse away from the site of contact.
The situation is further complicated when considering fusion in vivo since biological fusion is almost always regulated by the action of membrane-associated proteins. The first of these proteins to be studied were the viral fusion proteins, which allow an enveloped virus to insert its genetic material into the host cell (enveloped viruses are those surrounded by a lipid bilayer; some others have only a protein coat). Eukaryotic cells also use fusion proteins, the best-studied of which are the SNAREs. SNARE proteins are used to direct all vesicular intracellular trafficking. Despite years of study, much is still unknown about the function of this protein class. In fact, there is still an active debate regarding whether SNAREs are linked to early docking or participate later in the fusion process by facilitating hemifusion.
In studies of molecular and cellular biology it is often desirable to artificially induce fusion. The addition of polyethylene glycol (PEG) causes fusion without significant aggregation or biochemical disruption. This procedure is now used extensively, for example by fusing B-cells with myeloma cells. The resulting “hybridoma” from this combination expresses a desired antibody as determined by the B-cell involved, but is immortalized due to the myeloma component. Fusion can also be artificially induced through electroporation in a process known as electrofusion. It is believed that this phenomenon results from the energetically active edges formed during electroporation, which can act as the local defect point to nucleate stalk growth between two bilayers.
Model systems
Lipid bilayers can be created artificially in the lab to allow researchers to perform experiments that cannot be done with natural bilayers. They can also be used in the field of Synthetic Biology, to define the boundaries of artificial cells. These synthetic systems are called model lipid bilayers. There are many different types of model bilayers, each having experimental advantages and disadvantages. They can be made with either synthetic or natural lipids. Among the most common model systems are:
Black lipid membranes (BLM)
Supported lipid bilayers (SLB)
Vesicles
Droplet Interface Bilayers (DIBs)
Commercial applications
To date, the most successful commercial application of lipid bilayers has been the use of liposomes for drug delivery, especially for cancer treatment. (The term “liposome” is essentially synonymous with “vesicle”, except that “vesicle” is a general term for the structure, whereas “liposome” refers only to artificial, not natural, vesicles.) The basic idea of liposomal drug delivery is that the drug is encapsulated in solution inside the liposome then injected into the patient. These drug-loaded liposomes travel through the system until they bind at the target site and rupture, releasing the drug. In theory, liposomes should make an ideal drug delivery system since they can isolate nearly any hydrophilic drug, can be grafted with molecules to target specific tissues and can be relatively non-toxic since the body possesses biochemical pathways for degrading lipids.
The first generation of drug delivery liposomes had a simple lipid composition and suffered from several limitations. Circulation in the bloodstream was extremely limited due to both renal clearing and phagocytosis. Refinement of the lipid composition to tune fluidity, surface charge density, and surface hydration resulted in vesicles that adsorb fewer proteins from serum and thus are less readily recognized by the immune system. The most significant advance in this area was the grafting of polyethylene glycol (PEG) onto the liposome surface to produce “stealth” vesicles, which circulate over long times without immune or renal clearing.
The first stealth liposomes were passively targeted at tumor tissues. Because tumors induce rapid and uncontrolled angiogenesis they are especially “leaky” and allow liposomes to exit the bloodstream at a much higher rate than normal tissue would. More recently work has been undertaken to graft antibodies or other molecular markers onto the liposome surface in the hope of actively binding them to a specific cell or tissue type. Some examples of this approach are already in clinical trials.
Another potential application of lipid bilayers is the field of biosensors. Since the lipid bilayer is the barrier between the interior and exterior of the cell, it is also the site of extensive signal transduction. Researchers over the years have tried to harness this potential to develop a bilayer-based device for clinical diagnosis or bioterrorism detection. Progress has been slow in this area and, although a few companies have developed automated lipid-based detection systems, they are still targeted at the research community. These include Biacore (now GE Healthcare Life Sciences), which offers a disposable chip for utilizing lipid bilayers in studies of binding kinetics and Nanion Inc., which has developed an automated patch clamping system.
A supported lipid bilayer (SLB) as described above has achieved commercial success as a screening technique to measure the permeability of drugs. This parallel artificial membrane permeability assay (PAMPA) technique measures the permeability across specifically formulated lipid cocktail(s) found to be highly correlated with Caco-2 cultures, the gastrointestinal tract, blood–brain barrier and skin.
History
By the early twentieth century scientists had come to believe that cells are surrounded by a thin oil-like barrier, but the structural nature of this membrane was not known. Two experiments in 1925 laid the groundwork to fill in this gap. By measuring the capacitance of erythrocyte solutions, Hugo Fricke determined that the cell membrane was 3.3 nm thick.
Although the results of this experiment were accurate, Fricke misinterpreted the data to mean that the cell membrane is a single molecular layer. Prof. Dr. Evert Gorter (1881–1954) and F. Grendel of Leiden University approached the problem from a different perspective, spreading the erythrocyte lipids as a monolayer on a Langmuir-Blodgett trough. When they compared the area of the monolayer to the surface area of the cells, they found a ratio of two to one. Later analyses showed several errors and incorrect assumptions with this experiment but, serendipitously, these errors canceled out and from this flawed data Gorter and Grendel drew the correct conclusion: that the cell membrane is a lipid bilayer.
This theory was confirmed through the use of electron microscopy in the late 1950s. Although he did not publish the first electron microscopy study of lipid bilayers, J. David Robertson was the first to assert that the two dark electron-dense bands were the headgroups and associated proteins of two apposed lipid monolayers. In this body of work, Robertson put forward the concept of the “unit membrane.” This was the first time the bilayer structure had been universally assigned to all cell membranes as well as organelle membranes.
Around the same time, the development of model membranes confirmed that the lipid bilayer is a stable structure that can exist independent of proteins. By “painting” a solution of lipid in organic solvent across an aperture, Mueller and Rudin were able to create an artificial bilayer and determine that this exhibited lateral fluidity, high electrical resistance and self-healing in response to puncture, all of which are properties of a natural cell membrane. A few years later, Alec Bangham showed that bilayers, in the form of lipid vesicles, could also be formed simply by exposing a dried lipid sample to water. This demonstrated that lipid bilayers form spontaneously via self-assembly and do not require a patterned support structure. In 1977, a totally synthetic bilayer membrane was prepared by Kunitake and Okahata from a single organic compound, didodecyldimethylammonium bromide. This showed that the bilayer membrane was assembled by intermolecular forces.
| Biology and health sciences | Cell parts | Biology |
158400 | https://en.wikipedia.org/wiki/Sepsis | Sepsis | Sepsis is a potentially life-threatening condition that arises when the body's response to infection causes injury to its own tissues and organs.
This initial stage of sepsis is followed by suppression of the immune system. Common signs and symptoms include fever, increased heart rate, increased breathing rate, and confusion. There may also be symptoms related to a specific infection, such as a cough with pneumonia, or painful urination with a kidney infection. The very young, old, and people with a weakened immune system may not have any symptoms that are specific to their infection, and their body temperature may be low or normal instead of constituting a fever. Severe sepsis causes poor organ function or blood flow. The presence of low blood pressure, high blood lactate, or low urine output may suggest poor blood flow. Septic shock is low blood pressure due to sepsis that does not improve after fluid replacement.
Sepsis is caused by many organisms including bacteria, viruses, and fungi. Common locations for the primary infection include the lungs, brain, urinary tract, skin, and abdominal organs. Risk factors include being very young or old, a weakened immune system from conditions such as cancer or diabetes, major trauma, and burns. Previously, a sepsis diagnosis required the presence of at least two systemic inflammatory response syndrome (SIRS) criteria in the setting of presumed infection. In 2016, a shortened sequential organ failure assessment score (SOFA score), known as the quick SOFA score (qSOFA), replaced the SIRS system of diagnosis. qSOFA criteria for sepsis include at least two of the following three: increased breathing rate, change in the level of consciousness, and low blood pressure. Sepsis guidelines recommend obtaining blood cultures before starting antibiotics; however, the diagnosis does not require the blood to be infected. Medical imaging is helpful when looking for the possible location of the infection. Other potential causes of similar signs and symptoms include anaphylaxis, adrenal insufficiency, low blood volume, heart failure, and pulmonary embolism.
Sepsis requires immediate treatment with intravenous fluids and antimicrobial medications. Ongoing care and stabilization often continues in an intensive care unit. If an adequate trial of fluid replacement is not enough to maintain blood pressure, then the use of medications that raise blood pressure becomes necessary. Mechanical ventilation and dialysis may be needed to support the function of the lungs and kidneys, respectively. A central venous catheter and an arterial catheter may be placed for access to the bloodstream and to guide treatment. Other helpful measurements include cardiac output and superior vena cava oxygen saturation. People with sepsis need preventive measures for deep vein thrombosis, stress ulcers, and pressure ulcers unless other conditions prevent such interventions. Some people might benefit from tight control of blood sugar levels with insulin. The use of corticosteroids is controversial, with some reviews finding benefit, and others not.
Disease severity partly determines the outcome. The risk of death from sepsis is as high as 30%, while for severe sepsis it is as high as 50%, and the risk of death from septic shock is 80%. Sepsis affected about 49 million people in 2017, with 11 million deaths (1 in 5 deaths worldwide). In the developed world, approximately 0.2 to 3 people per 1000 are affected by sepsis yearly, resulting in about a million cases per year in the United States. Rates of disease have been increasing. Some data indicate that sepsis is more common among males than females, however, other data show a greater prevalence of the disease among women. Descriptions of sepsis date back to the time of Hippocrates.
Signs and symptoms
In addition to symptoms related to the actual cause, people with sepsis may have a fever, low body temperature, rapid breathing, a fast heart rate, confusion, and edema. Early signs include a rapid heart rate, decreased urination, and high blood sugar. Signs of established sepsis include confusion, metabolic acidosis (which may be accompanied by a faster breathing rate that leads to respiratory alkalosis), low blood pressure due to decreased systemic vascular resistance, higher cardiac output, and disorders in blood-clotting that may lead to organ failure. Fever is the most common presenting symptom in sepsis, but fever may be absent in some people such as the elderly or those who are immunocompromised.
The drop in blood pressure seen in sepsis can cause lightheadedness and is part of the criteria for septic shock.
Oxidative stress is observed in septic shock, with circulating levels of copper and vitamin C being decreased.
Diastolic blood pressure falls during the early stages of sepsis, causing a widening/increasing of pulse pressure, which is the difference between the systolic and diastolic blood pressures. If sepsis becomes severe and hemodynamic compromise advances, the systolic pressure also decreases, causing a narrowing/decreasing of pulse pressure. A pulse pressure of over 70 mmHg in patients with sepsis is correlated with an increased chance of survival. A widened pulse pressure is also correlated with an increased chance that someone with sepsis will benefit from and respond to IV fluids.
Cause
Infections leading to sepsis are usually bacterial but may be fungal, parasitic, or viral. Gram-positive bacteria were the primary cause of sepsis before the introduction of antibiotics in the 1950s. After the introduction of antibiotics, gram-negative bacteria became the predominant cause of sepsis from the 1960s to the 1980s. After the 1980s, gram-positive bacteria, most commonly staphylococci, are thought to cause more than 50% of cases of sepsis. Other commonly implicated bacteria include Streptococcus pyogenes, Escherichia coli, Pseudomonas aeruginosa, and Klebsiella species. Fungal sepsis accounts for approximately 5% of severe sepsis and septic shock cases; the most common cause of fungal sepsis is an infection by Candida species of yeast, a frequent hospital-acquired infection. The most common causes for parasitic sepsis are Plasmodium (which leads to malaria), Schistosoma and Echinococcus.
The most common sites of infection resulting in severe sepsis are the lungs, the abdomen, and the urinary tract. Typically, 50% of all sepsis cases start as an infection in the lungs. In one-third to one-half of cases, the source of infection is unclear.
Pathophysiology
Sepsis is caused by a combination of factors related to the particular invading pathogen(s) and the status of the immune system of the host. The early phase of sepsis characterized by excessive inflammation (sometimes resulting in a cytokine storm) may be followed by a prolonged period of decreased functioning of the immune system. Either of these phases may prove fatal. In contrast, systemic inflammatory response syndrome (SIRS) occurs in people without the presence of infection, for example, in those with burns, polytrauma, or the initial state in pancreatitis and chemical pneumonitis. However, sepsis also produces a response similar to SIRS.
Microbial factors
Bacterial virulence factors, such as glycocalyx and various adhesins, allow colonization, immune evasion, and establishment of disease in the host. Sepsis caused by gram-negative bacteria is thought to be largely due to a response by the host to the lipid A component of lipopolysaccharide, also called endotoxin. Sepsis caused by gram-positive bacteria may result from an immunological response to cell wall lipoteichoic acid. Bacterial exotoxins that act as superantigens also may cause sepsis. Superantigens simultaneously bind major histocompatibility complex and T-cell receptors in the absence of antigen presentation. This forced receptor interaction induces the production of pro-inflammatory chemical signals (cytokines) by T-cells.
There are a number of microbial factors that may cause the typical septic inflammatory cascade. An invading pathogen is recognized by its pathogen-associated molecular patterns (PAMPs). Examples of PAMPs include lipopolysaccharides and flagellin in gram-negative bacteria, muramyl dipeptide in the peptidoglycan of the gram-positive bacterial cell wall, and CpG bacterial DNA. These PAMPs are recognized by the pattern recognition receptors (PRRs) of the innate immune system, which may be membrane-bound or cytosolic. There are four families of PRRs: the toll-like receptors, the C-type lectin receptors, the NOD-like receptors, and the RIG-I-like receptors. Invariably, the association of a PAMP and a PRR will cause a series of intracellular signalling cascades. Consequently, transcription factors such as nuclear factor-kappa B and activator protein-1 will up-regulate the expression of pro-inflammatory and anti-inflammatory cytokines.
Host factors
Upon detection of microbial antigens, the host systemic immune system is activated. Immune cells not only recognise pathogen-associated molecular patterns but also damage-associated molecular patterns from damaged tissues. An uncontrolled immune response is then activated because leukocytes are not recruited to the specific site of infection, but instead, they are recruited all over the body. Then, an immunosuppression state ensues when the proinflammatory T helper cell 1 (TH1) is shifted to TH2, mediated by interleukin 10, which is known as "compensatory anti-inflammatory response syndrome". The apoptosis (cell death) of lymphocytes further worsens the immunosuppression. Neutrophils, monocytes, macrophages, dendritic cells, CD4+ T cells, and B cells all undergo apoptosis, whereas regulatory T cells are more apoptosis-resistant. Subsequently, multiple organ failure ensues because tissues are unable to use oxygen efficiently due to inhibition of cytochrome c oxidase.
Inflammatory responses cause multiple organ dysfunction syndrome through various mechanisms as described below. Increased permeability of the lung vessels causes leaking of fluids into alveoli, which results in pulmonary edema and acute respiratory distress syndrome (ARDS). Impaired utilization of oxygen in the liver impairs bile salt transport, causing jaundice (yellowish discoloration of the skin). In kidneys, inadequate oxygenation results in tubular epithelial cell injury (of the cells lining the kidney tubules), and thus causes acute kidney injury (AKI). Meanwhile, in the heart, impaired calcium transport, and low production of adenosine triphosphate (ATP), can cause myocardial depression, reducing cardiac contractility and causing heart failure. In the gastrointestinal tract, increased permeability of the mucosa alters the microflora, causing mucosal bleeding and paralytic ileus. In the central nervous system, direct damage of the brain cells and disturbances of neurotransmissions causes altered mental status. Cytokines such as tumor necrosis factor, interleukin 1, and interleukin 6 may activate procoagulation factors in the cells lining blood vessels, leading to endothelial damage. The damaged endothelial surface inhibits anticoagulant properties as well as increases antifibrinolysis, which may lead to intravascular clotting, the formation of blood clots in small blood vessels, and multiple organ failure.
The low blood pressure seen in those with sepsis is the result of various processes, including excessive production of chemicals that dilate blood vessels such as nitric oxide, a deficiency of chemicals that constrict blood vessels such as vasopressin, and activation of ATP-sensitive potassium channels. In those with severe sepsis and septic shock, this sequence of events leads to a type of circulatory shock known as distributive shock.
Diagnosis
Early diagnosis is necessary to properly manage sepsis, as the initiation of rapid therapy is key to reducing deaths from severe sepsis. Some hospitals use alerts generated from electronic health records to bring attention to potential cases as early as possible.
Within the first three hours of suspected sepsis, diagnostic studies should include white blood cell counts, measuring serum lactate, and obtaining appropriate cultures before starting antibiotics, so long as this does not delay their use by more than 45 minutes. To identify the causative organism(s), at least two sets of blood cultures using bottles with media for aerobic and anaerobic organisms are necessary. At least one should be drawn through the skin and one through each vascular access device (such as an IV catheter) that has been in place for more than 48 hours. Bacteria are present in the blood in only about 30% of cases. Another possible method of detection is by polymerase chain reaction. If other sources of infection are suspected, cultures of these sources, such as urine, cerebrospinal fluid, wounds, or respiratory secretions, also should be obtained, as long as this does not delay the use of antibiotics.
Within six hours, if blood pressure remains low despite initial fluid resuscitation of 30 mL/kg, or if the initial lactate is ≥ 4 mmol/L (36 mg/dL), central venous pressure and central venous oxygen saturation should be measured. Lactate should be re-measured if the initial lactate was elevated. Evidence for point of care lactate measurement over usual methods of measurement, however, is poor.
Within twelve hours, it is essential to diagnose or exclude any source of infection that would require emergent source control, such as a necrotizing soft tissue infection, an infection causing inflammation of the abdominal cavity lining, an infection of the bile duct, or an intestinal infarction. A pierced internal organ (free air on an abdominal X-ray or CT scan), an abnormal chest X-ray consistent with pneumonia (with focal opacification), or petechiae, purpura, or purpura fulminans may indicate the presence of an infection.
Definitions
Previously, SIRS criteria had been used to define sepsis. If the SIRS criteria are negative, it is very unlikely the person has sepsis; if they are positive, there is just a moderate probability that the person has sepsis. According to SIRS, there were different levels of sepsis: sepsis, severe sepsis, and septic shock. The definition of SIRS is shown below:
SIRS is the presence of two or more of the following: abnormal body temperature, heart rate, respiratory rate, or blood gas, and white blood cell count.
Sepsis is defined as SIRS in response to an infectious process.
Severe sepsis is defined as sepsis with sepsis-induced organ dysfunction or tissue hypoperfusion (manifesting as hypotension, elevated lactate, or decreased urine output). Severe sepsis is an infectious disease state associated with multiple organ dysfunction syndrome (MODS).
Septic shock is severe sepsis plus persistently low blood pressure, despite the administration of intravenous fluids.
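The SIRS-based ladder above can be summarized in a short sketch; because the individual SIRS thresholds are not spelled out here, the function simply takes the number of criteria met as an input. This is an illustrative sketch, not a clinical tool.

```python
# Minimal sketch of the SIRS-based severity ladder described above.
# The individual SIRS thresholds are not listed here, so the caller supplies
# how many criteria are met; illustrative only, not a clinical tool.

def sepsis_category(sirs_criteria_met: int,
                    suspected_infection: bool,
                    organ_dysfunction_or_hypoperfusion: bool,
                    hypotension_despite_fluids: bool) -> str:
    if sirs_criteria_met >= 2 and not suspected_infection:
        return "SIRS (non-infectious)"
    if sirs_criteria_met < 2 or not suspected_infection:
        return "does not meet SIRS-based sepsis definition"
    if hypotension_despite_fluids:
        return "septic shock"
    if organ_dysfunction_or_hypoperfusion:
        return "severe sepsis"
    return "sepsis"

print(sepsis_category(3, True, True, False))  # -> "severe sepsis"
```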
In 2016 a new consensus was reached to replace screening by systemic inflammatory response syndrome (SIRS) with the sequential organ failure assessment (SOFA score) and the abbreviated version (qSOFA). The three criteria for the qSOFA score include a respiratory rate greater than or equal to 22 breaths per minute, systolic blood pressure 100 mmHg or less, and altered mental status. Sepsis is suspected when 2 of the qSOFA criteria are met. The SOFA score was intended to be used in the intensive care unit (ICU) where it is administered upon admission to the ICU and then repeated every 48 hours, whereas the qSOFA could be used outside the ICU. Some advantages of the qSOFA score are that it can be administered quickly and does not require labs. However, the American College of Chest Physicians (CHEST) raised concerns that qSOFA and SOFA criteria may lead to delayed diagnosis of serious infection, leading to delayed treatment. Although SIRS criteria can be too sensitive and not specific enough in identifying sepsis, SOFA also has its limitations and is not intended to replace the SIRS definition. qSOFA has also been found to be poorly sensitive though decently specific for the risk of death, with SIRS possibly better for screening. Note that the Surviving Sepsis Campaign 2021 guidelines recommend "against using qSOFA compared with SIRS, NEWS, or MEWS as a single screening tool for sepsis or septic shock".
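Since the qSOFA thresholds are stated explicitly above, they translate directly into a short scoring sketch; again, this is illustrative only and not a clinical decision aid.

```python
# Minimal sketch of the qSOFA screen described above: one point each for
# respiratory rate >= 22/min, systolic BP <= 100 mmHg, and altered mental
# status; a score of 2 or more raises suspicion for sepsis. Illustrative only.

def qsofa_score(respiratory_rate: float, systolic_bp: float, altered_mental_status: bool) -> int:
    score = 0
    if respiratory_rate >= 22:
        score += 1
    if systolic_bp <= 100:
        score += 1
    if altered_mental_status:
        score += 1
    return score

score = qsofa_score(respiratory_rate=24, systolic_bp=95, altered_mental_status=False)
print(score, "-> suspect sepsis" if score >= 2 else "-> below qSOFA threshold")
```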
End-organ dysfunction
Examples of end-organ dysfunction include the following:
Lungs: acute respiratory distress syndrome (ARDS) (PaO2/FiO2 ratio < 300), different ratio in pediatric acute respiratory distress syndrome
Brain: encephalopathy symptoms including agitation, confusion, and coma; causes may include ischemia, bleeding, formation of blood clots in small blood vessels, microabscesses, multifocal necrotizing leukoencephalopathy
Liver: disruption of protein synthetic function manifests acutely as progressive disruption of blood clotting due to an inability to synthesize clotting factors; disruption of metabolic functions leads to impaired bilirubin metabolism, resulting in elevated unconjugated serum bilirubin levels
Kidney: low urine output or no urine output, electrolyte abnormalities, or volume overload
Heart: systolic and diastolic heart failure, likely due to chemical signals that depress myocyte function, and cellular damage, manifesting as a troponin leak (although not necessarily ischemic in nature)
More specific definitions of end-organ dysfunction exist for SIRS in pediatrics.
Cardiovascular dysfunction (after fluid resuscitation with at least 40 mL/kg of crystalloid)
hypotension with blood pressure < 5th percentile for age or systolic blood pressure < 2 standard deviations below normal for age, or
vasopressor requirement, or
Two of the following criteria:
unexplained metabolic acidosis with base deficit > 5 mEq/L
lactic acidosis: serum lactate 2 times the upper limit of normal
oliguria (low urine output)
prolonged capillary refill > 5 seconds
core to peripheral temperature difference
Respiratory dysfunction (in the absence of a cyanotic heart defect or a known chronic respiratory disease)
the ratio of the arterial partial pressure of oxygen to the fraction of oxygen in the gases inspired (PaO2/FiO2) < 300 (the definition of acute lung injury), or
arterial partial pressure of carbon dioxide (PaCO2) > 65 torr (20 mmHg) over baseline PaCO2 (evidence of hypercapnic respiratory failure), or
supplemental oxygen requirement of greater than FiO2 0.5 to maintain oxygen saturation ≥ 92%
Neurologic dysfunction
Glasgow Coma Score (GCS) ≤ 11, or
altered mental status with a drop in GCS of 3 or more points in a person with developmental delay/intellectual disability
Hematologic dysfunction
low platelet count, or a 50% drop in platelet count from the maximum value in those who are chronically thrombocytopenic, or
international normalized ratio (INR) > 2
Disseminated intravascular coagulation
Kidney dysfunction
serum creatinine ≥ 2 times the upper limit of normal for age or 2-fold increase in baseline creatinine in people with chronic kidney disease
Liver dysfunction (only applicable to infants > 1 month)
total serum bilirubin ≥ 4 mg/dL, or
alanine aminotransferase (ALT) ≥ 2 times the upper limit of normal
Consensus definitions, however, continue to evolve, with the latest expanding the list of signs and symptoms of sepsis to reflect clinical bedside experience.
Biomarkers
Biomarkers can help with diagnosis because they can point to the presence or severity of sepsis, although their exact role in the management of sepsis remains undefined. A 2013 review concluded moderate-quality evidence exists to support the use of the procalcitonin level as a method to distinguish sepsis from non-infectious causes of SIRS. The same review found the sensitivity of the test to be 77% and the specificity to be 79%. The authors suggested that procalcitonin may serve as a helpful diagnostic marker for sepsis, but cautioned that its level alone does not definitively make the diagnosis. More current literature recommends utilizing the PCT to direct antibiotic therapy for improved antibiotic stewardship and better patient outcomes.
A 2012 systematic review found that soluble urokinase-type plasminogen activator receptor (SuPAR) is a nonspecific marker of inflammation and does not accurately diagnose sepsis. This same review concluded, however, that SuPAR has prognostic value, as higher SuPAR levels are associated with an increased rate of death in those with sepsis. Serial measurement of lactate levels (approximately every 4 to 6 hours) may guide treatment and is associated with lower mortality in sepsis.
Differential diagnosis
The differential diagnosis for sepsis is broad and has to examine (to exclude) the non-infectious conditions that may cause the systemic signs of SIRS: alcohol withdrawal, acute pancreatitis, burns, pulmonary embolism, thyrotoxicosis, anaphylaxis, adrenal insufficiency, and neurogenic shock. Hyperinflammatory syndromes such as hemophagocytic lymphohistiocytosis (HLH) may have similar symptoms and are on the differential diagnosis.
Neonatal sepsis
In common clinical usage, neonatal sepsis refers to a bacterial blood stream infection in the first month of life, such as meningitis, pneumonia, pyelonephritis, or gastroenteritis, but neonatal sepsis also may be due to infection with fungi, viruses, or parasites. Criteria with regard to hemodynamic compromise or respiratory failure are not useful because they present too late for intervention.
Management
Early recognition and focused management may improve the outcomes of sepsis. Current professional recommendations include several actions ("bundles") to be followed as soon as possible after diagnosis. Within the first three hours, someone with sepsis should have received antibiotics, and intravenous fluids if there is evidence of either low blood pressure or other evidence for inadequate blood supply to organs (as evidenced by a raised level of lactate); blood cultures also should be obtained within this period. After six hours the blood pressure should be adequate, close monitoring of blood pressure and blood supply to organs should be in place, and the lactate should be measured again if initially it was raised. A related bundle, the "Sepsis Six", is in widespread use in the United Kingdom; this requires the administration of antibiotics within an hour of recognition, blood cultures, lactate, and hemoglobin determination, urine output monitoring, high-flow oxygen, and intravenous fluids.
Apart from the timely administration of fluids and antibiotics, the management of sepsis also involves surgical drainage of infected fluid collections and appropriate support for organ dysfunction. This may include hemodialysis in kidney failure, mechanical ventilation in lung dysfunction, transfusion of blood products, and drug and fluid therapy for circulatory failure. Ensuring adequate nutrition—preferably by enteral feeding, but if necessary, by parenteral nutrition—is important during prolonged illness. Medication to prevent deep vein thrombosis and gastric ulcers also may be used.
Antibiotics
Two sets of blood cultures (aerobic and anaerobic) are recommended without delaying the initiation of antibiotics. Cultures from other sites such as respiratory secretions, urine, wounds, cerebrospinal fluid, and catheter insertion sites (in situ for more than 48 hours) are recommended if infections from these sites are suspected. In severe sepsis and septic shock, broad-spectrum antibiotics (usually two, a β-lactam antibiotic with broad coverage, or broad-spectrum carbapenem combined with fluoroquinolones, macrolides, or aminoglycosides) are recommended. The choice of antibiotics is important in determining the survival of the person. Some recommend they be given within one hour of making the diagnosis, stating that for every hour of delay in the administration of antibiotics, there is an associated 6% rise in mortality. Others did not find a benefit with early administration.
Several factors determine the most appropriate choice for the initial antibiotic regimen. These factors include local patterns of bacterial sensitivity to antibiotics, whether the infection is thought to be a hospital or community-acquired infection, and which organ systems are thought to be infected. Antibiotic regimens should be reassessed daily and narrowed if appropriate. Treatment duration is typically 7–10 days with the type of antibiotic used directed by the results of cultures. If the culture result is negative, antibiotics should be de-escalated according to the person's clinical response or stopped altogether if an infection is not present, to decrease the chances that the person is infected with multidrug-resistant organisms. For people at high risk of infection with multidrug-resistant organisms such as Pseudomonas aeruginosa or Acinetobacter baumannii, the addition of an antibiotic specific to the gram-negative organism is recommended. For methicillin-resistant Staphylococcus aureus (MRSA), vancomycin or teicoplanin is recommended. For Legionella infection, the addition of a macrolide or a fluoroquinolone is chosen. If fungal infection is suspected, an echinocandin, such as caspofungin or micafungin, is chosen for people with severe sepsis, followed by a triazole (fluconazole or itraconazole) for less ill people. Prolonged antibiotic prophylaxis is not recommended in people who have SIRS without an infectious origin, such as acute pancreatitis or burns, unless sepsis is suspected.
Once-daily dosing of aminoglycosides is sufficient to achieve peak plasma concentration for a clinical response without kidney toxicity. Meanwhile, for antibiotics with a low volume of distribution (vancomycin, teicoplanin, colistin), a loading dose is required to achieve an adequate therapeutic level to fight infections. Frequent infusions of beta-lactam antibiotics, without exceeding the total daily dose, help to keep the antibiotic level above the minimum inhibitory concentration (MIC), thus providing a better clinical response. Giving beta-lactam antibiotics continuously may be better than giving them intermittently. Access to therapeutic drug monitoring is important to ensure an adequate therapeutic drug level while at the same time preventing the drug from reaching a toxic level.
Intravenous fluids
The Surviving Sepsis Campaign has recommended 30 mL/kg of fluid to be given in adults in the first three hours followed by fluid titration according to blood pressure, urine output, respiratory rate, and oxygen saturation with a target mean arterial pressure (MAP) of 65 mmHg. In children an initial amount of 20 mL/kg is reasonable in shock. In cases of severe sepsis and septic shock where a central venous catheter is used to measure blood pressures dynamically, fluids should be administered until the central venous pressure reaches 8–12 mmHg. Once these goals are met, the central venous oxygen saturation (ScvO2), i.e., the oxygen saturation of venous blood as it returns to the heart as measured at the vena cava, is optimized. If the ScvO2 is less than 70%, blood may be given to reach a hemoglobin of 10 g/dL and then inotropes are added until the ScvO2 is optimized. In those with acute respiratory distress syndrome (ARDS) and sufficient tissue blood fluid, more fluids should be given carefully.
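The weight-based boluses above amount to simple arithmetic; for example, 30 mL/kg for a 70 kg adult is about 2.1 L. A minimal sketch:

```python
# Minimal sketch of the weight-based boluses mentioned above:
# 30 mL/kg for adults in the first three hours, 20 mL/kg initially in children.
# Purely arithmetic illustration, not dosing guidance.

def initial_bolus_ml(weight_kg: float, is_child: bool = False) -> float:
    return weight_kg * (20.0 if is_child else 30.0)

print(initial_bolus_ml(70))                  # 2100 mL (~2.1 L) for a 70 kg adult
print(initial_bolus_ml(20, is_child=True))   # 400 mL for a 20 kg child
```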
Crystalloid solution is recommended as the fluid of choice for resuscitation. Albumin can be used if a large amount of crystalloid is required for resuscitation. Crystalloid solutions show little difference from hydroxyethyl starch in terms of risk of death. Starches also carry an increased risk of acute kidney injury and need for blood transfusion. Various colloid solutions (such as modified gelatin) carry no advantage over crystalloid. Albumin also appears to be of no benefit over crystalloids.
Blood products
The Surviving Sepsis Campaign recommended packed red blood cell transfusion for hemoglobin levels below 70 g/L if there is no myocardial ischemia, hypoxemia, or acute bleeding. In a 2014 trial, blood transfusions to keep target hemoglobin above 70 or 90 g/L did not make any difference to survival rates; meanwhile, those with a lower threshold of transfusion received fewer transfusions in total. Erythropoietin is not recommended in the treatment of anemia with septic shock because it may precipitate blood clotting events. Fresh frozen plasma transfusion usually does not correct the underlying clotting abnormalities before a planned surgical procedure. However, platelet transfusion is suggested for platelet counts below 10 × 10⁹/L without any risk of bleeding, below 20 × 10⁹/L with a high risk of bleeding, or below 50 × 10⁹/L with active bleeding or before planned surgery or an invasive procedure. IV immunoglobulin is not recommended because its beneficial effects are uncertain. Monoclonal and polyclonal preparations of intravenous immunoglobulin (IVIG) do not lower the rate of death in newborns and adults with sepsis. Evidence for the use of IgM-enriched polyclonal preparations of IVIG is inconsistent. On the other hand, the use of antithrombin to treat disseminated intravascular coagulation is also not useful. Meanwhile, blood purification techniques (such as hemoperfusion, plasma filtration, and coupled plasma filtration adsorption) intended to remove inflammatory mediators and bacterial toxins from the blood also do not demonstrate any survival benefit for septic shock.
Vasopressors
If the person has been sufficiently fluid resuscitated but the mean arterial pressure is not greater than 65 mmHg, vasopressors are recommended. Norepinephrine (noradrenaline) is recommended as the initial choice. Delaying initiation of vasopressor therapy during septic shock is associated with increased mortality.
Norepinephrine is often used as a first-line treatment for hypotensive septic shock; it raises blood pressure through a vasoconstriction effect, with little effect on stroke volume and heart rate. In some people, the required dose of vasopressor needed to increase the mean arterial pressure can become exceedingly high, to the point of toxicity. To reduce the required dose of vasopressor, epinephrine may be added. Epinephrine is not often used as a first-line treatment for hypotensive shock because it reduces blood flow to the abdominal organs and increases lactate levels. Vasopressin can be used in septic shock because studies have shown that there is a relative deficiency of vasopressin when shock continues for 24 to 48 hours. However, vasopressin reduces blood flow to the heart, fingers/toes, and abdominal organs, resulting in a lack of oxygen supply to these tissues. Dopamine is typically not recommended. Although dopamine is useful for increasing the stroke volume of the heart, it causes more abnormal heart rhythms than norepinephrine and also has an immunosuppressive effect. Dopamine is not proven to have protective properties on the kidneys. Dobutamine can also be used in hypotensive septic shock to increase cardiac output and correct blood flow to the tissues. Dobutamine is not used as often as epinephrine due to its associated side effects, which include reducing blood flow to the gut. Additionally, dobutamine increases the cardiac output by abnormally increasing the heart rate.
Steroids
The use of steroids in sepsis is controversial. Studies do not give a clear picture as to whether and when glucocorticoids should be used. The 2016 Surviving Sepsis Campaign recommends low dose hydrocortisone only if both intravenous fluids and vasopressors are not able to adequately treat septic shock. The 2021 Surviving Sepsis Campaign recommends IV corticosteroids for adults with septic shock who have an ongoing requirement for vasopressor therapy. A 2019 Cochrane review found low-quality evidence of benefit, as did two 2019 reviews.
During critical illness, a state of adrenal insufficiency and tissue resistance to corticosteroids may occur. This has been termed critical illness–related corticosteroid insufficiency. Treatment with corticosteroids might be most beneficial in those with septic shock and early severe ARDS, whereas its role in others such as those with pancreatitis or severe pneumonia is unclear. However, the exact way of determining corticosteroid insufficiency remains problematic. It should be suspected in those poorly responding to resuscitation with fluids and vasopressors. Neither ACTH stimulation testing nor random cortisol levels are recommended to confirm the diagnosis. The method of stopping glucocorticoid drugs is variable, and it is unclear whether they should be slowly decreased or simply abruptly stopped. However, the 2016 Surviving Sepsis Campaign recommended tapering steroids when vasopressors are no longer needed.
Anesthesia
A target tidal volume of 6 mL/kg of predicted body weight (PBW) and a plateau pressure less than 30 cm H2O is recommended for those who require ventilation due to sepsis-induced severe ARDS. High positive end expiratory pressure (PEEP) is recommended for moderate to severe ARDS in sepsis as it opens more lung units for oxygen exchange. Predicted body weight is calculated based on sex and height, and tools for this are available. Recruitment maneuvers may be necessary for severe ARDS by briefly raising the transpulmonary pressure. It is recommended that the head of the bed be raised if possible to improve ventilation. However, β2 adrenergic receptor agonists are not recommended to treat ARDS because they may reduce survival rates and precipitate abnormal heart rhythms. A spontaneous breathing trial using continuous positive airway pressure (CPAP), T piece, or inspiratory pressure augmentation can help reduce the duration of ventilation. Minimizing intermittent or continuous sedation helps reduce the duration of mechanical ventilation.
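The text notes that predicted body weight is calculated from sex and height but does not give the formula; the sketch below assumes the widely used ARDSNet formula (PBW = 50 + 0.91 × (height in cm − 152.4) for males, and 45.5 + 0.91 × (height in cm − 152.4) for females), which should be verified before use, and applies the 6 mL/kg target stated above.

```python
# Minimal sketch of a lung-protective tidal volume at 6 mL/kg of predicted
# body weight (PBW). The text does not give the PBW formula; the ARDSNet
# formula used below is an assumption (widely cited, but verify before relying on it).

def predicted_body_weight_kg(height_cm: float, male: bool) -> float:
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def target_tidal_volume_ml(height_cm: float, male: bool, ml_per_kg: float = 6.0) -> float:
    return ml_per_kg * predicted_body_weight_kg(height_cm, male)

print(round(target_tidal_volume_ml(175, male=True)))  # ~423 mL for a 175 cm male
```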
General anesthesia is recommended for people with sepsis who require surgical procedures to remove the infective source. Usually, inhalational and intravenous anesthetics are used. Requirements for anesthetics may be reduced in sepsis. Inhalational anesthetics can reduce the level of proinflammatory cytokines, altering leukocyte adhesion and proliferation, inducing apoptosis (cell death) of the lymphocytes, possibly with a toxic effect on mitochondrial function. Although etomidate has a minimal effect on the cardiovascular system, it is often not recommended as a medication to help with intubation in this situation due to concerns it may lead to poor adrenal function and an increased risk of death. The small amount of evidence there is, however, has not found a change in the risk of death with etomidate.
Paralytic agents are not suggested for use in sepsis cases in the absence of ARDS, as a growing body of evidence points to reduced durations of mechanical ventilation, ICU and hospital stays. However, paralytic use in ARDS cases remains controversial. When appropriately used, paralytics may aid successful mechanical ventilation, however, evidence has also suggested that mechanical ventilation in severe sepsis does not improve oxygen consumption and delivery.
Source control
Source control refers to physical interventions to control a focus of infection and reduce conditions favorable to microorganism growth or host defense impairment, such as drainage of pus from an abscess. It is one of the oldest procedures for the control of infections, giving rise to the Latin phrase Ubi pus, ibi evacua, and remains important despite the emergence of more modern treatments.
Early goal directed therapy
Early goal directed therapy (EGDT) is an approach to the management of severe sepsis during the initial 6 hours after diagnosis. It is a step-wise approach, with the physiologic goal of optimizing cardiac preload, afterload, and contractility. It includes giving early antibiotics. EGDT also involves monitoring of hemodynamic parameters and specific interventions to achieve key resuscitation targets which include maintaining a central venous pressure between 8–12 mmHg, a mean arterial pressure of between 65 and 90 mmHg, a central venous oxygen saturation (ScvO2) greater than 70% and a urine output of greater than 0.5 mL/kg/hour. The goal is to optimize oxygen delivery to tissues and achieve a balance between systemic oxygen delivery and demand. An appropriate decrease in serum lactate may be equivalent to ScvO2 and easier to obtain.
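The resuscitation targets listed above can be expressed as a simple check; the sketch below is illustrative only, and the parameter names are hypothetical.

```python
# Minimal sketch checking the EGDT resuscitation targets listed above:
# CVP 8-12 mmHg, MAP 65-90 mmHg, ScvO2 > 70%, urine output > 0.5 mL/kg/h.
# Illustrative only; parameter names are hypothetical.

def egdt_targets_met(cvp_mmhg: float, map_mmhg: float,
                     scvo2_percent: float, urine_ml_per_kg_hr: float) -> dict:
    return {
        "cvp": 8 <= cvp_mmhg <= 12,
        "map": 65 <= map_mmhg <= 90,
        "scvo2": scvo2_percent > 70,
        "urine_output": urine_ml_per_kg_hr > 0.5,
    }

status = egdt_targets_met(cvp_mmhg=10, map_mmhg=70, scvo2_percent=68, urine_ml_per_kg_hr=0.6)
print(status)  # ScvO2 below target here, so per the text transfusion/inotropes would be considered
```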
In the original trial, early goal-directed therapy was found to reduce mortality from 46.5% to 30.5% in those with sepsis, and the Surviving Sepsis Campaign has been recommending its use. However, three more recent large randomized control trials (ProCESS, ARISE, and ProMISe), did not demonstrate a 90-day mortality benefit of early goal-directed therapy when compared to standard therapy in severe sepsis. It is likely that some parts of EGDT are more important than others. Following these trials the use of EGDT is still considered reasonable.
Newborns
Neonatal sepsis can be difficult to diagnose as newborns may be asymptomatic. If a newborn shows signs and symptoms suggestive of sepsis, antibiotics are immediately started and are either changed to target a specific organism identified by diagnostic testing or discontinued after an infectious cause for the symptoms has been ruled out. Despite early intervention, death occurs in 13% of children who develop septic shock, with the risk partly based on other health problems. For those without multiple organ system failures or who require only one inotropic agent, mortality is low.
Other
Treating fever in sepsis, including people in septic shock, has not been associated with any improvement in mortality over a period of 28 days. Treatment of fever still occurs for other reasons.
A 2012 Cochrane review concluded that N-acetylcysteine does not reduce mortality in those with SIRS or sepsis and may even be harmful.
Recombinant activated protein C (drotrecogin alpha) was originally introduced for severe sepsis (as identified by a high APACHE II score), where it was thought to confer a survival benefit. However, subsequent studies showed that it increased adverse events—bleeding risk in particular—and did not decrease mortality. It was removed from sale in 2011. Another medication known as eritoran also has not shown benefit.
In those with high blood sugar levels, insulin to bring it down to 7.8–10 mmol/L (140–180 mg/dL) is recommended, with lower target levels potentially worsening outcomes. Glucose levels taken from capillary blood should be interpreted with care because such measurements may not be accurate. If a person has an arterial catheter, arterial blood is recommended for blood glucose testing.
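The two glucose units quoted above are related by the molar mass of glucose (about 180 g/mol, i.e. roughly 18 mg/dL per mmol/L); this conversion factor is standard chemistry rather than something stated in the text, and the snippet below is only an illustrative sketch.

```python
# Approximate conversion between the two glucose units used above.
MG_DL_PER_MMOL_L = 18.0  # from glucose's molar mass of ~180 g/mol (assumed, not from the text)


def mmol_per_l_to_mg_per_dl(mmol_per_l: float) -> float:
    return mmol_per_l * MG_DL_PER_MMOL_L


def within_recommended_range(glucose_mmol_per_l: float) -> bool:
    """Target range from the text: 7.8-10 mmol/L (about 140-180 mg/dL)."""
    return 7.8 <= glucose_mmol_per_l <= 10.0


print(mmol_per_l_to_mg_per_dl(7.8), mmol_per_l_to_mg_per_dl(10.0))  # ~140.4 and 180.0
```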
Intermittent or continuous renal replacement therapy may be used if indicated. However, sodium bicarbonate is not recommended for a person with lactic acidosis secondary to hypoperfusion. Low-molecular-weight heparin (LMWH), unfractionated heparin (UFH), and mechanical prophylaxis with intermittent pneumatic compression devices are recommended for any person with sepsis at moderate to high risk of venous thromboembolism. Stress ulcer prevention with a proton-pump inhibitor (PPI) or H2 antagonist is useful in a person with risk factors for developing upper gastrointestinal bleeding (UGIB), such as mechanical ventilation for more than 48 hours, coagulation disorders, liver disease, and renal replacement therapy. Achieving partial or full enteral feeding (delivery of nutrients through a feeding tube) is preferred over intravenous nutrition for a person in whom oral intake is contraindicated or who cannot tolerate it in the first seven days of sepsis. However, omega-3 fatty acids are not recommended as immune supplements for a person with sepsis or septic shock. The use of prokinetic agents such as metoclopramide, domperidone, and erythromycin is recommended for those who are septic and unable to tolerate enteral feeding. However, these agents may prolong the QT interval and consequently provoke a ventricular arrhythmia such as torsades de pointes. The use of prokinetic agents should be reassessed daily and stopped if no longer indicated.
People in sepsis may have micronutrient deficiencies, including low levels of vitamin C. Reviews mention that an intake of 3.0 g/day, which requires intravenous administration, may be needed to maintain normal plasma concentrations in people with sepsis or severe burn injury.
Prognosis
Sepsis will prove fatal in approximately 24.4% of people, and septic shock will prove fatal in 34.7% of people within 30 days (32.2% and 38.5% after 90 days).
Lactate is a useful method of determining prognosis, with those who have a level greater than 4 mmol/L having a mortality of 40% and those with a level of less than 2 mmol/L having a mortality of less than 15%.
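As a small illustrative sketch of the two thresholds quoted above (the label and exact risk for the intermediate 2–4 mmol/L band are not given in the text and are left unspecified):

```python
def lactate_prognostic_band(lactate_mmol_per_l: float) -> str:
    """Illustrative stratification using only the thresholds quoted above."""
    if lactate_mmol_per_l > 4:
        return "lactate > 4 mmol/L: mortality around 40%"
    if lactate_mmol_per_l < 2:
        return "lactate < 2 mmol/L: mortality under 15%"
    return "lactate 2-4 mmol/L: intermediate (not specified in the text)"


print(lactate_prognostic_band(5.1))
```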
There are a number of prognostic stratification systems, such as APACHE II and Mortality in Emergency Department Sepsis. APACHE II factors in the person's age, underlying condition, and various physiologic variables to yield estimates of the risk of dying of severe sepsis. Of the individual covariates, the severity of the underlying disease most strongly influences the risk of death. Septic shock is also a strong predictor of short- and long-term mortality. Case-fatality rates are similar for culture-positive and culture-negative severe sepsis. The Mortality in Emergency Department Sepsis (MEDS) score is simpler and useful in the emergency department environment.
Some people may experience severe long-term cognitive decline following an episode of severe sepsis, but the absence of baseline neuropsychological data in most people with sepsis makes the incidence of this difficult to quantify or study.
Epidemiology
Sepsis causes millions of deaths globally each year and is the most common cause of death in people who have been hospitalized. The number of new cases worldwide of sepsis is estimated to be 18 million cases per year. In the United States sepsis affects approximately 3 in 1,000 people, and severe sepsis contributes to more than 200,000 deaths per year.
Sepsis occurs in 1–2% of all hospitalizations and accounts for as much as 25% of ICU bed utilization. Because it is rarely reported as a primary diagnosis (often being a complication of cancer or other illness), the incidence, mortality, and morbidity rates of sepsis are likely underestimated. A study of U.S. states found approximately 651 hospital stays per 100,000 population with a sepsis diagnosis in 2010. It is the second-leading cause of death in the non-coronary intensive care unit (ICU) and the tenth-most-common cause of death overall (the first being heart disease). Children under 12 months of age and elderly people have the highest incidence of severe sepsis. Among people from the U.S. who had multiple sepsis hospital admissions in 2010, those who were discharged to a skilled nursing facility or long-term care following the initial hospitalization were more likely to be readmitted than those discharged to another form of care. A study of 18 U.S. states found that, amongst people with Medicare in 2011, sepsis was the second most common principal reason for readmission within 30 days.
Several medical conditions increase a person's susceptibility to infection and developing sepsis. Common sepsis risk factors include age (especially the very young and old); conditions that weaken the immune system such as cancer, diabetes, or the absence of a spleen; and major trauma and burns.
From 1979 to 2000, data from the United States National Hospital Discharge Survey showed that the incidence of sepsis increased fourfold, to 240 cases per 100,000 population, with a higher incidence in men than in women. However, the global prevalence of sepsis has been estimated to be higher in women. During the same time frame, the in-hospital case fatality rate fell from 28% to 18%. However, according to the Nationwide Inpatient Sample from the United States, the incidence of severe sepsis increased from 200 per 10,000 population in 2003 to 300 cases in 2007 for the population aged over 18 years. The incidence rate is particularly high among infants, with an incidence of 500 cases per 100,000 population. Mortality related to sepsis increases with age, from less than 10% in the age group of 3 to 5 years to 60% by the sixth decade of life. The increase in the average age of the population, alongside the presence of more people with chronic diseases or on immunosuppressive medications, and the increase in the number of invasive procedures being performed, has led to an increased rate of sepsis.
History
The term "σήψις" (sepsis) was introduced by Hippocrates in the fourth century BC, and it meant the process of decay or decomposition of organic matter. In the eleventh century, Avicenna used the term "blood rot" for diseases linked to severe purulent process. Though severe systemic toxicity had already been observed, it was only in the 19th century that the specific term – sepsis – was used for this condition.
The terms "septicemia", also spelled "septicaemia", and "blood poisoning" referred to the microorganisms or their toxins in the blood. The International Statistical Classification of Diseases and Related Health Problems (ICD) version 9, which was in use in the US until 2013, used the term septicemia with numerous modifiers for different diagnoses, such as "Streptococcal septicemia". All those diagnoses have been converted to sepsis, again with modifiers, in ICD-10, such as "Sepsis due to streptococcus".
The current terms are dependent on the microorganism that is present: bacteremia if bacteria are present in the blood at abnormal levels and are the causative issue, viremia for viruses, and fungemia for a fungus.
By the end of the 19th century, it was widely believed that microbes produced substances that could injure the mammalian host and that soluble toxins released during infection caused the fever and shock that were commonplace during severe infections. Pfeiffer coined the term endotoxin at the beginning of the 20th century to denote the pyrogenic principle associated with Vibrio cholerae. It was soon realized that endotoxins were expressed by most and perhaps all gram-negative bacteria. The lipopolysaccharide character of enteric endotoxins was elucidated in 1944 by Shear. The molecular character of this material was determined by Luderitz et al. in 1973.
It was discovered in 1965 that a strain of C3H/HeJ mouse was immune to the endotoxin-induced shock. The genetic locus for this effect was dubbed Lps. These mice were also found to be hyper-susceptible to infection by gram-negative bacteria. These observations were finally linked in 1998 by the discovery of the toll-like receptor gene 4 (TLR 4). Genetic mapping work, performed over five years, showed that TLR4 was the sole candidate locus within the Lps critical region; this strongly implied that a mutation within TLR4 must account for the lipopolysaccharide resistance phenotype. The defect in the TLR4 gene that led to the endotoxin-resistant phenotype was discovered to be due to a mutation in the receptor's cytoplasmic domain.
Controversy occurred in the scientific community over the use of mouse models in sepsis research in 2013, when scientists published a review comparing the mouse immune system to the human immune system and showed that, on a systems level, the two worked very differently; the authors noted that, as of the date of their article, over 150 clinical trials of sepsis had been conducted in humans, almost all of them supported by promising data in mice, and that all of them had failed. The authors called for abandoning the use of mouse models in sepsis research; others rejected that but called for more caution in interpreting the results of mouse studies and more careful design of preclinical studies. One approach is to rely more on studying biopsies and clinical data from people who have had sepsis, to try to identify biomarkers and drug targets for intervention.
Society and culture
Economics
Sepsis was the most expensive condition treated in United States' hospital stays in 2013, at an aggregate cost of $23.6 billion for nearly 1.3 million hospitalizations. Costs for sepsis hospital stays more than quadrupled since 1997 with an 11.5 percent annual increase. By payer, it was the most costly condition billed to Medicare and the uninsured, the second-most costly billed to Medicaid, and the fourth-most costly billed to private insurance.
Education
A large international collaboration entitled the "Surviving Sepsis Campaign" was established in 2002 to educate people about sepsis and to improve outcomes with sepsis. The Campaign has published an evidence-based review of management strategies for severe sepsis, with the aim to publish a complete set of guidelines in subsequent years. The guidelines were updated in 2016 and again in 2021.
Sepsis Alliance is a charitable organization based in the United States that was created to raise sepsis awareness among both the general public and healthcare professionals.
Research
Some authors suggest that initiating sepsis by the normally mutualistic (or neutral) members of the microbiome may not always be an accidental side effect of the deteriorating host immune system. Rather it is often an adaptive microbial response to a sudden decline of host survival chances. Under this scenario, the microbe species provoking sepsis benefit from monopolizing the future cadaver, utilizing its biomass as decomposers, and then transmitting through soil or water to establish mutualistic relations with new individuals. The bacteria Streptococcus pneumoniae, Escherichia coli, Proteus spp., Pseudomonas aeruginosa, Staphylococcus aureus, Klebsiella spp., Clostridium spp., Lactobacillus spp., Bacteroides spp. and the fungi Candida spp. are all capable of such a high level of phenotypic plasticity. Not all cases of sepsis arise through such adaptive microbial strategy switches.
Paul E. Marik's "Marik protocol", also known as the "HAT" protocol, proposed a combination of hydrocortisone, vitamin C, and thiamine as a treatment for preventing sepsis for people in intensive care. Marik's initial research, published in 2017, showed dramatic evidence of benefit, leading to the protocol becoming popular among intensive care physicians, especially after the protocol received attention on social media and National Public Radio, leading to criticism of science by press conference from the wider medical community. Subsequent independent research failed to replicate Marik's positive results, indicating the possibility that they had been compromised by bias. A systematic review of trials in 2021 found that the claimed benefits of the protocol could not be confirmed.
Overall, the evidence for any role of vitamin C in the treatment of sepsis remains unclear.
| Biology and health sciences | Symptoms and signs | Health |
158405 | https://en.wikipedia.org/wiki/Iron%28II%29%20sulfate | Iron(II) sulfate | Iron(II) sulfate (British English: iron(II) sulphate) or ferrous sulfate denotes a range of salts with the formula FeSO4·xH2O. These compounds exist most commonly as the heptahydrate (x = 7) but several values for x are known. The hydrated form is used medically to treat or prevent iron deficiency, and also for industrial applications. Known since ancient times as copperas and as green vitriol (vitriol is an archaic name for hydrated sulfate minerals), the blue-green heptahydrate (hydrate with 7 molecules of water) is the most common form of this material. All the iron(II) sulfates dissolve in water to give the same aquo complex [Fe(H2O)6]2+, which has octahedral molecular geometry and is paramagnetic. The name copperas dates from times when the copper(II) sulfate was known as blue copperas, and perhaps in analogy, iron(II) and zinc sulfate were known respectively as green and white copperas.
It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 107th most commonly prescribed medication in the United States, with more than 6 million prescriptions.
Uses
Industrially, ferrous sulfate is mainly used as a precursor to other iron compounds. It is a reducing agent, and as such is useful for the reduction of chromate in cement to less toxic Cr(III) compounds. Ferrous sulfate was used in the textile industry for centuries as a dye fixative, and historically to blacken leather and as a constituent of iron gall ink. The preparation of sulfuric acid ('oil of vitriol') by the distillation of green vitriol (iron(II) sulfate) has been known for at least 700 years.
Medical use
Plant growth
Iron(II) sulfate is sold as ferrous sulfate, a soil amendment for lowering the pH of a high alkaline soil so that plants can access the soil's nutrients.
In horticulture it is used for treating iron chlorosis. Although not as rapid-acting as ferric EDTA, its effects are longer-lasting. It can be mixed with compost and dug into the soil to create a store which can last for years. Ferrous sulfate can be used as a lawn conditioner. It can also be used to eliminate silvery thread moss in golf course putting greens.
Pigment and craft
Ferrous sulfate can be used to stain concrete and some limestones and sandstones a yellowish rust color.
Woodworkers use ferrous sulfate solutions to color maple wood a silvery hue.
Green vitriol is also a useful reagent in the identification of mushrooms.
Historical uses
Ferrous sulfate was used in the manufacture of inks, most notably iron gall ink, which was used from the Middle Ages until the end of the 18th century. Chemical tests made on the Lachish letters showed the possible presence of iron. It is thought that oak galls and copperas may have been used in making the ink on those letters. It also finds use in wool dyeing as a mordant. Harewood, a material used in marquetry and parquetry since the 17th century, is also made using ferrous sulfate.
Two different methods for the direct application of indigo dye were developed in England in the 18th century and remained in use well into the 19th century. One of these, known as china blue, involved iron(II) sulfate. After printing an insoluble form of indigo onto the fabric, the indigo was reduced to leuco-indigo in a sequence of baths of ferrous sulfate (with reoxidation to indigo in air between immersions). The china blue process could make sharp designs, but it could not produce the dark hues of other methods.
In the second half of the 1850s ferrous sulfate was used as a photographic developer for collodion process images.
Hydrates
Iron(II) sulfate can be found in various states of hydration, and several of these forms exist in nature or were created synthetically.
FeSO4·H2O (mineral: szomolnokite, relatively rare, monoclinic)
FeSO4·H2O (synthetic compound stable at pressures exceeding 6.2 GPa, triclinic)
FeSO4·4H2O (mineral: rozenite, white, relatively common, may be dehydration product of melanterite, monoclinic)
FeSO4·5H2O (mineral: siderotil, relatively rare, triclinic)
FeSO4·6H2O (mineral: ferrohexahydrite, very rare, monoclinic)
FeSO4·7H2O (mineral: melanterite, blue-green, relatively common, monoclinic)
The tetrahydrate is stabilized when the temperature of aqueous solutions reaches . At these solutions form both the tetrahydrate and monohydrate.
Mineral forms are found in oxidation zones of iron-bearing ore beds, e.g. pyrite, marcasite, chalcopyrite, etc. They are also found in related environments, like coal fire sites. Many rapidly dehydrate and sometimes oxidize. Numerous other, more complex (either basic, hydrated, and/or containing additional cations) Fe(II)-bearing sulfates exist in such environments, with copiapite being a common example.
Production and reactions
In the finishing of steel prior to plating or coating, the steel sheet or rod is passed through pickling baths of sulfuric acid. This treatment produces large quantities of iron(II) sulfate as a by-product.
Another source of large amounts results from the production of titanium dioxide from ilmenite via the sulfate process.
Ferrous sulfate is also prepared commercially by oxidation of pyrite:
2 FeS2 + 7 O2 + 2 H2O → 2 FeSO4 + 2 H2SO4
It can also be produced by displacement of metals less reactive than iron from solutions of their sulfates, for example:
CuSO4 + Fe → FeSO4 + Cu
Reactions
Upon dissolving in water, ferrous sulfates form the metal aquo complex [Fe(H2O)6]2+, which is an almost colorless, paramagnetic ion.
On heating, iron(II) sulfate first loses its water of crystallization and the original green crystals are converted into a white anhydrous solid. When further heated, the anhydrous material decomposes into sulfur dioxide and sulfur trioxide, leaving a reddish-brown iron(III) oxide. Thermolysis of iron(II) sulfate begins at about .
2 FeSO4 → Fe2O3 + SO2 + SO3 (on heating)
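As an illustrative stoichiometric check of this decomposition (the 100 g starting mass and the approximate molar masses below are assumptions chosen for the example, not figures from the text):

```python
# Illustrative stoichiometry for 2 FeSO4 -> Fe2O3 + SO2 + SO3
M_FESO4 = 151.91   # g/mol, approximate molar mass of anhydrous iron(II) sulfate
M_FE2O3 = 159.69   # g/mol, approximate molar mass of iron(III) oxide

mass_feso4 = 100.0                      # g of anhydrous ferrous sulfate (example value)
mol_feso4 = mass_feso4 / M_FESO4        # ~0.658 mol
mol_fe2o3 = mol_feso4 / 2               # 2 mol FeSO4 yield 1 mol Fe2O3
print(round(mol_fe2o3 * M_FE2O3, 1))    # ~52.6 g of reddish-brown iron(III) oxide
```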
Like other iron(II) salts, iron(II) sulfate is a reducing agent. For example, it reduces nitric acid to nitrogen monoxide and chlorine to chloride:
6 FeSO4 + 3 H2SO4 + 2 HNO3 → 3 Fe2(SO4)3 + 4 H2O + 2 NO
6 FeSO4 + 3 Cl2 → 2 Fe2(SO4)3 + 2 FeCl3
Its mild reducing power is of value in organic synthesis. It is used as the iron catalyst component of Fenton's reagent.
Ferrous sulfate can be detected by the cerimetric method, which is the official method of the Indian Pharmacopoeia. This method includes the use of ferroin solution showing a red to light green colour change during titration.
| Physical sciences | Sulfuric oxyanions | Chemistry |
158478 | https://en.wikipedia.org/wiki/Pedestrian%20crossing | Pedestrian crossing | A pedestrian crossing (or crosswalk in American and Canadian English) is a place designated for pedestrians to cross a road, street or avenue. The term "pedestrian crossing" is also used in the Vienna and Geneva Conventions, both of which pertain to road signs and road traffic.
Marked pedestrian crossings are often found at intersections, but may also be at other points on busy roads that would otherwise be too unsafe to cross without assistance due to vehicle numbers, speed or road widths. They are also commonly installed where large numbers of pedestrians are attempting to cross (such as in shopping areas) or where vulnerable road users (such as school children) regularly cross. Rules govern usage of the pedestrian crossings to ensure safety; for example, in some areas, the pedestrian must be more than halfway across the crosswalk before the driver proceeds, and in other areas, jaywalking laws are in place which restrict pedestrians from crossing away from marked crossing facilities.
Pedestrian crossings using signals clearly separate when each type of traffic (pedestrians or road vehicles) can use the crossing. Crossings without signals generally assist pedestrians and, depending on the locality, usually give them priority. Pelican crossings use signals to keep pedestrians together where they can be seen by motorists, and where they can cross most safely across the flow of vehicular traffic, whereas zebra crossings are uncontrolled and more appropriate for lower flow numbers. What appear to be simple pedestrian crossings can also be created largely as a traffic calming technique, especially when combined with other features like pedestrian priority, refuge islands, or raised surfaces.
History
Pedestrian crossings already existed more than 2,000 years ago, as can be seen in the ruins of Pompeii. Blocks raised on the road allowed pedestrians to cross the street without having to step onto the road itself, which doubled as Pompeii's drainage and sewage disposal system. The spaces between the blocks allowed horse-drawn carts to pass along the road.
The first pedestrian crossing signal was erected in Bridge Street, Westminster, London, in December 1868. It was the idea of John Peake Knight, a railway engineer, who thought that it would provide a means to safely allow pedestrians to cross this busy thoroughfare. The signal consisted of a semaphore arm (manufactured by Saxby and Farmer, who were railway signaling makers), which was raised and lowered manually by a police constable who would rotate a handle on the side of the pole. The semaphore arms were augmented by gas illuminated lights at the top (green and red) to increase visibility of the signal at night. However, in January 1869, the gas used to illuminate the lights at the top leaked and caused an explosion, injuring the police operator. No further work was done on signalled pedestrian crossings until fifty years later.
On October 31, 1951, the first zebra crossing was introduced in the town of Slough, west of London, United Kingdom. The exact source of the name "zebra crossing" cannot be confirmed with certainty, but it is believed to have come from the visual similarity between the crossing's stripes and those of a zebra's coat. The term "zebra crossing" is believed to have first been used by the British politician and military officer James Callaghan.
In the early 20th century, car traffic increased dramatically. A reader of The Times wrote to the editor in 1911:
"Could you do something to help the pedestrian to recover the old margin of safety on our common streets and roads? It is heartrending to read of the fearful deaths taking place. If a pedestrian now has even one hesitation or failure the chance of escape from a dreadful death is now much less than when all vehicles were much slower. There is, too, in the motor traffic an evident desire not to slow down before the last moment. It is surely a scandal that on the common ways there should be undue apprehension in the minds of the weakest users of them. While the streets and roads are for all, of necessity the pedestrians, and the feeblest of these, should receive the supreme consideration." (The Times, 14 Feb. 1911, pg. 14: The Pedestrian's Chances.)
According to Zegeer,
"Pedestrians have a right to cross roads safely and, therefore, planners and engineers have a professional responsibility to plan, design, and install safe crossing facilities."
Criteria
Pedestrian crossing warrants are guidelines for the appropriate pedestrian crossing type for a site's traffic conditions. There are several guidelines in use across the world, and guidance and practice differ between jurisdictions. An over-emphasis by traffic engineers on vehicular movement in these criteria is criticised for neglecting the safety of pedestrians.
PV2 warrants have been used in the UK, among other countries such as India, since they were developed in 1987. This warrant uses a calculation of peak pedestrian volume and peak vehicular volume to determine which type of crossing, if any, should be installed. For example, if 500 pedestrians cross the road per hour and 600 vehicles per hour use that road section, PV2 dictates that a pelican crossing should be installed.
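A rough sketch of how a PV2-style warrant calculation works is shown below, using the example volumes from the text; the numeric cut-off is an assumed, illustrative threshold, since the actual values used in UK or Indian guidance are not given here.

```python
# Sketch of a PV^2-style crossing warrant (illustrative only).
# P = peak pedestrians per hour crossing; V = peak vehicles per hour on the road section.
ILLUSTRATIVE_THRESHOLD = 1e8  # assumed cut-off for this example, not an official value


def pv2_score(pedestrians_per_hour: float, vehicles_per_hour: float) -> float:
    return pedestrians_per_hour * vehicles_per_hour ** 2


# The example from the text: 500 pedestrians/hour and 600 vehicles/hour.
score = pv2_score(500, 600)                      # 1.8e8
print(score, score >= ILLUSTRATIVE_THRESHOLD)    # crossing warranted under this assumed cut-off
```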
The US Manual on Uniform Traffic Control Devices (MUTCD) advises that crosswalk markings should 'not be used indiscriminately' and encourages engineering studies at sites away from signalized intersections and STOP or Yield signs. Its guidance is against installing crossing markings (without extra engineering interventions) on high-traffic routes if the speed limit exceeds .
Types and design
Unmarked crossings
In some countries, including the US, "unmarked crosswalks" are presumed to occur at intersections even if a crossing is not marked, except at locations where pedestrian crossing is expressly prohibited.
Pedestrian refuges are uncontrolled crossings with two dropped kerbs and a central traffic island, protected by kerbs. The island allows pedestrians to cross the road one direction of traffic at a time, which can be quicker and safer (they decrease pedestrian accidents by around 40%) than having no crossing at all. Additionally, they can narrow the road, slowing down vehicles and preventing them from overtaking. However, they may not afford pedestrians priority, meaning pedestrians may have a longer wait than at a controlled crossing. They can also create pinch points, which can be dangerous for cyclists.
Courtesy crossings are uncontrolled crossings with coloured surfacing or some other informal suggestion that pedestrians may cross. They aim to concentrate pedestrian crossing movements and to encourage drivers to let pedestrians cross the road out of courtesy, rather than obligation. The inclusion of stripes (e.g. in paving) and the physical or visual narrowing of the road positively affect driver courtesy.
Marked crossings
The simplest marked crossings may just consist of some markings on the road surface. In the US these are known as "marked crosswalks". In the UK these are often called zebra crossings, referring to the alternate white and black stripes painted on the road surface. If the pedestrian has priority over vehicular traffic when using the crossing, then they have an incentive to use the crossing instead of crossing the road at other places. In some countries, pedestrians may not have priority, but may be committing an offence if they cross the road elsewhere, or "jaywalk". Special markings are often made on the road surface, both to direct pedestrians and to prevent motorists from stopping vehicles in the way of foot traffic. There are many varieties of signal and marking layouts around the world and even within single countries. In the United States, there are many inconsistencies, although the variations are usually minor. There are several distinct types in the United Kingdom, each with their own name.
Pedestrian crossing striping machines are special equipment used professionally to paint zebra lines at intersections or on other busy road sections. Because zebra crossings consist of parallel stripes that are wide but not long, the striping machine is often a small hand-guided road marking machine, which can easily be made to change direction. There are differences between the engineering regulations in different countries. The marking shoe of a pedestrian crossing striping machine, which determines the width of the marked lines, is much wider than on other marking machines. A smaller marking shoe with wheels may be used to perform the road striping.
The section of road should be swept clean and kept dry. The painter first pulls a guiding line straight and fixes the two ends to the ground. Then they spray or brush a primer layer onto the asphalt or concrete surface. The thermoplastic paint, supplied in powder form, is then melted into a liquid state for painting. Finally, the painter pulls or pushes the striping machine along the guiding line using the guide rod. As an alternative to thermoplastics, household paint or epoxy can be used to mark crosswalks.
Signal-controlled crossings
Some crossings have pedestrian traffic signals that allow pedestrians and road traffic to use the crossing alternately. On some traffic signals, pressing a call button is required to trigger the signal. Audible or tactile signals may also be included to assist people who have poor sight. In many cities, some or most signals are equipped with countdown timers to give notice to both drivers and pedestrians of the time remaining on the crossing signal. In places where there is very high pedestrian traffic, embedded pavement flashing-light systems are used to signal traffic of pedestrian presence, or exclusive traffic signal phases for pedestrians (also known as Barnes Dances) may be used, which stop vehicular traffic in all directions at the same time.
Pedestrian scramble
Some intersections display red lights to vehicles in all directions for a period of time. Known as a pedestrian scramble, this type of vehicle all-way stop allows pedestrians to cross safely in any direction, including diagonally.
Footbridges and tunnels
Footbridges or pedestrian tunnels may be used in lieu of crosswalks at very busy intersections as well as at locations where limited-access roads and controlled-access highways must be crossed. They can also be beneficial in locations where the sidewalk or pedestrian path naturally ascends or descends to a different level than the intersection itself, and the natural "desire line" leads to a footbridge or tunnel, respectively.
However, pedestrian bridges are ineffective in most locations; due to their expense, they are typically spaced far apart. Additionally, ramps, stairs, or elevators present additional obstacles, and pedestrians tend to use an at-grade pedestrian crossing instead. A variation on the bridge concept, often called a skyway or skywalk, is sometimes implemented in regions that experience inclement weather.
Crosswalk shortening
Pedestrian refuges or small islands in the middle of a street may be added when a street is very wide, as these crossings can be too long for some individuals to cross in one cycle. These pedestrian refuges may consist of building traffic islands in the middle of the road; extending an existing island or median strip to the crosswalk to provide a refuge; or simply cutting through the existing island or median strip where the median is already continuous.
Another relatively widespread variation is the curb/kerb extension (also known as a bulb-out), which narrows the width of the street and is used in combination with crosswalk markings. They can also be used to slow down cars, potentially creating a safer crossing for pedestrians.
Artwork crossings
Some crosswalks, known as colourful crossings, include unique designs, many of which take the form of artwork. These works of art may serve many different purposes, such as attracting tourism or catching drivers' attention.
Cities and towns worldwide have held competitions to paint crosswalks, usually as a form of artwork. In Santiago, Chile, a 2013 work by Canadian artist Roadsworth features yellow-and-blue fish overlaid on the existing crosswalk. Other crossings worldwide also feature some of Roadsworth's work, including a crosswalk in Montreal where the zebra stripes are shaped like bullets, as well as a "conveyor belt" crosswalk in Winston-Salem, North Carolina. In Lompoc, California, several artists were commissioned to create an artwork as part of its "Creative Crossings" competition. Artist Marlee Bedford painted the first set of four crosswalks as part of the 2015 competition, and Linda Powers painted two more crosswalks in 2016 following that year's competition.
In Tbilisi, Georgia, some Tbilisi Academy of Arts students and government officials jointly created a crossing that is designed to look like it is in 3D. A message on the white bars of the crosswalk reads, "for your safety." 3D crosswalk designs have also been installed in China, with a "floating zebra crossing" implemented in a village in Luoyuan County to boost tourism; a multicolored 3-D crossing installed in Changsha, China to catch drivers' attention; and another multicolored crossing in Sichuan Province that serves the same purpose as the colored Changsha crosswalk.
Colored crosswalks might have themes that reflect the immediate area. For instance, Chengdu, China had a red-and-white zebra crossing with hearts painted on it, reflecting its location near a junction of two rivers. In Curitiba, Brazil, a crosswalk with its bars irregularly painted like a barcode served as an advertisement for a nearby shopping center, but was later painted over. A pedestrian scramble in the Chinatown section of Oakland, California, is painted with red-and-yellow colors to signify the colors of the flag of China.
Sometimes, different cities around the world may have similar art concepts for their crosswalks. Rainbow flag-colored crosswalks, which are usually painted to show support for the locality's LGBT cultures, have been installed in San Francisco; West Hollywood; Philadelphia; and Tel Aviv. Crosswalks painted like piano keyboards have been painted in Long Beach; Warsaw; and Chongqing.
The United States Federal Highway Administration prohibits crosswalk art due to concerns about safety and visibility, but U.S. cities have chosen to install their own designs. Seattle had 40 crosswalks with unique designs, including the rainbow flag in Capitol Hill and the Pan-African flag in the Central District.
Colourful crossings have been criticised for creating accessibility issues. For blind and visually impaired pedestrians, consistency in design is important to ensure a safe crossing. Visually impaired people with limited sight and neurodivergent people may experience pain or confusion in interpreting colourful crossings or distress from visual noise. These crossings may therefore discriminate against marginalised groups in accessing public spaces.
Raised crossings
Raised crossings are a traffic calming measure that uses a speed table spanning the crossing. The crossings are demarcated with paint and/or special paving materials. These crossings allow the pedestrian to cross at grade with the sidewalk and have been shown to reduce pedestrian crashes by 45%, due to the reduction of vehicular speeds and the prominence of the pedestrian in the driver's field of vision.
Distinctions by region
North America
In the United States, crosswalks are sometimes marked with white stripes, though many municipalities have slightly different styles. The designs used vary widely between jurisdictions, and often vary even between a city and its county (or local equivalents). Most frequently, they are marked with two parallel white lines running from one side of the road to the other, with the width of the lines being typically wide.
Marked crosswalks are usually placed at traffic intersections or crossroads, but are occasionally used at mid-block locations, which may include additional regulatory signage such as "PED XING" (for "pedestrian crossing"), flashing yellow beacons (also known as rectangular rapid-flashing beacons or RRFBs), stop or yield signs, or by actuated or automatic signals. Some more innovative crossing treatments include in-pavement flashers, yellow flashing warning lights installed in the roadway, or HAWK beacon.
Crossing laws vary between different states and provinces and sometimes at the local level. All U.S. states require vehicles to yield to a pedestrian who has entered a marked crosswalk, and in most states crosswalks exist at all intersections meeting at approximately right angles, whether they are marked or not. See here (discussing the Uniform Vehicle Code and stating that "a crosswalk at an intersection is defined as the extension of the sidewalk or the shoulder across the intersection, regardless of whether it is marked or not."); see also California Vehicle Code section 275(a) ("'Crosswalk' is . . . [t]hat portion of a roadway included within the [extension] of the boundary lines of sidewalks at intersections where the intersecting roadways meet at approximately right angles, except the [extension] of such lines from an alley across a street")
At crossings controlled by signals, generally the poles at both ends of the crosswalk also have the pedestrian signal heads. For many years these bore white and Portland Orange legends, but pictograms of an "upraised hand" (symbolizing "don't walk") and a "walking person" (symbolizing "walk") have been required since 2009.
Europe
In Spain, the United Kingdom, Germany and other European countries, 90% of pedestrian fatalities occur outside of pedestrian crossings. The highest rate is in the UK, which has fewer crossings than neighbouring European countries.
Continental Europe
Nearly every country of Continental Europe is party to (though has not necessarily ratified) the Vienna Convention on Road Signs and Signals (1968), which says of pedestrian crossings: 'to mark pedestrian crossings, relatively broad stripes, parallel to the axis of the carriageway, should preferably be used'. This means that pedestrian crossing styles are quite uniform across the Continent. However, while the stripes are normally white, in Switzerland they are yellow.
Furthermore, the Vienna Convention on Road Traffic (1968) states that pedestrians should use pedestrian crossings when one is nearby (§6.c) and prohibits the overtaking of other vehicles approaching crossings, unless the driver would be able to stop for a pedestrian. The 1971 European supplement to that Convention re-iterates the former and outlaws the standing or parking of vehicles around pedestrian crossings. It also specifies signs and markings: the "pedestrian crossing sign" is on a blue or black ground, with a white or yellow triangle where the symbol is displayed in black or dark blue, and the minimum width recommended for pedestrian crossings is 2.5 m (about 8 feet) on roads on which the speed limit is lower than 60 km/h (37 mph), and 4 m (about 13 feet) on roads with a higher or no speed limit.
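A small sketch of the width recommendation summarised above, using only the figures given in the text:

```python
def recommended_min_crossing_width_m(speed_limit_kmh=None):
    """Minimum recommended crossing width per the 1971 European supplement as summarised above:
    2.5 m where the speed limit is below 60 km/h, otherwise (higher limit or no limit) 4 m."""
    if speed_limit_kmh is not None and speed_limit_kmh < 60:
        return 2.5
    return 4.0


print(recommended_min_crossing_width_m(50))    # 2.5
print(recommended_min_crossing_width_m(None))  # 4.0 (no speed limit)
```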
In France, it is not mandatory that crosswalks exist. However, if there is one less than 50 meters (55 yards) away, pedestrians are obliged to use it.
In the east of Germany, including Berlin, the unique Ampelmännchen design for pedestrian lights is widely used. These signals originated in the former East Germany and have become an icon of the city and of ostalgie – nostalgia for East German life. A study has shown they are more effective than Western-style icons.
United Kingdom and Ireland
The United Kingdom and Ireland's pedestrian crossings are quite distinct from those in the rest of Europe and use animal names to distinguish different types of crossing. These conventions have been adopted in some former British Empire countries, such as Hong Kong and Malta. 'Look right' and 'look left' markings are sometimes found in tourist areas, to remind pedestrians of the driving direction in the UK.
Zebra crossings are similar to their Continental counterparts, with white stripe markings, but they must also have flashing orange globes, called Belisha beacons. They also normally have zig-zag markings to prevent overtaking and the stopping of vehicles.
There are a number of different types of signal-controlled crossing. In the UK, the traditional pelican crossing is no longer permitted for new installations, having been replaced by puffin crossings – which have crossing sensors and low-level pedestrian signals – and by pedex crossings, which feature pedestrian countdown timers; in Ireland, only pelican crossings are installed. Puffin crossings remain rare. Cyclists are sometimes permitted to use pedestrian crossings, such as toucan crossings (so named because TWO user types CAN cross) and sparrow crossings.
Australia
Pictograms are standard on all traffic light controlled crossings. As in some other countries, a flashing red sequence is used prior to steady red to clear pedestrians. Moments after, in some instances, a flashing yellow sequence (for motorists) can begin, indicating that vehicles may proceed through the crossing if it is safe to do so; however, this is fairly uncommon.
There are two distinctive types of crossings in Australia: marked foot crossings and pedestrian crossings (also called zebra crossings).
Marked foot crossings consist of two parallel broken white lines indicating where pedestrians must cross, with pedestrian lights facing pedestrians and traffic lights facing drivers. These crossings are located at intersections with signals and may also be located between intersections. On most Australian foot crossings, PB/5 "Audio-Tactile Pedestrian Detector" push buttons are provided to allow pedestrians to request the green walk (green symbol) display.
On the other hand, zebra crossings are common in low traffic areas and their approaches may be marked by zigzag lines. When a pedestrian crossing is placed on a raised section of road it is known as a wombat crossing and is usually accompanied by a 40 km/h speed limit. Pedestrian crossings can have a yellow sign showing a pair of legs to indicate pedestrian priority. Children's crossings are part-time crossings that usually operate during school zone hours, and at other approved times and locations, marked by red-orange flags on both sides. Reflector signposting is also used at crossings in school zones.
Signals
Pedestrian call buttons
Pedestrian call buttons (also known as pedestrian push buttons or pedestrian beg buttons) are installed at traffic lights with a dedicated pedestrian signal, and are used to bring up the pedestrian "walk" indication in locations where they function correctly. In the majority of locations where call buttons are installed, pushing the button does not light up the pedestrian walk sign immediately. One Portland State University researcher notes of call buttons in the US, "Most [call] buttons don't provide any feedback to the pedestrian that the traffic signal has received the input. It may appear at many locations that nothing happens." However, there are some locations where call buttons do provide confirmation feedback. At such locations, pedestrians are more likely to wait for the "walk" indications.
Reports suggest that many walk buttons in some areas, such as New York City and the United Kingdom, may actually be either placebo buttons or nonworking call buttons that used to function correctly. In the former case, these buttons are designed to give pedestrians an illusion of control while the crossing signal continues its operation as programmed. However, in instances of the latter case, such as New York City's, the buttons were simply deactivated when traffic signals were updated to automatically include pedestrian phases as part of every signal cycle. In such instances these buttons may be removed during future updates to the pedestrian signals. In the United Kingdom, pressing a button at a standalone pedestrian crossing that is unconnected to a junction will turn a traffic light red immediately, but this is not necessarily the case at a junction.
Sometimes, call buttons work only at some intersections, at certain times of day, or certain periods of the year, such as in New York City or in Boston, Massachusetts. In Boston, some busy intersections are programmed to give a pedestrian cycle during certain times of day (so pushing the button is not necessary) but at off-peak times a button push is required to get a pedestrian cycle. In neighboring Cambridge, a button press is always required if a button is available, though the city prefers to build signals where no button is present and the pedestrian cycle always happens between short car cycles. In both cases the light will not turn immediately, but will wait until the next available pedestrian slot in a pre-determined rotation.
Countdown timers
Some pedestrian signals integrate a countdown timer, showing how many seconds are remaining for the clearing phase. In the United States, San Francisco was the first major city to install countdown signals to replace older pedestrian modules, doing so on a trial basis starting in March 2001. The United States MUTCD added a countdown signal as an optional feature to its 2003 edition; if included, the countdown digits would be Portland Orange, the same color as the "Upraised Hand" indication. The MUTCD's 2009 edition changed countdown timers to a mandatory feature on pedestrian signals at all signalized intersections with pedestrian clearance intervals ("flashing upraised hand" phases) longer than seven seconds. With the MUTCD guideline allotting at least one second to cross , this indicates that countdown timers are supposed to be installed on roads wider than . The countdown is not supposed to be displayed during the pedestrian "walk" interval ("steady walking person" phase).
Some municipalities have found that there are instances where pedestrian countdown signals may be less effective than standard hand/man or "Walk"/"Don't Walk" signals. New York City started studying the pedestrian timers in an inconclusive 2006 study but only started rolling out pedestrian timers on a large scale in 2011 after the conclusion of a second study, which found that pedestrian countdown timers were ineffective at shorter crosswalks. Additionally, a 2000 study of pedestrian countdown timers in Lake Buena Vista, Florida, at several intersections near Walt Disney World, found that pedestrians were more likely to cross the street during the pedestrian clearance interval (flashing upraised hand) if a timer was present, compared to intersections where there was no timer present. A study in Toronto found similar results to the Florida study, determining that countdown timers may actually cause more crashes than standard hand/man signals. However, other cities such as London found that countdown timers were effective, and New York City found that countdown signals worked mainly at longer crosswalks.
Pedestrian countdown signals are also used elsewhere around the world, such as in Buenos Aires, India, Mexico, Taiwan, and the United Arab Emirates. In Mexico City, the walking man moves his feet during the countdown. In Taiwan, all crossings feature animated men called xiaolüren ("little green man"), who walk faster immediately before the traffic signal changes. There is also always a countdown timer.
Variations
In some countries, instead of a "don't walk" indication showing a red man or hand, the drawing of the person crossing appears with an "X" drawn over it to indicate when not to cross.
Some countries around the Baltic Sea in Scandinavia duplicate the red light. Instead of one red light, there are two which both illuminate at the same time.
In many parts of eastern Germany, particularly the former German Democratic Republic, the design of the crossing man (Ampelmännchen) has a hat. There are also female Ampelmännchen in western Germany and the Netherlands. Other countries also use unusual "walk" and "don't walk" pedestrian indicators. In southwest Yokohama, Kanagawa Prefecture, there are pedestrian signal lights that resemble Astro Boy. In Lisbon, some signals have a "don't walk" indicator that dances; these "dancing man" signals, created by Daimler AG, were designed to encourage pedestrians to wait for the "walk" indicator, with the result that 81% more pedestrians stopped and waited for the "walk" light compared to crosswalks with conventional signals.
Leading Pedestrian Interval
In some areas, the signal timing technique of a Leading Pedestrian Interval (LPI) allows pedestrians exclusive access to a crosswalk, typically 3–7 seconds, before vehicular traffic is permitted. Depending on intersection volume and safety history, a normal right-turn-on-red (RTOR) might be explicitly prohibited during the LPI phase. LPI benefits include increased visibility and greater likelihood of vehicles yielding. LPI is among the tools being considered in the fatality-elimination toolkit of Vision Zero planners and advocates.
Temporary signals
In certain circumstances, there are needs to install temporary pedestrian crossing signals. The reasons may include redirecting traffic due to roadworks, closing of the permanent crossing signals due to repairs or upgrades, and establishing new pedestrian crossings for the duration of large public events.
Temporary pedestrian crossings can be integrated into portable traffic signals used during roadworks, or they can be stand-alone installations that simply stop vehicles to allow pedestrians to cross the road safely, without otherwise directing vehicle movements. When temporary pedestrian crossing signals are used for roadworks, the signal cycle time should be considered, as the pedestrian crossing cycles may add delay to traffic and require additional planning of roadwork traffic flows.
Depending on the duration and the nature of the temporary signals, the equipment can be installed in different ways. One way is to use permanent traffic signal heads mounted on temporary poles, such as poles set in concrete-filled barrels. Another way is to use portable pedestrian crossing signals.
Enhancements for disabled people
Pedestrian controlled crossings are sometimes provided with enhanced features to assist disabled people.
Tactile indications
Tactile cones near or under the control button may rotate or shake when the pedestrian signal is in the pedestrian "walk" phase. This is for pedestrians with visual impairments. A vibrating button is used in Australia, Germany, some parts of the United States, Greece, Ireland, and Hong Kong to assist hearing-impaired people. Alternatively, electrostatic, touch-sensitive buttons require no force to activate. To confirm that a request has been registered, the buttons usually emit a chirp or other sound. They also offer anti-vandalism benefits due to not including moving parts which are sometimes jammed on traditional push-button units.
Tactile surfacing patterns (or tactile pavings) may be laid flush within the adjacent footways (US: sidewalks), so that visually impaired pedestrians can locate the control box and cone device and know when they have reached the other side. In Britain, different colours of tactile paving indicate different types of crossings; yellow (referred to as buff coloured) is used at non-controlled (no signals) crossings, and red is used at controlled (signalised) locations.
Some crossings include a tactile map of the crossing geometry. For example, one such crossing in Oslo shows (starting at the bottom) that the crossing consists of a curb, a bicycle lane, two lanes of traffic, a pedestrian island, two tram tracks, another island, then three more traffic lanes.
Audible signals
Crosswalks have adaptations, mainly for people with visual impairments, through the addition of accessible pedestrian signals (APS) that may include speakers at the pushbutton, or under the signal display, for each crossing location. These types of signals have been shown to reduce conflicts between pedestrians and vehicles. However, without other indications such as tactile pavings or cones, these APS units may be hard for visually impaired people to locate.
In the United States, the standards in the 2009 MUTCD require APS units to have a pushbutton locator tone, audible and vibrotactile walk indications, a tactile arrow aligned with the direction of travel on the crosswalk, and to respond to ambient sound. The pushbutton locator tone is a beep or tick, repeating once per second, to allow people who are blind to find the device. If APS units are installed in more than one crossing direction (e.g. if there are APS units at a curb for both the north–south and west–east crossing directions), different sounds or speech messages may be used for each direction. Under the MUTCD guideline, the walk indication may be a speech message if two or more units on the same curb are separated by less than . These speech messages usually follow the pattern "[Street name]. Walk sign is on to cross [Street Name]." Otherwise, the walk indication may be a "percussive tone", which usually consists of repeated, rapid sounds that can be clearly heard from the opposite curb and can oscillate between high and low volumes. In both cases, when the "don't walk" indication is flashing, the device will beep every second until the "don't walk" indication becomes steady and the pedestrian countdown indication reaches "0", at which point the device will beep intermittently at a lower volume. When activated, the APS units are mandated to be accompanied by a vibrating arrow on the APS during the walk signal.
The devices have been in existence since the mid-20th century, but were not popular until the 2000s because of concerns over noise. As of the 2009 MUTCD, APS are supposed to be set to be heard only 6 to 12 feet from the device, so as to be easy to detect from a close distance but not so loud as to be intrusive to neighboring properties. Among American cities, San Francisco has one of the greatest numbers of APS-equipped intersections in the United States, with APS installed at 202 intersections. New York City has APS at 131 intersections, with 75 more intersections to be equipped every year after that.
APS in other countries may consist of a short recorded message, as in Scotland, Hong Kong, Singapore and some parts of Canada (moderate to large urban centres). In Japan, various electronic melodies are played, often of traditional melancholic folk songs such as "Tōryanse" or "Sakura". In Croatia, Denmark and Sweden, beeps (or clicks) with long intervals in between signify "don't walk" mode, and beeps with very short intervals signify "walk" mode.
Relief symbol
On some pushbuttons, especially in Austria and Germany, there is a symbolic relief showing the crossing layout for the visually impaired, so that they can get an overview of the crossing.
The relief is read from the bottom up. It consists of different modules, which are put together according to the crosswalk. Each pedestrian crossing begins with the start symbol, consisting of an arrow and a broad line representing the curb. Subsequently, different modules for traffic lanes and islands follow. The relief is completed with a broad line.
Modules for traffic lanes consist of a dash in the middle and a symbol for the kind of lane right or left of the dash, depending on the direction from which the traffic crosses the crossing. If a crossing is possible from both directions, a symbol is located on both sides. If the pedestrian crossing is a zebra crossing, the middle line is dashed. A traffic light secured crossing has a solid line.
A cycle path is represented by two points next to each other, a vehicle lane by a rectangle and tram rails by two lines lying one above the other.
Islands are represented as a rectangle, which has semicircles on the right and left side. If there is a pushbutton for pedestrians on the island, there is a dot in the middle of the rectangle. If the pedestrian walkway divides on an island, the rectangle may be open on the right or left side.
Key-based system
In Perth, Western Australia, an extended phase system called "Keywalk" was developed by the Main Roads Department of Western Australia in response to concerns from disability advocates about the widening of the Albany Highway in that city in the mid-1990s. The department felt that extending the walk phase permanently on cross streets would cause too much disruption to traffic flow on the highway and so the Keywalk system was developed to allow for those who needed an extended green light phase to cross the road safely. A small electronic key adjusted the green/walk and flashing red/complete crossing phases to allow more time for the key holder to complete the crossing of the highway safely. The system was first installed at the junction of Albany Highway and Cecil Avenue. It is unclear what became of this system.
Lighting
There are two types of crosswalk lights: those that illuminate the whole crosswalk area, and warning lights. Both of these lighting systems encourage oncoming traffic to yield to pedestrians when necessary.
The Illuminating Engineering Society of North America currently provides engineering design standards for highway lighting. In the US, in conventional intersections, area lighting is typically provided by pole-mounted luminaires. These systems illuminate the crosswalk as well as surrounding areas, and do not always provide enough contrast between the pedestrian and his or her background.
There have been many efforts to create lighting scenarios that offer better nighttime illumination in crosswalks. Some innovative concepts include:
Illuminating lights
Bollard posts containing linear light sources inside. These posts have been shown to sufficiently illuminate the pedestrian but not the background, consequently increasing contrast and improving pedestrian visibility and detection. Although this method shows promise in being incorporated into crosswalk lighting standards, more studies need to be done. (Bullough, J.D., X. Zhang, N.P. Skinner, and M.S. Rea. Design and Evaluation of Effective Crosswalk Lighting. Publication FHWA-NJ-2009-03. New Jersey Department of Transportation, Trenton, NJ, 2009.)
Festooned strings of light over the top of the crosswalk.
Warning lights
To warn oncoming traffic, these warning lights usually flash rapidly only when a pedestrian presses a button to use the crosswalk.
In-pavement lighting oriented to face oncoming traffic (Embedded pavement flashing-light system).
In-pavement, flashing warning lights oriented upwards (especially visible to children, the short-statured, and smombies)
Pole-mounted, flashing warning lights (mounted similar to a traffic signal).
Pedestrian warning signs enhanced with LED lights either within the sign face or underneath it. (Impacts of LED Brightness, Flash Pattern, and Location for Illuminated Pedestrian Traffic Control Device, Federal Highway Administration, May 2015)
In areas with heavy snowfall, using in-pavement lighting can be problematic, since snow can obscure the lights, and snowplows can damage them.
Railway pedestrian crossings
In Finland, fences in the footpath approaching the crossing force pedestrians and bicycles to slow down to navigate a zigzag path, which also tends to force that user to look out for the train.
Pedestrian crossings across railways may be arranged differently elsewhere, such as in New South Wales, where they consist of:
a barrier which closes when a train approaches;
a "Red Man" light; no light when no train approaching
an alarm
In France, when a train is approaching, a red man is shown with the word STOP flashing in red (R25 signal).
When a footpath crosses a railway in the United Kingdom, there will most often be gates or stiles protecting the crossing from wildlife and livestock. In situations where there is little visibility along the railway, or the footpath is especially busy, there will also be a small set of lights with an explanatory sign. When a train approaches, the signal light will change to red and an alarm will sound until the train has cleared the crossing.
Safety
The safety of unsignalled pedestrian or zebra crossings is somewhat contested in traffic engineering circles.
Research undertaken in New Zealand showed that a zebra crossing without other safety features on average increases pedestrian crashes by 28% compared to a location without crossings. However, if combined with (placed on top of) a speed table, zebra crossings were found to reduce pedestrian crashes by 80%.
A five-year U.S. study of 1,000 marked crosswalks and 1,000 unmarked comparison sites found that on most roads, the difference in safety performance of marked and unmarked crossings is not statistically significant, unless additional safety features are used. On multilane roads carrying over 12,000 vehicles per day, a marked crosswalk is likely to have worse safety performance than an otherwise similar unmarked location, unless safety features such as raised median refuges or pedestrian beacons are also installed. On multilane roads carrying over 15,000 vehicles per day, a marked crosswalk is likely to have worse safety performance than an unmarked location, even if raised median refuges are provided. The marking pattern had no significant effect on safety. This study only included locations where vehicle traffic was not controlled by a signal or stop sign.
Traffic accidents are reduced when intersections are daylighted, i.e. when visibility is increased, for example by removing adjacent parked cars.
| Technology | Road infrastructure | null |
158530 | https://en.wikipedia.org/wiki/Cepheid%20variable | Cepheid variable | A Cepheid variable is a type of variable star that pulsates radially, varying in both diameter and temperature. It changes in brightness, with a well-defined stable period and amplitude. Cepheids are important cosmic benchmarks for scaling galactic and extragalactic distances; a strong direct relationship exists between a Cepheid variable's luminosity and its pulsation period.
This characteristic of classical Cepheids was discovered in 1908 by Henrietta Swan Leavitt after studying thousands of variable stars in the Magellanic Clouds. The discovery establishes the true luminosity of a Cepheid by observing its pulsation period. This in turn gives the distance to the star by comparing its known luminosity to its observed brightness, calibrated by directly observing the parallax distance to the closest Cepheids such as RS Puppis and Polaris.
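As a rough numerical illustration of that chain of reasoning (period to luminosity to distance), the sketch below applies a generic period-luminosity relation of the form M = a·log10(P) + b together with the distance modulus. The coefficients a and b, the example period, and the apparent magnitude are illustrative assumptions, not calibrated values from this article.

```python
import math

# Illustrative only: a and b stand in for a calibrated period-luminosity
# relation; they are assumptions, not values taken from this article.
def absolute_magnitude(period_days: float, a: float = -2.43, b: float = -1.62) -> float:
    """Generic P-L relation of the form M = a*log10(P) + b."""
    return a * math.log10(period_days) + b

def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    """Invert the distance modulus m - M = 5*log10(d) - 5."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Example: a Cepheid with a 10-day period observed at apparent magnitude 12.
P, m = 10.0, 12.0
M = absolute_magnitude(P)
print(f"Assumed absolute magnitude: {M:.2f}")
print(f"Implied distance: {distance_parsecs(m, M):.0f} pc")
```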
Cepheids change brightness due to the κ–mechanism, which occurs when opacity in a star increases with temperature rather than decreasing. The main gas involved is thought to be helium. The cycle is driven by the fact that doubly ionized helium, the form adopted at high temperatures, is more opaque than singly ionized helium. As a result, the outer layer of the star cycles between being compressed, which heats the helium until it becomes doubly ionized and (due to opacity) absorbs enough heat to expand; and expanded, which cools the helium until it becomes singly ionized and (due to transparency) cools and collapses again. Cepheid variables become dimmest during the part of the cycle when the helium is doubly ionized.
Etymology
The term Cepheid originates from the star Delta Cephei in the constellation Cepheus, which was one of the early discoveries.
History
On September 10, 1784, Edward Pigott detected the variability of Eta Aquilae, the first known representative of the class of classical Cepheid variables. The eponymous star for classical Cepheids, Delta Cephei, was discovered to be variable by John Goodricke a few months later. The number of similar variables grew to several dozen by the end of the 19th century, and as a class they came to be referred to as Cepheids. Most of the Cepheids were recognized by their distinctive light curve shapes, with a rapid increase in brightness followed by a hump, but some with more symmetrical light curves were known as Geminids after the prototype ζ Geminorum.
A relationship between the period and luminosity for classical Cepheids was discovered in 1908 by Henrietta Swan Leavitt in an investigation of thousands of variable stars in the Magellanic Clouds. She published it in 1912 with further evidence. Cepheid variables were found to show radial velocity variation with the same period as the luminosity variation, and initially this was interpreted as evidence that these stars were part of a binary system. However, in 1914, Harlow Shapley demonstrated that this idea should be abandoned. Two years later, Shapley and others had discovered that Cepheid variables changed their spectral types over the course of a cycle.
In 1913, Ejnar Hertzsprung attempted to find distances to 13 Cepheids using their motion through the sky. (His results would later require revision.) In 1918, Harlow Shapley used Cepheids to place initial constraints on the size and shape of the Milky Way and on the placement of the Sun within it. In 1924, Edwin Hubble established the distance to classical Cepheid variables in the Andromeda Galaxy, until then known as the "Andromeda Nebula", and showed that those variables were not members of the Milky Way. Hubble's finding settled the question raised in the "Great Debate" of whether the Milky Way represented the entire Universe or was merely one of many galaxies in the Universe.
In 1929, Hubble and Milton L. Humason formulated what is now known as Hubble's law by combining Cepheid distances to several galaxies with Vesto Slipher's measurements of the speed at which those galaxies recede from us. They discovered that the Universe is expanding, confirming the theories of Georges Lemaître.
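Hubble's law states that recession velocity is proportional to distance, v = H0·d. A toy calculation, using an illustrative value of H0 within the 60–80 km/s/Mpc range quoted later in this article:

```python
H0 = 70.0  # km/s/Mpc; illustrative value within the range quoted later in the article

def recession_velocity(distance_mpc: float, h0: float = H0) -> float:
    """Hubble's law: v = H0 * d."""
    return h0 * distance_mpc

for d in (1.0, 10.0, 100.0):
    print(f"d = {d:6.1f} Mpc -> v = {recession_velocity(d):7.1f} km/s")
```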
In the mid 20th century, significant problems with the astronomical distance scale were resolved by dividing the Cepheids into different classes with very different properties. In the 1940s, Walter Baade recognized two separate populations of Cepheids (classical and type II). Classical Cepheids are younger and more massive population I stars, whereas type II Cepheids are older, fainter Population II stars. Classical Cepheids and type II Cepheids follow different period-luminosity relationships. The luminosity of type II Cepheids is, on average, less than that of classical Cepheids by about 1.5 magnitudes (but still brighter than RR Lyrae stars). Baade's seminal discovery led to a twofold increase in the distance to M31, and in the extragalactic distance scale generally. RR Lyrae stars, then known as Cluster Variables, were recognized fairly early as being a separate class of variable, due in part to their short periods.
The mechanics of stellar pulsation as a heat-engine was proposed in 1917 by Arthur Stanley Eddington (who wrote at length on the dynamics of Cepheids), but it was not until 1953 that S. A. Zhevakin identified ionized helium as a likely valve for the engine.
Classes
Cepheid variables are divided into two subclasses which exhibit markedly different masses, ages, and evolutionary histories: classical Cepheids and type II Cepheids. Delta Scuti variables are A-type stars on or near the main sequence at the lower end of the instability strip and were originally referred to as dwarf Cepheids. RR Lyrae variables have short periods and lie on the instability strip where it crosses the horizontal branch. Delta Scuti variables and RR Lyrae variables are not generally treated with Cepheid variables although their pulsations originate with the same helium ionisation kappa mechanism.
Classical Cepheids
Classical Cepheids (also known as Population I Cepheids, type I Cepheids, or Delta Cepheid variables) undergo pulsations with very regular periods on the order of days to months. Classical Cepheids are Population I variable stars which are 4–20 times more massive than the Sun, and up to 100,000 times more luminous. These Cepheids are bright yellow giants and supergiants of spectral class F6 – K2, and their radii change by millions of kilometers during a pulsation cycle (~25% for the longer-period l Carinae).
Classical Cepheids are used to determine distances to galaxies within the Local Group and beyond, and are a means by which the Hubble constant can be established. Classical Cepheids have also been used to clarify many characteristics of the Milky Way galaxy, such as the Sun's height above the galactic plane and the Galaxy's local spiral structure.
A group of classical Cepheids with small amplitudes and sinusoidal light curves are often separated out as Small Amplitude Cepheids or s-Cepheids, many of them pulsating in the first overtone.
Type II Cepheids
Type II Cepheids (also termed Population II Cepheids) are population II variable stars which pulsate with periods typically between 1 and 50 days. Type II Cepheids are typically metal-poor, old (~10 Gyr), low mass objects (~half the mass of the Sun). Type II Cepheids are divided into several subgroups by period. Stars with periods between 1 and 4 days are of the BL Her subclass, 10–20 days belong to the W Virginis subclass, and stars with periods greater than 20 days belong to the RV Tauri subclass.
Type II Cepheids are used to establish the distance to the Galactic Center, globular clusters, and galaxies.
Anomalous Cepheids
A group of pulsating stars on the instability strip have periods of less than 2 days, similar to RR Lyrae variables but with higher luminosities. Anomalous Cepheid variables have masses higher than type II Cepheids, RR Lyrae variables, and the Sun. It is unclear whether they are young stars on a "turned-back" horizontal branch, blue stragglers formed through mass transfer in binary systems, or a mix of both.
Double-mode Cepheids
A small proportion of Cepheid variables have been observed to pulsate in two modes at the same time, usually the fundamental and first overtone, occasionally the second overtone. A very small number pulsate in three modes, or an unusual combination of modes including higher overtones.
Uncertain distances
Chief among the uncertainties tied to the classical and type II Cepheid distance scale are: the nature of the period-luminosity relation in various passbands, the impact of metallicity on both the zero-point and slope of those relations, and the effects of photometric contamination (blending with other stars) and a changing (typically unknown) extinction law on Cepheid distances. All these topics are actively debated in the literature.
These unresolved matters have resulted in cited values for the Hubble constant (established from Classical Cepheids) ranging between 60 km/s/Mpc and 80 km/s/Mpc. Resolving this discrepancy is one of the foremost problems in astronomy since the cosmological parameters of the Universe may be constrained by supplying a precise value of the Hubble constant. Uncertainties have diminished over the years, due in part to discoveries such as RS Puppis.
Delta Cephei is also of particular importance as a calibrator of the Cepheid period-luminosity relation since its distance is among the most precisely established for a Cepheid, partly because it is a member of a star cluster and because precise parallaxes are available from the Hubble, Hipparcos, and Gaia space telescopes. The accuracy of parallax distance measurements to Cepheid variables and other bodies within 7,500 light-years is vastly improved by comparing images from Hubble taken six months apart, from opposite points in the Earth's orbit. (Between two such observations 2 AU apart, a star at a distance of 7,500 light-years = 2,300 parsecs would appear to move through an angle of 2/2300 arcseconds = 2 × 10⁻⁷ degrees, the resolution limit of the available telescopes.)
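The parenthetical figure can be checked with the numbers given in the text alone (by definition of the parsec, a 1 AU baseline at 1 pc subtends 1 arcsecond); a short sketch:

```python
distance_pc = 7500 / 3.2616   # 7,500 light-years expressed in parsecs (~2,300 pc)
baseline_au = 2.0             # observations taken from opposite points of Earth's orbit

# The apparent shift in arcseconds is simply baseline (AU) / distance (pc).
shift_arcsec = baseline_au / distance_pc
shift_degrees = shift_arcsec / 3600

print(f"distance ~ {distance_pc:.0f} pc")
print(f"shift    ~ {shift_arcsec:.2e} arcsec = {shift_degrees:.1e} degrees")  # about 2e-7 degrees
```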
Pulsation model
The accepted explanation for the pulsation of Cepheids is called the Eddington valve, or "κ-mechanism", where the Greek letter κ (kappa) is the usual symbol for the gas opacity.
Helium is the gas thought to be most active in the process. Doubly ionized helium (helium whose atoms are missing both electrons) is more opaque than singly ionized helium. As helium is heated, its temperature rises until it reaches the point at which double ionisation spontaneously occurs and is sustained throughout the layer, in much the same way a fluorescent tube 'strikes'. At the dimmest part of a Cepheid's cycle, this ionized gas in the outer layers of the star is relatively opaque, and so is heated by the star's radiation and, due to the increasing temperature, begins to expand. As it expands, it cools but remains ionised until another threshold is reached, at which point double ionization can no longer be sustained and the layer becomes singly ionized and hence more transparent, allowing radiation to escape. The expansion then stops and reverses due to the star's gravitational attraction. The star is held in either the expanding or the contracting state by the hysteresis generated by the doubly ionized helium, and it flip-flops indefinitely between the two states, reversing each time the upper or lower threshold is crossed. This process is roughly analogous to a relaxation oscillator in electronics.
In 1879, August Ritter (1826–1908) demonstrated that the adiabatic radial pulsation period for a homogeneous sphere is related to its surface gravity and radius through the relation:

$\Pi = k \sqrt{\frac{R}{g}}$

where k is a proportionality constant. Now, since the surface gravity is related to the sphere mass and radius through the relation:

$g = \frac{GM}{R^2}$

one finally obtains:

$\Pi \sqrt{\frac{M}{R^3}} = Q$

where Q is a constant, called the pulsation constant.
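Rearranged, the relation says the period scales as sqrt(R³/M). A small illustrative evaluation in solar units is sketched below; the value of Q used (about 0.04 days for fundamental-mode pulsation) is an assumed order-of-magnitude figure, not a value taken from this article.

```python
import math

# Period-mean-density relation in solar units: Pi = Q * sqrt(R^3 / M),
# giving Pi in days when Q is expressed in days. Q ~ 0.04 d is an assumed,
# order-of-magnitude value for fundamental-mode pulsators, used here only
# to illustrate the scaling.
def pulsation_period_days(radius_rsun: float, mass_msun: float, q_days: float = 0.04) -> float:
    return q_days * math.sqrt(radius_rsun ** 3 / mass_msun)

# Example: a classical Cepheid of roughly 5 solar masses and 50 solar radii.
print(f"Period ~ {pulsation_period_days(50.0, 5.0):.1f} days")
```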
Examples
Classical Cepheids include: Eta Aquilae, Zeta Geminorum, Beta Doradus, RT Aurigae, Polaris, as well as Delta Cephei.
Type II Cepheids include: W Virginis, Kappa Pavonis and BL Herculis.
Anomalous Cepheids include: XZ Ceti (overtone pulsation mode) and BL Boötis.
| Physical sciences | Stellar astronomy | null |
158681 | https://en.wikipedia.org/wiki/Aircraft%20engine | Aircraft engine | An aircraft engine, often referred to as an aero engine, is the power component of an aircraft propulsion system. Flight by aircraft using such power components is referred to as powered flight. Most aircraft engines are either piston engines or gas turbines, although a few have been rocket powered and in recent years many small UAVs have used electric motors.
Manufacturing industry
In commercial aviation the major Western manufacturers of turbofan engines are Pratt & Whitney (a subsidiary of Raytheon Technologies), General Electric, Rolls-Royce, and CFM International (a joint venture of Safran Aircraft Engines and General Electric). Russian manufacturers include the United Engine Corporation, Aviadvigatel and Klimov. Aeroengine Corporation of China was formed in 2016 with the merger of several smaller companies.
The largest manufacturer of turboprop engines for general aviation is Pratt & Whitney. General Electric announced its entrance into the market in 2015.
Development history
1848: John Stringfellow made a steam engine for a 10-foot wingspan model aircraft which achieved the first powered flight, albeit with negligible payload.
1903: Charlie Taylor built an inline engine, mostly of aluminum, for the Wright Flyer (12 horsepower).
1903: Manly-Balzer engine sets standards for later radial engines.
1906: Léon Levavasseur produces a successful water-cooled V8 engine for aircraft use.
1908: René Lorin patents a design for the ramjet engine.
1908: Louis Seguin designed the Gnome Omega, the world's first rotary engine to be produced in quantity. In 1909 a Gnome powered Farman III aircraft won the prize for the greatest non-stop distance flown at the Reims Grande Semaine d'Aviation setting a world record for endurance of .
1910: Coandă-1910, an unsuccessful ducted fan aircraft exhibited at Paris Aero Salon, powered by a piston engine. The aircraft never flew, but a patent was filed for routing exhaust gases into the duct to augment thrust.
1914: Auguste Rateau suggests using exhaust-powered compressor – a turbocharger – to improve high-altitude performance; not accepted after the tests
1915: The Mercedes D.VI - an eighteen-cylinder liquid-cooled W-18 type aircraft engine - (517 hp/380 kW) was the most powerful engine during WW1.
1917–18: The Idflieg-numbered R.30/16 example of the Imperial German Luftstreitkräfte's Zeppelin-Staaken R.VI heavy bomber becomes the earliest known supercharger-equipped aircraft to fly, with a Mercedes D.II straight-six engine in the central fuselage driving a Brown-Boveri mechanical supercharger for the R.30/16's four Mercedes D.IVa engines.
1918: Sanford Alexander Moss picks up Rateau's idea and creates the first successful turbocharger
1926: Armstrong Siddeley Jaguar IV (S), the first series-produced supercharged engine for aircraft use; two-row radial with a gear-driven centrifugal supercharger.
1930: Frank Whittle submitted his first patent for a turbojet engine.
June 1939: Heinkel He 176 is the first successful aircraft to fly powered solely by a liquid-fueled rocket engine.
August 1939: Heinkel HeS 3 turbojet propels the pioneering German Heinkel He 178 aircraft.
1940: Jendrassik Cs-1, the world's first run of a turboprop engine. It is not put into service.
1943: Daimler-Benz DB 670, the first turbofan, runs
1944: Messerschmitt Me 163B Komet, the world's first rocket-propelled combat aircraft deployed.
1945: First turboprop-powered aircraft flies, a modified Gloster Meteor with two Rolls-Royce Trent engines.
1947: Bell X-1 rocket-propelled aircraft exceeds the speed of sound.
1948: 100 shp Turbomeca 782, the first turboshaft engine to be applied to aircraft use; in 1950 used to develop the larger Turbomeca Artouste.
1949: Leduc 010, the world's first ramjet-powered aircraft flight.
1950: Rolls-Royce Conway, the world's first production turbofan, enters service.
1968: General Electric TF39 high bypass turbofan enters service delivering greater thrust and much better efficiency.
2002: HyShot scramjet flew in dive.
2004: NASA X-43, the first scramjet to maintain altitude.
2020: Pipistrel E-811 is the first electric aircraft engine to be awarded a type certificate by EASA. It powers the Pipistrel Velis Electro, the first fully electric EASA type-certified aeroplane.
Shaft engines
Reciprocating (piston) engines
In-line engine
In this section, for clarity, the term "inline engine" refers only to engines with a single row of cylinders, as in automotive usage. In aviation terms, however, the phrase "inline engine" also covers V-type and opposed engines (as described below) and is not limited to engines with a single row of cylinders; that broader usage serves mainly to differentiate such engines from radial engines.
A straight engine typically has an even number of cylinders, but there are instances of three- and five-cylinder engines. The greatest advantage of an inline engine is that it allows the aircraft to be designed with a low frontal area to minimize drag. If the engine crankshaft is located above the cylinders, it is called an inverted inline engine: this allows the propeller to be mounted high up to increase ground clearance, enabling shorter landing gear. The disadvantages of an inline engine include a poor power-to-weight ratio, because the crankcase and crankshaft are long and thus heavy. An in-line engine may be either air-cooled or liquid-cooled, but liquid-cooling is more common because it is difficult to get enough air-flow to cool the rear cylinders directly.
Inline engines were common in early aircraft; one was used in the Wright Flyer, the aircraft that made the first controlled powered flight. However, the inherent disadvantages of the design soon became apparent, and the inline design was abandoned, becoming a rarity in modern aviation.
For other configurations of aviation inline engine, such as X-engines, U-engines, H-engines, etc., see Inline engine (aeronautics).
V-type engine
Cylinders in this engine are arranged in two in-line banks, typically tilted 60–90 degrees apart from each other and driving a common crankshaft. The vast majority of V engines are water-cooled. The V design provides a higher power-to-weight ratio than an inline engine, while still providing a small frontal area. Perhaps the most famous example of this design is the legendary Rolls-Royce Merlin engine, a 27-litre (1649 in3) 60° V12 engine used in, among others, the Spitfires that played a major role in the Battle of Britain.
Horizontally opposed engine
A horizontally opposed engine, also called a flat or boxer engine, has two banks of cylinders on opposite sides of a centrally located crankcase. The engine is either air-cooled or liquid-cooled, but air-cooled versions predominate. Opposed engines are mounted with the crankshaft horizontal in airplanes, but may be mounted with the crankshaft vertical in helicopters. Due to the cylinder layout, reciprocating forces tend to cancel, resulting in a smooth running engine. Opposed-type engines have high power-to-weight ratios because they have a comparatively small, lightweight crankcase. In addition, the compact cylinder arrangement reduces the engine's frontal area and allows a streamlined installation that minimizes aerodynamic drag. These engines always have an even number of cylinders, since a cylinder on one side of the crankcase "opposes" a cylinder on the other side.
Opposed, air-cooled four- and six-cylinder piston engines are by far the most common engines used in small general aviation aircraft requiring up to per engine. Aircraft that require more than per engine tend to be powered by turbine engines.
H configuration engine
An H configuration engine is essentially a pair of horizontally opposed engines placed together, with the two crankshafts geared together.
Radial engine
This type of engine has one or more rows of cylinders arranged around a centrally located crankcase. Each row generally has an odd number of cylinders to produce smooth operation. A radial engine has only one crank throw per row and a relatively small crankcase, resulting in a favorable power-to-weight ratio. Because the cylinder arrangement exposes a large amount of the engine's heat-radiating surfaces to the air and tends to cancel reciprocating forces, radials tend to cool evenly and run smoothly. The lower cylinders, which are under the crankcase, may collect oil when the engine has been stopped for an extended period. If this oil is not cleared from the cylinders prior to starting the engine, serious damage due to hydrostatic lock may occur.
Most radial engines have the cylinders arranged evenly around the crankshaft, although some early engines, sometimes called semi-radials or fan configuration engines, had an uneven arrangement. The best known engine of this type is the Anzani engine, which was fitted to the Bleriot XI used for the first flight across the English Channel in 1909. This arrangement had the drawback of needing a heavy counterbalance for the crankshaft, but was used to avoid the spark plugs oiling up.
In military aircraft designs, the large frontal area of the engine acted as an extra layer of armor for the pilot. Also air-cooled engines, without vulnerable radiators, are slightly less prone to battle damage, and on occasion would continue running even with one or more cylinders shot away. However, the large frontal area also resulted in an aircraft with an aerodynamically inefficient increased frontal area.
Rotary engine
Rotary engines have the cylinders in a circle around the crankcase, as in a radial engine, (see above), but the crankshaft is fixed to the airframe and the propeller is fixed to the engine case, so that the crankcase and cylinders rotate. The advantage of this arrangement is that a satisfactory flow of cooling air is maintained even at low airspeeds, retaining the weight advantage and simplicity of a conventional air-cooled engine without one of their major drawbacks.
The first practical rotary engine was the Gnome Omega designed by the Seguin brothers and first flown in 1909. Its relative reliability and good power to weight ratio changed aviation dramatically. Before the first World War most speed records were gained using Gnome-engined aircraft, and in the early years of the war rotary engines were dominant in aircraft types for which speed and agility were paramount. To increase power, engines with two rows of cylinders were built.
However, the gyroscopic effects of the heavy rotating engine produced handling problems in aircraft and the engines also consumed large amounts of oil since they used total loss lubrication, the oil being mixed with the fuel and ejected with the exhaust gases. Castor oil was used for lubrication, since it is not soluble in petrol, and the resultant fumes were nauseating to the pilots. Engine designers had always been aware of the many limitations of the rotary engine so when the static style engines became more reliable and gave better specific weights and fuel consumption, the days of the rotary engine were numbered.
Wankel engine
The Wankel is a type of rotary engine. The Wankel engine is about one half the weight and size of a traditional four-stroke cycle piston engine of equal power output, and much lower in complexity. In an aircraft application, the power-to-weight ratio is very important, making the Wankel engine a good choice. Because the engine is typically constructed with an aluminium housing and a steel rotor, and aluminium expands more than steel when heated, a Wankel engine does not seize when overheated, unlike a piston engine. This is an important safety factor for aeronautical use. Considerable development of these designs started after World War II, but at the time the aircraft industry favored the use of turbine engines. It was believed that turbojet or turboprop engines could power all aircraft, from the largest to smallest designs. The Wankel engine did not find many applications in aircraft, but was used by Mazda in a popular line of sports cars. The French company Citroën developed a Wankel-powered helicopter in the 1970s.
In modern times the Wankel engine has been used in motor gliders where the compactness, light weight, and smoothness are crucially important.
The now-defunct Staverton-based firm MidWest designed and produced single- and twin-rotor aero engines, the MidWest AE series. These engines were developed from the motor in the Norton Classic motorcycle. The twin-rotor version was fitted into ARV Super2s and the Rutan Quickie. The single-rotor engine was put into a Chevvron motor glider and into the Schleicher ASH motor-gliders. After the demise of MidWest, all rights were sold to Diamond of Austria, who have since developed a MkII version of the engine.
As a cost-effective alternative to certified aircraft engines, some Wankel engines, removed from automobiles and converted to aviation use, have been fitted in homebuilt experimental aircraft. Mazda units with outputs ranging from to can be a fraction of the cost of traditional engines. Such conversions first took place in the early 1970s; as of 10 December 2006, the National Transportation Safety Board had only seven reports of incidents involving aircraft with Mazda engines, and none of these was a failure due to design or manufacturing flaws.
Combustion cycles
The most common combustion cycle for aero engines is the four-stroke with spark ignition. Two-stroke spark ignition has also been used for small engines, while the compression-ignition diesel engine is seldom used.
Starting in the 1930s attempts were made to produce a practical aircraft diesel engine. In general, Diesel engines are more reliable and much better suited to running for long periods of time at medium power settings. The lightweight alloys of the 1930s were not up to the task of handling the much higher compression ratios of diesel engines, so they generally had poor power-to-weight ratios and were uncommon for that reason, although the Clerget 14F Diesel radial engine (1939) has the same power to weight ratio as a gasoline radial. Improvements in Diesel technology in automobiles (leading to much better power-weight ratios), the Diesel's much better fuel efficiency and the high relative taxation of AVGAS compared to Jet A1 in Europe have all seen a revival of interest in the use of diesels for aircraft. Thielert Aircraft Engines converted Mercedes Diesel automotive engines, certified them for aircraft use, and became an OEM provider to Diamond Aviation for their light twin. Financial problems have plagued Thielert, so Diamond's affiliate — Austro Engine — developed the new AE300 turbodiesel, also based on a Mercedes engine. Competing new Diesel engines may bring fuel efficiency and lead-free emissions to small aircraft, representing the biggest change in light aircraft engines in decades.
Power turbines
Turboprop
While military fighters require very high speeds, many civil airplanes do not. Yet, civil aircraft designers wanted to benefit from the high power and low maintenance that a gas turbine engine offered. Thus was born the idea to mate a turbine engine to a traditional propeller. Because gas turbines optimally spin at high speed, a turboprop features a gearbox to lower the speed of the shaft so that the propeller tips don't reach supersonic speeds. Often the turbines that drive the propeller are separate from the rest of the rotating components so that they can rotate at their own best speed (referred to as a free-turbine engine). A turboprop is very efficient when operated within the realm of cruise speeds it was designed for, which is typically .
Turboshaft
Turboshaft engines are used primarily for helicopters and auxiliary power units. A turboshaft engine is similar to a turboprop in principle, but in a turboprop the propeller is supported by the engine and the engine is bolted to the airframe: in a turboshaft, the engine does not provide any direct physical support to the helicopter's rotors. The rotor is connected to a transmission which is bolted to the airframe, and the turboshaft engine drives the transmission. The distinction is seen by some as slim, as in some cases aircraft companies make both turboprop and turboshaft engines based on the same design.
Electric power
A number of electrically powered aircraft, such as the QinetiQ Zephyr, have been designed since the 1960s. Some are used as military drones. In France in late 2007, a conventional light aircraft powered by an 18 kW electric motor using lithium polymer batteries was flown, covering more than , the first electric airplane to receive a certificate of airworthiness.
On 18 May 2020, the Pipistrel E-811 was the first electric aircraft engine to be awarded a type certificate by EASA for use in general aviation. The E-811 powers the Pipistrel Velis Electro.
Limited experiments with solar electric propulsion have been performed, notably the manned Solar Challenger and Solar Impulse and the unmanned NASA Pathfinder aircraft.
Many large companies, such as Siemens, are developing high-performance electric motors for aircraft use; SAE has also reported new developments in components such as pure copper-core electric motors with improved efficiency. A hybrid system, serving as an emergency back-up and providing added power for take-off, is offered for sale by Axter Aerospace, Madrid, Spain.
Small multicopter UAVs are almost always powered by electric motors.
Reaction engines
Reaction engines generate the thrust to propel an aircraft by ejecting the exhaust gases at high velocity from the engine, the resultant reaction of forces driving the aircraft forwards. The most common reaction propulsion engines flown are turbojets, turbofans and rockets. Other types such as pulsejets, ramjets, scramjets and pulse detonation engines have also flown. In jet engines the oxygen necessary for fuel combustion comes from the air, while rockets carry an oxidizer (usually oxygen in some form) as part of the fuel load, permitting their use in space.
Jet turbines
Turbojet
A turbojet is a type of gas turbine engine that was originally developed for military fighters during World War II. A turbojet is the simplest of all aircraft gas turbines. It consists of a compressor to draw air in and compress it, a combustion section where fuel is added and ignited, one or more turbines that extract power from the expanding exhaust gases to drive the compressor, and an exhaust nozzle that accelerates the exhaust gases out the back of the engine to create thrust. When turbojets were introduced, the top speed of fighter aircraft equipped with them was at least 100 miles per hour faster than competing piston-driven aircraft. In the years after the war, the drawbacks of the turbojet gradually became apparent. Below about Mach 2, turbojets are very fuel inefficient and create tremendous amounts of noise. Early designs also respond very slowly to power changes, a fact that killed many experienced pilots when they attempted the transition to jets. These drawbacks eventually led to the downfall of the pure turbojet, and only a handful of types are still in production. The last airliner that used turbojets was the Concorde, whose Mach 2 airspeed permitted the engine to be highly efficient.
Turbofan
A turbofan engine is much the same as a turbojet, but with an enlarged fan at the front that provides thrust in much the same way as a ducted propeller, resulting in improved fuel efficiency. Though the fan creates thrust like a propeller, the surrounding duct frees it from many of the restrictions that limit propeller performance. This operation is a more efficient way to provide thrust than simply using the jet nozzle alone, and turbofans are more efficient than propellers in the transonic range of aircraft speeds and can operate in the supersonic realm. A turbofan typically has extra turbine stages to turn the fan. Turbofans were among the first engines to use multiple spools (concentric shafts that are free to rotate at their own speed) to let the engine react more quickly to changing power requirements. Turbofans are coarsely split into low-bypass and high-bypass categories. Bypass air flows through the fan, but around the jet core, not mixing with fuel and burning. The ratio of this air to the amount of air flowing through the engine core is the bypass ratio. Low-bypass engines are preferred for military applications such as fighters due to high thrust-to-weight ratio, while high-bypass engines are preferred for civil use for good fuel efficiency and low noise. High-bypass turbofans are usually most efficient when the aircraft is traveling at , the cruise speed of most large airliners. Low-bypass turbofans can reach supersonic speeds, though normally only when fitted with afterburners.
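Since the bypass ratio is simply the mass flow ducted around the core divided by the mass flow through the core, it can be computed directly; the flow figures in this sketch are made-up illustrative numbers, not data for any particular engine.

```python
def bypass_ratio(bypass_flow_kg_s: float, core_flow_kg_s: float) -> float:
    """Bypass ratio = air mass flow around the core / air mass flow through the core."""
    return bypass_flow_kg_s / core_flow_kg_s

# Illustrative (made-up) mass flows for a high-bypass and a low-bypass engine.
print(f"high-bypass example: {bypass_ratio(1080.0, 120.0):.1f}")  # about 9:1
print(f"low-bypass example:  {bypass_ratio(36.0, 60.0):.1f}")     # about 0.6:1
```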
Advanced technology engine
The term advanced technology engine refers to the modern generation of jet engines. The principle is that a turbine engine will function more efficiently if the various sets of turbines can revolve at their individual optimum speeds, instead of at the same speed. The true advanced technology engine has a triple spool, meaning that instead of having a single drive shaft, there are three, in order that the three sets of blades may revolve at different speeds. An interim state is a twin-spool engine, allowing only two different speeds for the turbines.
Pulsejets
Pulsejets are mechanically simple devices that—in a repeating cycle—draw air through a no-return valve at the front of the engine into a combustion chamber and ignite it. The combustion forces the exhaust gases out the back of the engine. It produces power as a series of pulses rather than as a steady output, hence the name. The only application of this type of engine was the German unmanned V1 flying bomb of World War II. Though the same engines were also used experimentally for ersatz fighter aircraft, the extremely loud noise generated by the engines caused mechanical damage to the airframe that was sufficient to make the idea unworkable.
Gluhareff Pressure Jet
The Gluhareff Pressure Jet (or tip jet) is a type of jet engine that, like a valveless pulsejet, has no moving parts. The engine works by means of a coiled pipe in the combustion chamber that superheats the fuel (propane) before it is injected into the air-fuel inlet. In the combustion chamber, the fuel/air mixture ignites and burns, creating thrust as it leaves through the exhaust pipe. Induction and compression of the fuel/air mixture are accomplished both by the pressure of the propane as it is injected and by the sound waves created by combustion acting on the intake stacks. It was intended as a power plant for personal helicopters and compact aircraft such as microlights.
Rocket
A few aircraft have used rocket engines for main thrust or attitude control, notably the Bell X-1 and North American X-15.
Rocket engines are not used for most aircraft as the energy and propellant efficiency is very poor, but have been employed for short bursts of speed and takeoff. Where fuel/propellant efficiency is of lesser concern, rocket engines can be useful because they produce very large amounts of thrust and weigh very little.
Rocket turbine engine
A rocket turbine engine is a combination of two types of propulsion engines: a liquid-propellant rocket and a turbine jet engine. Its power-to-weight ratio is a little higher than that of a regular jet engine, and it works at higher altitudes.
Precooled jet engines
For very high supersonic/low hypersonic flight speeds, inserting a cooling system into the air duct of a hydrogen jet engine permits greater fuel injection at high speed and obviates the need for the duct to be made of refractory or actively cooled materials. This greatly improves the thrust/weight ratio of the engine at high speed.
It is thought that this design of engine could permit sufficient performance for antipodal flight at Mach 5, or even permit a single stage to orbit vehicle to be practical. The hybrid air-breathing SABRE rocket engine is a pre-cooled engine under development.
Piston-turbofan hybrid
At the April 2018 ILA Berlin Air Show, Munich-based research institute :de:Bauhaus Luftfahrt presented a high-efficiency composite cycle engine for 2050, combining a geared turbofan with a piston engine core.
The 2.87 m diameter, 16-blade fan gives a 33.7 ultra-high bypass ratio and is driven by a geared low-pressure turbine, but the high-pressure compressor drive comes from a piston engine with two banks of 10 pistons and no high-pressure turbine, increasing efficiency with non-stationary isochoric-isobaric combustion for higher peak pressures and temperatures.
The 11,200 lb (49.7 kN) engine could power a 50-seat regional jet.
Its cruise TSFC would be 11.5 g/kN/s (0.406 lb/lbf/hr) for an overall engine efficiency of 48.2%, for a burner temperature of , an overall pressure ratio of 38 and a peak pressure of .
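The two TSFC figures quoted above are the same quantity in different unit systems, which can be verified with a short conversion (using 1 kg = 2.2046 lb and 1 kN = 224.81 lbf):

```python
tsfc_g_per_kN_s = 11.5

# Convert g/(kN*s) to lb/(lbf*h): scale to kg per hour, then to pounds and pounds-force.
kg_per_kN_h = tsfc_g_per_kN_s * 3600 / 1000      # 41.4 kg/(kN*h)
lb_per_lbf_h = kg_per_kN_h * 2.2046 / 224.81     # pounds of fuel per pound-force of thrust per hour

print(f"{tsfc_g_per_kN_s} g/kN/s ~ {lb_per_lbf_h:.3f} lb/lbf/hr")  # ~0.406, matching the text
```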
Although engine weight increases by 30%, aircraft fuel consumption is reduced by 15%.
Sponsored by the European Commission under Framework 7 project , Bauhaus Luftfahrt, MTU Aero Engines and GKN Aerospace presented the concept in 2015, raising the overall engine pressure ratio to over 100 for a 15.2% fuel burn reduction compared to 2025 engines.
Engine position numbering
On multi-engine aircraft, engine positions are numbered from left to right from the point of view of the pilot looking forward, so for example on a four-engine aircraft such as the Boeing 747, engine No. 1 is on the left side, farthest from the fuselage, while engine No. 3 is on the right side nearest to the fuselage.
In the case of the twin-engine English Electric Lightning, which has two fuselage-mounted jet engines one above the other, engine No. 1 is below and to the front of engine No. 2, which is above and behind.
In the Cessna 337 Skymaster, a push-pull twin-engine airplane, engine No. 1 is the one at the front of the fuselage, while engine No. 2 is aft of the cabin.
Fuel
Aircraft reciprocating (piston) engines are typically designed to run on aviation gasoline. Avgas has a higher octane rating than automotive gasoline to allow higher compression ratios, power output, and efficiency at higher altitudes. Currently the most common Avgas is 100LL. This refers to the octane rating (100 octane) and the lead content (LL = low lead, relative to the historic levels of lead in pre-regulation Avgas).
Refineries blend Avgas with tetraethyllead (TEL) to achieve these high octane ratings, a practice that governments no longer permit for gasoline intended for road vehicles. The shrinking supply of TEL and the possibility of environmental legislation banning its use have made a search for replacement fuels for general aviation aircraft a priority for pilots’ organizations.
Turbine engines and aircraft diesel engines burn various grades of jet fuel. Jet fuel is a relatively less volatile petroleum derivative based on kerosene, but certified to strict aviation standards, with additional additives.
Model aircraft typically use nitro engines (also known as "glow engines" due to the use of a glow plug) powered by glow fuel, a mixture of methanol, nitromethane, and lubricant. Electrically powered model airplanes and helicopters are also commercially available. Small multicopter UAVs are almost always powered by electricity, but larger gasoline-powered designs are under development.
| Technology | Aviation | null |
158788 | https://en.wikipedia.org/wiki/Carbonyl%20group | Carbonyl group | In organic chemistry, a carbonyl group is a functional group with the formula C=O, composed of a carbon atom double-bonded to an oxygen atom, and it is divalent at the C atom. It is common to several classes of organic compounds (such as aldehydes, ketones and carboxylic acids), as part of many larger functional groups. A compound containing a carbonyl group is often referred to as a carbonyl compound.
The term carbonyl can also refer to carbon monoxide as a ligand in an inorganic or organometallic complex (a metal carbonyl, e.g. nickel carbonyl).
The remainder of this article concerns itself with the organic chemistry definition of carbonyl, such that carbon and oxygen share a double bond.
Carbonyl compounds
In organic chemistry, a carbonyl group characterizes the following types of compounds:
Other organic carbonyls are urea and the carbamates, the derivatives of acyl chlorides chloroformates and phosgene, carbonate esters, thioesters, lactones, lactams, hydroxamates, and isocyanates. Examples of inorganic carbonyl compounds are carbon dioxide and carbonyl sulfide.
A special group of carbonyl compounds are dicarbonyl compounds, which can exhibit special properties.
Structure and reactivity
For organic compounds, the length of the C-O bond does not vary widely from 120 picometers. Inorganic carbonyls have shorter C-O distances: CO, 113; CO2, 116; and COCl2, 116 pm.
The carbonyl carbon is typically electrophilic. A qualitative order of electrophilicity is RCHO (aldehydes) > R2CO (ketones) > RCO2R' (esters) > RCONH2 (amides). A variety of nucleophiles attack, breaking the carbon-oxygen double bond.
Interactions between carbonyl groups and other substituents were found in a study of collagen. Substituents can affect carbonyl groups by addition or subtraction of electron density by means of a sigma bond. ΔHσ values are much greater when the substituents on the carbonyl group are more electronegative than carbon.
The polarity of the C=O bond also enhances the acidity of any adjacent C-H bonds. Due to the positive charge on carbon and the negative charge on oxygen, carbonyl groups are subject to additions and/or nucleophilic attacks. A variety of nucleophiles attack, breaking the carbon-oxygen double bond and leading to addition-elimination reactions. Nucleophilic reactivity is often proportional to the basicity of the nucleophile, and as nucleophilicity increases, the stability within a carbonyl compound decreases. The pKa values of acetaldehyde and acetone are 16.7 and 19, respectively.
Spectroscopy
Infrared spectroscopy: the C=O double bond absorbs infrared light at wavenumbers between approximately 1600–1900 cm−1 (5263 nm to 6250 nm); a quick conversion check is sketched after this list. The exact location of the absorption is well understood with respect to the geometry of the molecule. This absorption is known as the "carbonyl stretch" when displayed on an infrared absorption spectrum. In addition, the ultraviolet-visible spectrum of propanone in water shows a carbonyl absorption at 257 nm.
Nuclear magnetic resonance: the C=O double-bond exhibits different resonances depending on surrounding atoms, generally a downfield shift. The 13C NMR of a carbonyl carbon is in the range of 160–220 ppm.
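The wavelength range given for the carbonyl stretch follows from the reciprocal relationship between wavenumber and wavelength; a minimal check:

```python
def wavenumber_to_nm(wavenumber_cm_inv: float) -> float:
    """Convert a wavenumber in cm^-1 to a wavelength in nanometres (lambda = 1 / wavenumber)."""
    return 1.0e7 / wavenumber_cm_inv  # 1 cm = 1e7 nm

for nu in (1600, 1900):
    print(f"{nu} cm^-1 -> {wavenumber_to_nm(nu):.0f} nm")
# 1600 cm^-1 -> 6250 nm; 1900 cm^-1 -> 5263 nm, matching the range quoted above.
```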
| Physical sciences | Concepts: General | Chemistry |
9270823 | https://en.wikipedia.org/wiki/Protocarnivorous%20plant | Protocarnivorous plant | A protocarnivorous plant (sometimes also paracarnivorous, subcarnivorous, or borderline carnivore), according to some definitions, traps and kills insects or other animals but lacks the ability to either directly digest or absorb nutrients from its prey like a carnivorous plant. The morphological adaptations such as sticky trichomes or pitfall traps of protocarnivorous plants parallel the trap structures of confirmed carnivorous plants.
Some authors prefer the term "protocarnivorous" because it implies that these plants are on the evolutionary path to true carnivory, whereas others oppose the term for the same reason. The same problem arises with "subcarnivorous". Donald Schnell, author of the book Carnivorous Plants of the United States and Canada, prefers the term "paracarnivorous" for a less rigid definition of carnivory that can include many of the possible carnivorous plants.
The demarcation between carnivorous and protocarnivorous is blurred by the lack of a strict definition of botanical carnivory and ambiguous academic literature on the subject. Many examples of protocarnivorous plants exist, some of which are counted among the ranks of true carnivorous plants as a matter of historical preference. Further research into these plants' carnivorous adaptations may reveal that a few protocarnivorous plants do meet the more rigid definition of a carnivorous plant.
Historical observations
Historical observations of the carnivorous syndrome in plant species have been restricted to the more obvious examples of carnivory, such as the active trapping mechanisms of Drosera (the sundews) and Dionaea (Venus flytrap), though authors have often noted speculation about other species that may not be so obviously carnivorous. In one of the earlier publications on carnivorous plants, Charles Darwin suggested that many plants that have developed adhesive glands, such as Erica tetralix, Mirabilis longifolia, Pelargonium zonale, Primula sinensis, and Saxifraga umbrosa, may indeed be carnivorous, but little research had been done on them. Darwin himself only mentioned these species in passing and did not follow through with any investigation. Adding to the small but growing list, Francis Lloyd provided his own list of species suspected of carnivory in his 1942 book on carnivorous plants, though these species and their potential were only mentioned in the introduction. Later, in a 1981 review of the literature, Paul Simons rediscovered Italian journal articles from the early 1900s that identified several additional sticky species that digested insect prey. Simons was surprised to find these articles lacking in the literature cited sections of many modern books and articles on carnivorous plants, suggesting that academic research has treated Lloyd's 1942 book as the authoritative and comprehensive source on pre-1942 research on the carnivorous syndrome.
Defining carnivory
Debate about what criteria a plant must meet to be considered carnivorous has yielded two proposed definitions: one with strict requirements and the other less restrictive.
The strict definition requires that a plant must possess morphological adaptations that attract prey through scent or visual cues, capture and retain prey (e.g., the waxy scales of Brocchinia reducta or downward facing hairs of Heliamphora prevent escape), digest the dead prey through enzymes produced by the plant, and absorb the products of digestion through specialized structures. The presence of commensals is also listed as strong evidence of a long evolutionary history of carnivory. By this definition, many sun pitcher plants (Heliamphora) and the cobra lily (Darlingtonia californica) would not be included on a roster of carnivorous plants because they rely on symbiotic bacteria and other organisms to produce the necessary proteolytic enzymes.
The broader definition differs mainly in including plants that do not produce their own digestive enzymes but rely on internal food webs or microbes to digest prey, such as Darlingtonia and some species of Heliamphora. The original definition of botanical carnivory, set out in Givnish et al. (1984), required a plant to exhibit an adaptation of some trait specifically for the attraction, capture, or digestion of prey while gaining a fitness advantage through the absorption of nutrients derived from said prey. Upon further analysis of genera currently considered carnivorous, botanists widened the original definition to include species that use mutualistic interactions for digestion.
Both the strict and broad definitions require absorption of the digested nutrients. The plant must receive some benefit from the carnivorous syndrome; that is, the plant must display some increase in fitness because of the nutrients obtained from its carnivorous adaptations. Increased fitness might mean improved growth rate, increased chance of survival, higher pollen production or seed set.
Degrees of carnivory
One prevailing idea is that carnivory in plants is not a black and white duality, but rather a spectrum from strict non-carnivorous photoautotrophs (a rose, for example) to fully carnivorous plants with active trapping mechanisms like those of Dionaea or Aldrovanda. However, passive traps are still considered fully carnivorous. Plants that fall between the definitions in the strict carnivorous/non-carnivorous demarcation can be defined as being protocarnivorous.
It is thought that these plants that have evolved protocarnivorous habits typically reside in habitats where there is a significant nutrient deficiency, but not the severe deficiency in nitrogen and phosphorus seen where true carnivorous plants grow. The function of the protocarnivorous habit, however, need not be directly related to lack of nutrient access. Some classic protocarnivorous plants represent convergent evolution in form but not necessarily in function. Plumbago, for example, possesses glandular trichomes on its calyces that structurally resemble the tentacles of Drosera and Drosophyllum. The function of the Plumbago tentacles is, however, disputed. Some contend that their function is to aid in pollination, adhering seeds to visiting pollinators. Others note that on some species (Plumbago auriculata), small, crawling insects have been trapped in the Plumbago's mucilage, which supports the conclusion that these tentacles could have evolved to exclude crawling insects and favor flying pollinators for greater seed dispersal or perhaps for protection against crawling insect predators.
Trapping mechanisms
There are visible parallels between the trapping mechanisms of carnivorous plants and protocarnivorous plants. Plumbago and other species with glandular trichomes resemble the flypaper traps of Drosera and Drosophyllum. The pitfall traps of protocarnivorous plants, such as some Heliamphora species and Darlingtonia californica, are so similar to those of true carnivorous plants that the only reason they may be considered protocarnivorous instead of carnivorous is that they do not produce their own digestive enzymes. There are also protocarnivorous bromeliads that form a pitfall trap in an "urn" of tightly held, rosetted leaves. Other plants produce a sticky mucilage not necessarily associated with a tentacle or glandular trichome; it is better described as a slime capable of trapping and killing insects.
Flypaper traps
Dr. George Spomer of the University of Idaho has discovered protocarnivorous activity and function in several glandular plant species, including Cerastium arvense, Ipomopsis aggregata, Heuchera cylindrica, Mimulus lewisii, Penstemon attenuata, Penstemon diphyllus, Potentilla glandulosa var. intermedia, Ribes cereum, Rosa nutkana var. hispida, Rosa woodsii var. ultramontana, Solanum tuberosum, Stellaria americana, and Stellaria jamesiana. These species tested positive for protease activity, though it is unclear whether the protease is produced by the plant or by surface microbes. Two other species evaluated by Dr. Spomer, Geranium viscosissimum and Potentilla arguta, exhibited protease activity and were further examined with 14C-labeled algal protein for nutrient absorption activity. Both of these latter species displayed an ability to digest and absorb the labeled protein.
Other plants that are considered to be protocarnivorous have sticky trichomes on some surface, such as the flower scape and bud of Stylidium and Plumbago, the bracts of Passiflora, and leaves of Roridula. The trichomes of Stylidium, which appear below the flower, have been known to trap and kill small insects since their discovery several centuries ago, but their purpose remained ambiguous. In November 2006, Dr. Douglas Darnowski published a paper describing the active digestion of proteins when they come in contact with a trichome of a Stylidium species grown in aseptic tissue culture, proving that the plant, rather than the surface microbes, was the source of protease production. Darnowski asserts in that paper that given this evidence, Stylidium species are properly called carnivorous, though in order to fulfill the strict definition of carnivory it needs to be proven that they are capable of absorbing nutrients derived from prey and that this adaptation gives the plants some competitive advantage.
The glandular hairs on the calyx of plants of the genus Plumbago have been proposed as a potential carnivorous adaptation. While these calyces have long been considered a seed dispersal mechanism, many researchers have noted the entrapment of numerous ants and other small insects on the species Plumbago auriculata, Plumbago europaea, Plumbago indica, and Plumbago zeylanica. Studies on P. auriculata and P. indica detected potential protease activity from these glands, but were inconsistent in detecting it. Energy-dispersive X-ray spectroscopy spectra of the glands on P. auriculata and P. zeylanica found that the glandular secretions were composed mainly of the elements C, O, Si, Mg, and Al. One such species, P. europaea, has also been noted to kill small birds by covering them in sticky calyces, leaving them unable to fly and subsequently causing their death. A similar sticky-seed killing mechanism has been studied in Pisonia grandis, but was concluded not to be a carnivorous adaptation.
Roridula has a more complex relationship with its prey. The plants in this genus produce sticky leaves with resin-tipped glands that look similar to those of larger Drosera. However, the resin, unlike mucilage, is unable to carry digestive enzymes. Therefore, Roridula species do not directly benefit from the insects they catch. Instead, they form a mutualistic symbiosis with species of assassin bugs that eat the trapped insects. The plant benefits from the nutrients in the bugs' feces.
Likewise, passion flowers of the section Dysosmia bear notable sticky, glandular bracts that surround the flowers and the forming fruit. While these bracts have long been discussed as a defense mechanism, studies of Passiflora foetida have investigated them for potential carnivorous abilities. A 1995 paper published in the Journal of Biosciences detailed evidence that the glandular bracts played a distinct role in defense of the flower and were also capable of digesting captured prey and absorbing the nutrients. Various authors have questioned the methods and conclusions of this paper. Further studies of the glandular bracts using histochemical tests have confirmed the presence of enzymes in both Passiflora foetida and Passiflora sublanceolata.
Various plants of the Martyniaceae family have been considered crude flypaper protocarnivores. Early publications identified the entrapment of numerous insects on the glandular hairs covering the stems and leaves of Martynia annua, Proboscidea louisiana, Proboscidea parviflora, and Ibicella lutea. Early, rudimentary studies showed that bits of food (beef and hard-boiled egg white) broke down when placed on the leaf surfaces of P. louisiana and I. lutea, respectively. Despite this, more recent studies have suggested that there are no detectable proteases on the leaves of I. lutea and P. louisiana and no detectable phosphatases or uptake of N, P, K, or Mg from dried flies placed on I. lutea and P. parviflora. Observations have suggested that there may be a digestive mutualism between carnivorous insects and the sticky plant surface similar to that of Roridula. A similar relationship has been identified in many other sticky desert plants and concluded to be a passive defense mechanism.
Pitfall traps
The pitfall traps of protocarnivorous plants are identical to those of carnivorous plants in every way except in the plant's mode of digestion. The rigid definition of carnivory in plants requires digestion of prey by enzymes produced by the plant. Given this criterion, many of the pitfall trap plants commonly considered to be carnivorous would instead be classified as protocarnivorous. However, this is highly contentious and generally not reflected in current carnivorous plant phylogenies or literature. Darlingtonia californica and several Heliamphora species do not produce their own enzymes, relying instead on an internal food web to break down the prey into absorbable nutrients.
Another pitfall trap form, unrelated to the Sarraceniaceae family, is the urn of bromeliad leaves that is formed when leaves are tightly packed together in a rosette, collecting water and trapping insects. Unlike Brocchinia reducta, which has been proven to produce at least one digestive enzyme and can therefore be considered carnivorous, the epiphytic Catopsis berteroniana has little evidence supporting the claims that it is carnivorous. It is able to attract and kill prey, and the trichomes on the surface of its leaves can absorb nutrients, but so far no enzyme activity has been detected. It may be that this plant also relies on an internal food web for soft-tissue digestion. The same could be said for Paepalanthus bromelioides, though it is a member of the Eriocaulaceae and not a bromeliad. It also forms a central water reservoir that has adaptations to attract insects. It, like C. berteroniana, produces no digestive enzymes.
Another potential protocarnivorous pitfall trap is a species of teasel, Dipsacus fullonum, which has only tentatively been suggested as a possible carnivore. A single major study has examined D. fullonum for carnivory, and it revealed no evidence of digestive enzymes or foliar nutrient absorption.
Other
Capsella bursa-pastoris, shepherd's purse, is another plant where the claim of carnivory is contested. This unique protocarnivorous plant is only capable of capturing and digesting prey during one stage of its life cycle. The seeds of the plant, when moistened, secrete a mucilage that attracts and kills prey. There is also evidence of protease activity and absorption of nutrients. More recent studies have suggested that the plants may benefit from the feeding of nematodes to the seeds, but due to a small sample size such conclusions cannot be drawn firmly. Other plants such as Descurainia pinnata, Descurainia sophia, Hirschfeldia incana, and Lepidium flavum were also noted to entrap small insects. Mucilage production by seeds is fairly common in the plant kingdom and is typically associated with root and shoot penetration. Further work to identify the nutrient fluxes of this seed-insect system in situ is required to understand any carnivorous aspects of the system.
Puya raimondii and Puya chilensis are two large arid-land bromeliads that have been suspected of being protocarnivorous plants due to their entrapment of small animals in their spiny leaves. Puya raimondii was noted to associate with numerous birds, some of which would become ensnared in the spiky foliage and die. It is hypothesized that these carcasses, as well as droppings from the birds that live amongst the leaves, are a source of nutrients upon decomposition and subsequent absorption by the foliage. Similarly, Puya chilensis was noted to ensnare livestock such as sheep, which, unless rescued, would die, decompose, and feed the plant. Despite this, the adaptations seen in Puya that lead to the ensnarement of animals seem most likely to be a defense mechanism.
Loss of carnivory
A few plants that could be considered protocarnivorous or paracarnivorous are those that once had carnivorous adaptations but appear to be evolving or have evolved away from a direct prey relationship with arthropods and rely on other sources for obtaining nutrients. One example of such a phenomenon is the pitfall trap of Nepenthes ampullaria, a tropical pitcher plant. Although it retains its ability to attract, capture, kill, and digest insect prey, this species has acquired adaptations that appear to favor digestion of leaf litter. It could potentially be referred to as a detritivore. Another tropical pitcher plant, Nepenthes lowii, is known to catch very few prey items compared to other Nepenthes. Preliminary observations suggest that this particular species may have moved away from a solely (or even primarily) carnivorous nature and be adapted to "catching" the droppings of birds feeding at its nectaries. A 2009 study found that mature N. lowii plants derived 57–100% of their foliar nitrogen from treeshrew droppings.
Utricularia purpurea, a bladderwort, comes from another genus of carnivorous plants and may have lost its appetite for carnivory, at least in part. This species can still trap and digest arthropod prey in its specialized bladder traps, but does so sparingly. Instead, it harbors a community of algae, zooplankton, and debris in the bladders, giving rise to the hypothesis that the bladders of U. purpurea favor a mutualistic interaction in place of a predator-prey relationship.
Evolution
The disciplines of ecology and evolutionary biology have presented several hypotheses on the evolution of carnivorous plants that may also apply to protocarnivorous plants. The name "protocarnivorous plant" itself suggests that these species are on their way to carnivory, though others may simply be an example of a defense-related adaptation, such as that found in Plumbago. Still others (Utricularia purpurea, Nepenthes ampullaria, and Nepenthes lowii) may be examples of carnivorous plants moving away from the carnivorous syndrome.
In his 1998 book, Interrelationship Between Insects and Plants, Pierre Jolivet only considered four species of plants to be protocarnivorous: Catopsis berteroniana, Brocchinia reducta, B. hechtioides, and Paepalanthus bromelioides. Jolivet writes, "It is important to remember that all carnivorous plants are dicots and all protocarnivorous plants are monocots," though he does not explain why, nor does he describe his reasons for excluding other dicotyledonous plants that are protocarnivorous.
| Biology and health sciences | Botany | Biology |
975495 | https://en.wikipedia.org/wiki/Street%20photography | Street photography | Street photography is photography conducted for art or inquiry that features unmediated chance encounters and random incidents within public places. It usually has the aim of capturing images at a decisive or poignant moment by careful framing and timing. Street photography overlaps widely with candid photography, although the latter can also be used in other settings, such as portrait photography and event photography.
Street photography does not necessitate the presence of a street or even the urban environment. Though people usually feature directly, street photography might be absent of people and can be of an object or environment where the image projects a decidedly human character in facsimile or aesthetic.
Street photography can focus on people and their behavior in public. In this respect, the street photographer is similar to social documentary photographers or photojournalists who also work in public places, but with the aim of capturing newsworthy events. Any of these photographers' images may capture people and property visible within or from public places, which often entails navigating ethical issues and laws of privacy, security, and property.
Much of what is regarded, stylistically and subjectively, as definitive street photography was made in the era spanning the end of the 19th century through to the late 1970s, a period which saw the emergence of portable cameras that enabled candid photography in public places.
History
Depictions of everyday public life form a genre in almost every period of world art, beginning in the pre-historic, Sumerian, Egyptian and early Buddhist art periods. Art dealing with the life of the street, whether within views of cityscapes, or as the dominant motif, appears in the West in the canon of the Northern Renaissance, Baroque, Rococo, of Romanticism, Realism, Impressionism and Post-Impressionism. With the type having been so long established in other media, it followed that photographers would also pursue the subject as soon as technology enabled them.
Nineteenth-century precursors
In 1838 or 1839 the first photograph of figures in the street was recorded by Louis-Jacques-Mandé Daguerre in one of a pair of daguerreotype views taken from his studio window of the Boulevard du Temple in Paris. The second, made at the height of the day, shows an unpopulated stretch of street, while the other was taken at about 8:00 am, and as Beaumont Newhall reports, "The Boulevard, so constantly filled with a moving throng of pedestrians and carriages was perfectly solitary, except an individual who was having his boots brushed. His feet were compelled, of course, to be stationary for some time, one being on the box of the boot black, and the other on the ground. Consequently his boots and legs were well defined, but he is without body or head, because these were in motion."
Charles Nègre was the first photographer to attain the technical sophistication required to register people in movement on the street in Paris in 1851. Photographer John Thomson, a Scotsman working with journalist and social activist Adolphe Smith, published Street Life in London in twelve monthly installments starting in February 1877. Thomson played a key role in making everyday life on the streets a significant subject for the medium.
Eugene Atget is regarded as a progenitor, not because he was the first of his kind, but as a result of the popularisation in the late 1920s of his record of Parisian streets by Berenice Abbott, who was inspired to undertake a similar documentation of New York City. As the city developed, Atget helped to promote Parisian streets as a worthy subject for photography. From the 1890s to the 1920s he mainly photographed its architecture, stairs, gardens, and windows. He did photograph some workers, but people were not his main interest.
First sold in 1925, the Leica was the first commercially successful camera to use 35 mm film. Its compactness and bright viewfinder, matched to lenses of quality (changeable on Leicas sold from 1930) helped photographers move through busy streets and capture fleeting moments.
Twentieth-century practitioners
United Kingdom
Paul Martin is considered a pioneer, making candid unposed photographs of people in London and at the seaside in the late 19th and early 20th century in order to record life. Martin is the first recorded photographer to do so in London with a disguised camera.
Mass-Observation was a social research organisation founded in 1937 which aimed to record everyday life in Britain and to record the reactions of the 'man-in-the-street' to King Edward VIII's abdication in 1936 to marry divorcée Wallis Simpson, and the succession of George VI. Humphrey Spender made photographs on the streets of the northern English industrial town of Bolton, identified for the project's publications as "Yorktown", while filmmaker Humphrey Jennings made a cinematic record in London for a parallel branch of investigation. The chief Mass-Observationists were anthropologist Tom Harrisson in Bolton and poet Charles Madge in London, and their first report was produced as the book "May the Twelfth: Mass-Observation Day-Surveys 1937 by over two hundred observers".
France
The post-war French Humanist School photographers found their subjects on the street or in the bistro. They worked primarily in black‐and‐white in available light with the popular small cameras of the day, discovering what the writer Pierre Mac Orlan (1882–1970) called the "fantastique social de la rue" (social fantastic of the street) and their style of image-making rendered romantic and poetic the way of life of ordinary European people, particularly in Paris. Between 1946 and 1957 Le Groupe des XV annually exhibited work of this kind.
Street photography formed the major content of two exhibitions at the Museum of Modern Art (MoMA) in New York curated by Edward Steichen, Five French Photographers: Brassai; Cartier-Bresson, Doisneau, Ronis, Izis in 1951 to 1952, and Post-war European Photography in 1953, which exported the concept of street photography internationally. Steichen drew on large numbers of European humanist and American humanistic photographs for his 1955 exhibition The Family of Man, proclaimed as a compassionate portrayal of a global family, which toured the world, inspiring photographers in the depiction of everyday life.
Henri Cartier-Bresson's widely admired Images à la Sauvette (1952) (the English-language edition was titled The Decisive Moment) promoted the idea of taking a picture at what he termed the "decisive moment"; "when form and content, vision and composition merged into a transcendent whole". His book inspired successive generations of photographers to make candid photographs in public places before this approach per se came to be considered déclassé in the aesthetics of postmodernism.
America
Walker Evans worked from 1938 to 1941 on a series in the New York City Subway in order to practice a pure 'record method' of photography; candid portraits of people who would unconsciously come 'into range before an impersonal fixed recording machine during a certain time period'. The recording machine was 'a hidden camera', a 35 mm Contax concealed beneath his coat, that was 'strapped to the chest and connected to a long wire strung down the right sleeve'. However, his work had little contemporary impact as due to Evans' sensitivities about the originality of his project and the privacy of his subjects, it was not published until 1966, in the book Many Are Called, with an introduction written by James Agee in 1940. The work was exhibited as Walker Evans Subway Photographs and Other Recent Acquisitions held at the National Gallery of Art, 1991–1992, accompanied by the catalogue Walker Evans: Subways and Streets.
Helen Levitt, then a teacher of young children, associated with Evans in 1938–39. She documented the transitory chalk drawings that were part of children's street culture in New York at the time, as well as the children who made them. In July 1939, MoMA's new photography section included Levitt's work in its inaugural exhibition. In 1943, Nancy Newhall curated her first solo exhibition Helen Levitt: Photographs of Children there. The photographs were ultimately published in 1987 as In The Street: chalk drawings and messages, New York City 1938–1948.
The beginnings of street photography in the United States can also be linked to those of jazz, both emerging as outspoken depictions of everyday life. This connection is visible in the work of the New York school of photography (not to be confused with the New York School). The New York school of photography was not a formal institution, but rather comprised groups of photographers in the mid-20th century based in New York City.
Robert Frank's 1958 book, The Americans, was significant; raw and often out of focus, Frank's images questioned mainstream photography of the time, "challenged all the formal rules laid down by Henri Cartier-Bresson and Walker Evans" and "flew in the face of the wholesome pictorialism and heartfelt photojournalism of American magazines like LIFE and Time". Although the photo-essay format was formative in his early years in Switzerland, Frank rejected it: "I wanted to follow my own intuition and do it my way, and not make any concession – not make a Life story". Even the work of Cartier-Bresson he regarded as insufficiently subjective: "I've always thought it was terribly important to have a point of view, and I was also sort of disappointed in him [Cartier-Bresson] that that was never in his pictures".
Frank's work thus epitomises the subjectivity of postwar American photography, as John Szarkowski prominently argued; "Minor White's magazine Aperture and Robert Frank's book The Americans were characteristic of the new work of their time in the sense that they were both uncompromisingly committed to a highly personal vision of the world". His claim for subjectivism is widely accepted, resulting more recently in Patricia Vettel-Becker's perspective on postwar street photography as highly masculine and centred on the male body, and Lili Corbus Benzer positioning Robert Frank's book as negatively prioritising 'personal vision' over social activism. Mainstream photographers in America fiercely rejected Frank's work, but the book later "changed the nature of photography, what it could say and how it could say it". It was a stepping stone for fresh photographers looking to break away from the restrictions of the old style and "remains perhaps the most influential photography book of the 20th century". Szarkowski's recognition of Frank's subjectivity led him to promote more street photography in America, such as his curation of the 1967 New Documents exhibition featuring Diane Arbus, Lee Friedlander and Garry Winogrand or of Mark Cohen's work in 1973. Both at the Museum of Modern Art (MoMA).
Individual approaches in the later twentieth and early twenty-first centuries
Inspired by Frank, in the 1960s Garry Winogrand, Lee Friedlander and Joel Meyerowitz began photographing on the streets of New York. Phil Coomes, writing for BBC News in 2013, said "For those of us interested in street photography there are a few names that stand out and one of those is Garry Winogrand"; critic Sean O'Hagan, writing in The Guardian in 2014, said "In the 1960s and 70s, he defined street photography as an attitude as well as a style – and it has laboured in his shadow ever since, so definitive are his photographs of New York."
Returning to the UK in 1965 from the US where he had met Winogrand and adopted street photography, Tony Ray-Jones turned a wry eye on often surreal groupings of British people on their holidays or participating in festivals. The acerbic comic vein of Ray-Jones' high-contrast monochromes, which before his premature death were popularized by Creative Camera (for which he conducted an interview with Brassaï), is mined more recently by Martin Parr in hyper-saturated colour.
Characteristics and distinctions
Street photography is a vast genre that can be defined in many ways, but it is often characterized by the spontaneous capturing of an unrepeatable, fleeting moment, often of the everyday goings-on of strangers. It is classically shot with wide-angle lenses (e.g. 35 mm) and usually features urban environments.
Street photography versus documentary photography
Street photography and documentary photography are similar genres of photography that often overlap while having distinct individual qualities.
Documentary photographers typically have a defined, premeditated message and an intention to record particular events in history. The gamut of the documentary approach encompasses aspects of journalism, art, education, sociology and history. In social investigation, documentary images are often intended to provoke, or to highlight the need for, societal change. Conversely, street photography is reactive and disinterested by nature and motivated by curiosity or creative inquiry, allowing it to deliver a relatively neutral depiction of the world that mirrors society, "unmanipulated" and with usually unaware subjects.
Candid street photography versus street portraits
Street photography is generally seen as unposed and candid, but there are a few street photographers who interact with strangers on the streets and take their portraits. Street portraits are unplanned portraits taken of strangers while out doing street photography; however, they are considered posed because there is interaction with the subject.
Legal concerns
The issue of street photographers taking photographs of strangers in public places without their consent (i.e. 'candid photography' by definition) for fine art purposes has been controversial. Photographing people and places in public is legal in most countries protecting freedom of expression and journalistic freedom. There are usually limits on how photos of people may be used and most countries have specific laws regarding people's privacy.
Street photography may also conflict with laws that were originally established to protect against paparazzi, defamation, or harassment and special laws will sometimes apply when taking pictures of minors.
Canada
While the common-law provinces follow the United Kingdom, with respect to the freedom to take pictures in a public place, Quebec law provides that, in most circumstances, their publication can take place only with the consent of the subjects therein.
European Union
The European Convention on Human Rights, to which all EU member states are party, establishes a right to privacy. This can result in restrictions on the publication of photography. The right to privacy is protected by Article 8 of the convention. In the context of photography, it stands at odds with the Article 10 right of freedom of expression. As such, courts will usually consider the public interest in balancing the rights through the legal test of proportionality.
France
While French law also limits photography in order to protect privacy rights, street photography can still be legal in France when pursued as an art form under certain circumstances. In one prominent case the freedom of artistic expression trumped the individual's right to privacy, but the legality will depend much on the individual case.
Germany
Germany protects the right to take photos in public, but also recognizes a "right to one's own picture". That means that even though pictures can often be taken without someone's consent, they must not be published without the permission of the person in the picture. The law also protects specifically against defamation.
This right to one's picture, however, does not extend to people who are not the main focus of the picture (e.g. who just wandered into a scene), or who are not even recognizable in the photo. It also does not usually extend to people who are public figures (e.g. politicians or celebrities).
If a picture is considered art, the courts will also consider the photographer's freedom of artistic expression, meaning that "artful" street photography can still be legally published in certain cases.
Greece
Production, publication and non-commercial sale of street photography are legal in Greece, without the need to have the consent of the shown person or persons. In Greece the right to take photographs and publish them or sell licensing rights over them as fine art or editorial content is protected by the Constitution of Greece (Article 14 and other articles), by free speech laws and by case law. Photographing the police and publishing the photographs is also legal.
Photography and video-taking are also permitted across the whole Athens Metro transport network, which is very popular among Greek street photographers.
Hungary
In Hungary, from 15 March 2014 anyone taking photographs is technically breaking the law if someone wanders into shot, under a new civil code that outlaws taking pictures without the permission of everyone in the photograph. This expands the law on consent to include the taking of photographs, in addition to their publication.
Japan
In Japan, permission, or at least a signalled intent to photograph and the absence of refusal, is needed both for photography and for publication of photos of recognisable people, even in public places. 'Hidden photography' (kakushidori; hidden, surreptitious photography), 'stolen photography' (tōsatsu; taken with no intention of getting permission) and 'fast photography' (hayayori; taken before permission and refusal can be given) are forbidden unless, in the first case, permission is obtained from the subject immediately after taking the photo. People have rights to their own images (shōzōken, droit à l'image). The law is especially strict when that which is taken, or the taking itself, is in any sense shameful. Exceptions are made for photos of famous people in public places and for news photography by registered news media outlets, where favour is given to the public's right to know.
South Africa
In South Africa, photographing people in public is legal. Reproducing and selling photographs of people is legal for editorial and limited fair use commercial purposes. There exists no case law to define what the limits on commercial use are. Civil law requires the consent of any identifiable persons for advertorial and promotional purposes. Property, including animals, does not enjoy any special consideration.
South Korea
In South Korea, taking pictures of women without their consent, even in public, is considered to be criminal sexual assault, punishable by a fine of up to 10 million won and up to 5 years' imprisonment. In July 2017, an amendment to the law was approved allowing for the chemical castration of people taking such photographs.
United Kingdom
The United Kingdom has enacted domestic law in accordance with the Human Rights Act, which limits the publication of certain photographs in the context of the news media. However, as a general rule, the taking of photographs of other people, including children, in a public place is legal, whether or not the person consents.
In terms of photographing property, in general under UK law one cannot prevent photography of private property from a public place, and in general the right to take photographs on private land upon which permission has been obtained is similarly unrestricted. However, landowners are permitted to impose any conditions they wish upon entry to a property, such as forbidding or restricting photography. There are however nuances to these broad principles, and even where photography is restricted as a condition of entry, the landowner's remedies for a breach will usually be limited to asking the photographer to leave the premises. They cannot confiscate cameras or memory cards nor can they require photographs be deleted.
United States
In the US, the protection of free speech is generally interpreted widely, and encompasses art speech, including photography. As such, street photography is exempt from right to privacy claims.
For example, the case Nussenzweig v. DiCorcia established that taking, publishing and selling street photography (including street portraits) is legal, even without the consent of the person being portrayed, because photography is protected as free speech and art by the First Amendment. However, the Court of Appeals for the State of New York upheld the Nussenzweig decision solely on the basis of the statute of limitations expiring and did not address the free speech and First Amendment arguments.
Street photography is additionally protected by court precedent. As courts regularly uphold that individuals have no right to privacy in public places, there is little, if any, legal action that can be taken against a street photographer.
Ethical concerns
Street photography's nonconsensual nature can raise concerns about privacy and autonomy.
Privacy
An invasion of privacy occurs when an individual's right to privacy is infringed upon by unwelcome intrusion into their private life, including public disclosure of private information. While a person may lose their reasonable expectation of privacy when going out in public according to court precedent, some feel that individuals should be able to control their information (such as their image) even in public. These critics would contend that it cannot be said that every person in public accepts the possibility of being photographed because assumption of risk is based on conscious consent, and might also argue that a photograph's ability to accentuate details means that it does more than just record what the public sees.
Autonomy
As the right to privacy can be seen as protecting representations of oneself and since nonconsensual use of an individual's image in street photography denies the subject control of the final image, some view street photography as taking away autonomy. When a person is not asked for consent to use their picture, they do not get to decide whether or where the picture is published or how it is viewed.
| Technology | Photography | null |
975686 | https://en.wikipedia.org/wiki/Dike%20%28geology%29 | Dike (geology) | In geology, a dike or dyke is a sheet of rock that is formed in a fracture of a pre-existing rock body. Dikes can be either magmatic or sedimentary in origin. Magmatic dikes form when magma flows into a crack then solidifies as a sheet intrusion, either cutting across layers of rock or through a contiguous mass of rock. Clastic dikes are formed when sediment fills a pre-existing crack.
Magmatic dikes
A magmatic dike is a sheet of igneous rock that cuts across older rock beds. It is formed when magma fills a fracture in the older beds and then cools and solidifies. The dike rock is usually more resistant to weathering than the surrounding rock, so that erosion exposes the dike as a natural wall or ridge. It is from these natural walls that dikes get their name.
Dikes preserve a record of the fissures through which most mafic magma (fluid magma low in silica) reaches the surface. They are studied by geologists for the clues they provide on volcanic plumbing systems. They also record ancient episodes of extension of the Earth's crust, since large numbers of dikes (dike swarms) are formed when the crust is pulled apart by tectonic forces. The dikes show the direction of extension, since they form at right angles to the direction of maximum extension.
Description
The thickness of a dike is much smaller than its other two dimensions, and the opposite walls are roughly parallel, so that a dike is more or less constant in thickness. The thickness of different dikes can range from a few millimeters to hundreds of meters, but is most typically from about a meter to a few tens of meters. The lateral extent can be tens of kilometers, and dikes with a thickness of a few tens of meters or more commonly extend for over 100 km. Most dikes are steeply dipping; in other words, they are oriented nearly vertically. Subsequent tectonic deformation may rotate the sequence of strata through which the dike propagates so that the dike becomes horizontal.
It is common for a set of dikes, each a few kilometers long, to form en echelon. This pattern is seen in the Higganum dike set of New England. This dike set consists of individual dikes that are typically four kilometers in length at the surface and up to 60 meters wide. These short segments form longer groups extending for around 10 km. The entire set of dikes forms a line extending for 250 km. Individual segments overlap, with the overlapping portions thinner, so that the combined thickness of the two overlapped portions is about the same as the thickness of a single segment. Other examples of en echelon dikes are the Inyo dike of Long Valley, California, US; the Jagged Rocks complex, Arizona, US; and the dikes of oceanic spreading centers.
Dikes range in composition from basaltic to rhyolitic, but most are basaltic. The texture is typically slightly coarser than basalt erupted at the surface, forming a rock type called diabase. The grain size varies systematically across the dike, with the coarsest grains normally at the center of the dike. Dikes formed at shallow depth commonly have a glassy or fine-grained chilled margin 1 to 5 cm thick, formed where the magma was rapidly cooled by contact with the cold surrounding rock. Shallow dikes also typically show columnar jointing perpendicular to the margins. Here the dike rock fractures into columns as it cools and contracts. These are usually 5- to 6-sided, but 3- to 4-sided columns are also common. These are fairly uniform in size within a single dike, but range from a few centimeters to over 0.3 meters across in different dikes, tending to be thicker in wider dikes. Larger columns are likely a consequence of slower cooling.
Dike rock is usually dense, with almost no vesicles (frozen bubbles), but vesicles may be seen in the shallowest part of a dike. When vesicles are present, they tend to form bands parallel to walls and are elongated in direction of flow. Likewise, phenocrysts (larger crystals) on the margins of the dike show an alignment in the direction of flow.
In contrast to dikes, which cut across the bedding of layered rock, a sill is a sheet intrusion that forms within and parallel to the bedding.
Formation
Mafic magma (fluid magma low in silica) usually reaches the surface through fissures, forming dikes.
At the shallowest depths, dikes form when magma rises into an existing fissure. In the young, shallow dikes of the Hawaiian Islands, there is no indication of forceful intrusion of magma. For example, there is little penetration of magma into the walls of dikes even when the walls consist of highly porous volcanic clinker, and little wall material breaks off into the molten magma. These fissures likely open as a result of bulging of the rock beds above a magma chamber that is being filled with magma from deeper in the crust.
However, open fractures can exist only near the surface. Magma deeper in the crust must force its way through the rock, always opening a path along a plane normal to the minimum principal stress. This is the direction in which the crust is under the weakest compression and so requires the least work to fracture. At shallow depths, where the rock is brittle, the pressurized magma progressively fractures the rock as it advances upwards. Even if the magma is only slightly pressurized compared with the surrounding rock, tremendous stress is concentrated on the tip of the propagating fracture. In effect, the magma wedges apart the brittle rock in a process called hydraulic fracture. At greater depths, where the rock is hotter and less brittle, the magma forces the rock aside along brittle shear planes oriented 35 degrees to the sides of the dike. This bulldozer-like action produces a blunter dike tip. At the greatest depths, the shear planes become ductile faults, angled 45 degrees from the sides of the dike. At depths where the rock is completely plastic, a diapir (a rising plug of magma) forms instead of a dike.
The walls of dikes often fit closely back together, providing strong evidence that the dike formed by dilatation of a fissure. However, a few large dikes, such as the 120-meter-thick Medford dike in Maine, US, or the 500-meter-thick Gardar dike in Greenland, show no dilatation. These may have formed by stoping, in which the magma fractured and disintegrated the rock at its advancing tip rather than prying the rock apart. Other dikes may have formed by metasomatism, in which fluids moving along a narrow fissure changed the chemical composition of the rock closest to the fissure.
There is an approximate relationship between the width of a dike and its maximum extent, expressed (to within a numerical factor of order one) by the formula:

w / L ≈ ΔP / (ρ vp²)

Here w is the thickness of the dike; L is its lateral extent; ΔP is the excess pressure in the magma relative to the host rock; ρ is the density of the host rock; and vp is the P-wave velocity of the host rock (essentially, the speed of sound in the rock). This formula predicts that dikes will be longer and narrower at greater depths below the surface. The ratio of thickness to length is around 0.01 to 0.001 near the surface, but at depth it ranges from 0.001 to 0.0001. A surface dike 10 meters in thickness will extend about 3 km, while a dike of similar thickness at depth will extend about 30 km. This tendency of intruding magma to form shorter fissures at shallower depths has been put forward as an explanation of en echelon dikes. However, en echelon dikes have also been explained as a consequence of the direction of minimum principal stress changing as the magma ascends from deep to shallow levels in the crust.
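As a rough illustration (not from the source; the dike thickness and the two ratio values below are assumed examples chosen from within the ranges quoted above), the lateral extent implied by a given thickness-to-length ratio can be computed directly:

```python
# Illustrative arithmetic only: lateral extent implied by the
# thickness-to-length ratios quoted above for shallow and deep dikes.
# The 10 m thickness and the ratio values are assumed example inputs.

def lateral_extent_m(thickness_m: float, thickness_to_length_ratio: float) -> float:
    """Return the lateral extent L implied by w / L = ratio."""
    return thickness_m / thickness_to_length_ratio

w = 10.0  # dike thickness in meters (example value)
for setting, ratio in [("near surface", 0.003), ("at depth", 0.0003)]:
    L = lateral_extent_m(w, ratio)
    print(f"{setting}: w = {w} m, w/L = {ratio} -> L ≈ {L / 1000:.0f} km")
```

With these assumed inputs the sketch reproduces the order of magnitude cited in the text: roughly 3 km of lateral extent near the surface and roughly 30 km at depth for a 10-meter-thick dike.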
An en echelon dike set may evolve into a single dike, with bridges connecting the formerly separate segments and horns showing former segment overlaps. In ancient dikes in deformed rock, the bridges and horns are used by geologists to determine the direction of magma flow.
Where there is rapid flow of molten magma through a fissure, the magma tends to erode the walls, either by melting the wall rock or by tearing off fragments of wall rock. This widens the fissure and increases flow. Where flow is less rapid, the magma may solidify next to the wall, narrowing the fissure and decreasing flow. This causes flow to become concentrated at a few points. At Hawaii, eruptions often begin with a curtain of fire where lava erupts along the entire length of a fissure several kilometers long. However, the length of erupting fissure diminishes over time, becoming focused on a short segment of less than half a kilometer. The minimum possible width of a dike is determined by the balance between magma movement and cooling.
Multiple and composite dikes
There may be more than one injection of magma along a given fissure. When multiple injections are all of similar composition, the dike is described as a multiple dike. However, subsequent injections are sometimes quite different in composition, and then the dike is described as a composite dike. The range of compositions in a composite dike can go all the way from diabase to granite, as is observed in some dikes of Scotland and northern Ireland.
After the initial formation of a dike, subsequent injections of magma are most likely to take place along the center of the dike. If the previous dike rock has cooled significantly, the subsequent injection can be characterized by fracturing of the old dike rock and the formation of chilled margins on the new injection.
Dike swarms
Sometimes dikes appear in swarms, consisting of several to hundreds of dikes emplaced more or less contemporaneously during a single intrusive event. Dike swarms are almost always composed of diabase and most often are associated with flood basalts of large igneous provinces. They are characteristic of divergent plate boundaries. For example, Jurassic dike swarms in New England and Paleogene swarms in the west of Scotland and running into northern England record the early opening of the Atlantic Ocean. Dike swarms are forming in the present day along the divergent plate boundary running through Iceland. Dike swarms often have a great cumulative thickness: Dikes in Iceland average 3 to 5 meters in width, but one 53-kilometer stretch of coast has about 1000 dikes with total thickness of 3 kilometers. The world's largest dike swarm is the Mackenzie dike swarm in the Northwest Territories, Canada.
Dike swarms (also called dike complexes) are exposed in the eroded rift zones of Hawaiian volcanoes. As with most other magmatic dikes, these were fissures through which lava reached the surface. The swarms are typically 2.5 to 5 km in width, with individual dikes about a meter in width. The dike swarms extend radially out from volcano summits and parallel to the long axis of the volcanic shield. Sills and stocks are occasionally present in the complexes. They are abruptly truncated at the margins of summit calderas. Typically, there are about 50 to 100 dikes per kilometer at the center of the rift zone, though the density can be as high as 500 per kilometer and the dikes then make up half the volume of the rock. The density drops to 5 to 50 per kilometer away from the center of the rift zone before abruptly dropping to very few dikes. It is likely that the number of dikes must increase with depth, reaching a typical value of 300 to 350 per kilometer at the level of the ocean floor. In some respects, these dike swarms resemble those of western Scotland associated with the flood eruptions that preceded the opening of the Atlantic Ocean.
Dikes often form as radial swarms from a central volcano or intrusion. Though they appear to originate in the central intrusion, the dikes often have a different age and composition from the intrusion. These radial swarms may have formed over the intrusion and were later cut by the rising body of magma, or the crust was already experiencing regional tension and the intrusion triggered formation of the fissures.
Sheeted dike complexes
In rock of the oceanic crust, pillow lava erupted onto the sea floor is underlain by sheeted dike complexes that preserve the conduits through which magma reached the ocean floor at mid-ocean ridges. These sheeted dikes characteristically show a chilled margin on only one side, indicating that each dike was split in half by a subsequent eruption of magma.
Ring dikes and cone sheets
Ring dikes and cone sheets are special types of dikes associated with caldera volcanism. These are distributed around a shallow magma chamber. Cone sheets form when magma is injected into a shallow magma chamber, which lifts and fractures the rock beds above it. The fractures take the form of a set of concentric cones dipping at a relatively shallow angle into the magma chamber. When the caldera is subsequently emptied by explosive volcanic activity, the roof of the magma chamber collapses as a plug of rock surrounded by a ring fracture. Magma rising into the ring fracture produces a ring dike. Good examples of ring dikes and cone sheets are found in the Ardnamurchan peninsula of Scotland.
Other special types
A feeder dike is a dike that acted as a conduit for magma moving from a magma chamber to a localized intrusion. For example, the Muskox intrusion in arctic Canada was fed by a large dike, with a thickness of 150 meters.
A sole injection is a dike injected along a thrust fault plane, where rock beds were fractured and thrust up over younger beds.
Clastic dikes
Clastic dikes (also known as sedimentary dikes) are vertical bodies of sedimentary rock that cut across other rock layers. They can form in two ways:
When shallow unconsolidated sediment is composed of alternating coarse-grained and impermeable clay layers, the fluid pressure inside the coarser layers may reach a critical value due to lithostatic overburden. Driven by the fluid pressure, the sediment breaks through overlying layers and forms a dike.
When a soil is under permafrost conditions, the pore water is totally frozen. When cracks form in such frozen ground, they may fill up with sediments that fall in from above. The result is a vertical body of sediment that cuts through horizontal layers: a dike.
| Physical sciences | Geologic features | Earth science |
976931 | https://en.wikipedia.org/wiki/Fire-bellied%20toad | Fire-bellied toad | The fire-bellied toads are a group of six species of small frogs (most species typically no longer than ) belonging to the genus Bombina.
The name "fire-bellied" is derived from the brightly colored red- or yellow-and-black patterns on the toads' ventral regions, which act as aposematic coloration, a warning to predators of the toads' reputedly foul taste. The other parts of the toads' skins are green or dark brown. When confronted with a potential predator, these toads commonly engage in an unkenreflex, Unken- being the combining form of Unke, German for fire-bellied toad. In the unkenreflex, the toad arches its back, raising its front and back legs to display the aposematic coloration of its ventral side.
Species
The currently recognized species are:
Biology
The female of the species typically lays 80–300 eggs that can be found hanging off plant stems. The offspring develop in pools or puddles. Their metamorphosis is complete within a few weeks, peaking in July–August. The toadlets attain a length of 12–15 mm. The eggs, laid in August, metamorphose only after the winter, with the toadlets attaining a length of 3–5 cm. These toadlets still have white bellies.
Tadpoles eat mainly algae and higher plants. The young toads and the adult toads consume insects, such as flies and beetles, shrimp and larvae; but also annelid worms and terrestrial arthropods. Fire-bellied toads are sometimes active during the day, but are more so during the night. The mating call of the male sounds like a dog's bark, rather than the typical drawn out croaking groan.
Distribution and habitat
The species can be found both in Europe and in areas in Asia with a moderate climate.
All species in the genus prefer habitats of stagnant water, which they are reluctant to leave. The fire-bellied toad lives primarily in a continental climate in standing water or calmer backwaters of rivers or ponds. The species can also be found in flood pools and in floodplains. The yellow-bellied species typically live at higher altitude, where they are primarily found in small bodies of water like ponds or water-filled ruts, often near small mountain streams. The Asian species also live in small bodies of water and can live at altitudes of over 3000 meters.
Captivity
Several species in the genus Bombina, particularly B. orientalis, B. bombina, and B. variegata, are commonly kept as exotic pets and are readily available in pet stores. In captivity, they are easily maintained in vivaria, and when provided with proper food and environmental conditions, often prove to be robust, flamboyant, and long-lived amphibians. Captive fire-bellied toads can live from 3–10 years, and some captive specimens have reached over 20 years.
In captivity, they eat a wide variety of food, including crickets, moths, minnows, blood worms and pinkie mice, though some frogs cannot handle certain foods due to their size. They can sometimes act very aggressively toward each other, particularly the males. They have a ferocious appetite, so it is best to monitor their food intake to ensure they are not overeating.
Fire-bellied toads breed extremely easily in captivity. Pet owners can expect to hear their mating calls largely starting in May and continuing to mid-August. Breeding will happen unprovoked by the owner. Younger females will have smaller clutches of around 60 to 80 eggs, while older females can lay around 200. Fire-bellied toads bred in captivity will often have darker and less vibrant coloration, with a more orange underside. Wild-caught specimens tend to be brighter and have deeper red stomachs.
Fire-bellied toads are easy to raise and handle in solitude. This makes them advantageous to study in various sciences.
Toxicity
Fire-bellied toads secrete bombesin and 5-hydroxytryptamine, which cause irritation to the skin and eyes. Most reported exposures involve young children, do not result in major clinical effects, and are treated by rinsing.
| Biology and health sciences | Frogs and toads | Animals |
977193 | https://en.wikipedia.org/wiki/Halocarbon | Halocarbon | Halocarbon compounds are chemical compounds in which one or more carbon atoms are linked by covalent bonds with one or more halogen atoms (fluorine, chlorine, bromine or iodine – ) resulting in the formation of organofluorine compounds, organochlorine compounds, organobromine compounds, and organoiodine compounds. Chlorine halocarbons are the most common and are called organochlorides.
Many synthetic organic compounds such as plastic polymers, and a few natural ones, contain halogen atoms; they are known as halogenated compounds or organohalogens. Organochlorides are the most common industrially used organohalides, although the other organohalides are used commonly in organic synthesis. Except for extremely rare cases, organohalides are not produced biologically, but many pharmaceuticals are organohalides. Notably, many pharmaceuticals such as Prozac have trifluoromethyl groups.
For information on inorganic halide chemistry, see halide.
Chemical families
Halocarbons are typically classified in the same ways as the similarly structured organic compounds that have hydrogen atoms occupying the molecular sites of the halogen atoms in halocarbons. Among the chemical families are:
haloalkanes—compounds with carbon atoms linked by single bonds
haloalkenes—compounds with one or more double bonds between carbon atoms
haloaromatics—compounds with carbons linked in one or more aromatic rings with a delocalised, donut-shaped pi cloud.
The halogen atoms in halocarbon molecules are often called "substituents," as though those atoms had been substituted for hydrogen atoms. However halocarbons are prepared in many ways that do not involve direct substitution of halogens for hydrogens.
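As an illustration only (not part of the original text), the three families above can be distinguished programmatically from a structural description of a molecule. The sketch below assumes the open-source RDKit cheminformatics toolkit and SMILES input; the function name and the example molecules are illustrative choices, not a reference implementation.

```python
# Illustrative sketch: rough assignment of a halocarbon to one of the
# families described above (haloalkane, haloalkene, haloaromatic),
# assuming the open-source RDKit toolkit is available.
from rdkit import Chem

HALOGENS = {"F", "Cl", "Br", "I"}

def classify_halocarbon(smiles: str) -> str:
    """Rough family assignment for a molecule given as a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return "invalid SMILES"
    symbols = {atom.GetSymbol() for atom in mol.GetAtoms()}
    if not (symbols & HALOGENS) or "C" not in symbols:
        return "not a halocarbon"
    # Aromatic ring present -> haloaromatic
    if any(atom.GetIsAromatic() for atom in mol.GetAtoms()):
        return "haloaromatic"
    # Carbon-carbon double bond present -> haloalkene
    if any(bond.GetBondType() == Chem.BondType.DOUBLE
           and bond.GetBeginAtom().GetSymbol() == "C"
           and bond.GetEndAtom().GetSymbol() == "C"
           for bond in mol.GetBonds()):
        return "haloalkene"
    return "haloalkane"

# Example molecules: chloroform, tetrachloroethene, chlorobenzene
for s in ["ClC(Cl)Cl", "ClC(Cl)=C(Cl)Cl", "Clc1ccccc1"]:
    print(s, "->", classify_halocarbon(s))
```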
History and context
A few halocarbons are produced in massive amounts by microorganisms. For example, several million tons of methyl bromide are estimated to be produced by marine organisms annually. Most of the halocarbons encountered in everyday life – solvents, medicines, plastics – are man-made. The first synthesis of halocarbons was achieved in the early 1800s. Production began accelerating when their useful properties as solvents and anesthetics were discovered. Development of plastics and synthetic elastomers has led to greatly expanded scale of production. A substantial percentage of drugs are halocarbons.
Natural halocarbons
A large amount of the naturally occurring halocarbons, such as dioxins, are created by wood fires and volcanic activity. A third major source is marine algae, which produce several chlorinated methane and ethane derivatives. Several thousand complex halocarbons are known to be produced mainly by marine species. Although chlorine compounds are the majority of the discovered compounds, bromides, iodides and fluorides have also been found in nature. Tyrian purple is a bromide and is produced by certain sea snails. Thyroxine is secreted by the thyroid gland and is an iodide. The highly toxic fluoroacetate is one of the rare natural organofluorides and is produced by certain plants.
Organoiodine compounds, including biological derivatives
Organoiodine compounds, called organic iodides, are similar in structure to organochlorine and organobromine compounds, but the C-I bond is weaker. Many organic iodides are known, but few are of major industrial importance. Iodide compounds are mainly produced as nutritional supplements.
The thyroxin hormones are essential for human health, hence the usefulness of iodized salt.
Six mg of iodide a day can be used to treat patients with hyperthyroidism due to its ability to inhibit the organification process in thyroid hormone synthesis, the so-called Wolff–Chaikoff effect. Prior to 1940, iodides were the predominant antithyroid agents. In large doses, iodides inhibit proteolysis of thyroglobulin, which permits TH to be synthesized and stored in colloid, but not released into the bloodstream. This mechanism is referred to as the Plummer effect.
This treatment is seldom used today as a stand-alone therapy despite the rapid improvement of patients immediately following administration. The major disadvantage of iodide treatment lies in the fact that excessive stores of TH accumulate, slowing the onset of action of thioamides (TH synthesis blockers). In addition, the functionality of iodides fades after the initial treatment period. An "escape from block" is also a concern, as extra stored TH may spike following discontinuation of treatment.
Uses
The first halocarbon commercially used was Tyrian purple, a natural organobromide of the Murex brandaris marine snail.
Common uses for halocarbons have been as solvents, pesticides, refrigerants, fire-resistant oils, ingredients of elastomers, adhesives and sealants, electrically insulating coatings, plasticizers, and plastics. Many halocarbons have specialized uses in industry. One halocarbon, sucralose, is a sweetener.
Before they became strictly regulated, the general public often encountered haloalkanes as paint and cleaning solvents such as trichloroethane (1,1,1-trichloroethane) and carbon tetrachloride (tetrachloromethane), pesticides like 1,2-dibromoethane (EDB, ethylene dibromide), and refrigerants like Freon-22 (duPont trademark for chlorodifluoromethane). Some haloalkanes are still widely used for industrial cleaning, such as methylene chloride (dichloromethane), and as refrigerants, such as R-134a (1,1,1,2-tetrafluoroethane).
Haloalkenes have also been used as solvents, including perchloroethylene (Perc, tetrachloroethene), widespread in dry cleaning, and trichloroethylene (TCE, 1,1,2-trichloroethene). Other haloalkenes have been chemical building blocks of plastics such as polyvinyl chloride ("vinyl" or PVC, polymerized chloroethene) and Teflon (duPont trademark for polymerized tetrafluoroethene, PTFE).
Haloaromatics include the former Aroclors (Monsanto Company trademark for polychlorinated biphenyls, PCBs), once widely used in power transformers and capacitors and in building caulk, the former Halowaxes (Union Carbide trademark for polychlorinated naphthalenes, PCNs), once used for electrical insulation, and the chlorobenzenes and their derivatives, used for disinfectants, pesticides such as dichloro-diphenyl-trichloroethane (DDT, 1,1,1-trichloro-2,2-bis(p-chlorophenyl)ethane), herbicides such as 2,4-D (2,4-dichlorophenoxyacetic acid), askarel dielectrics (mixed with PCBs, no longer used in most countries), and chemical feedstocks.
A few halocarbons, including acid halides like acetyl chloride, are highly reactive; these are rarely found outside chemical processing. The widespread uses of halocarbons were often driven by observations that most of them were more stable than other substances. They may be less affected by acids or alkalis; they may not burn as readily; they may not be attacked by bacteria or molds; or they may not be affected as much by sun exposure.
Hazards
The stability of halocarbons tended to encourage beliefs that they were mostly harmless, although in the mid-1920s physicians reported workers in polychlorinated naphthalene (PCN) manufacturing suffering from chloracne, and by the late 1930s it was known that workers exposed to PCNs could die from liver disease and that DDT would kill mosquitos and other insects. By the 1950s, there had been several reports and investigations of workplace hazards. In 1956, for example, after testing hydraulic oils containing polychlorinated biphenyls (PCBs), the U.S. Navy found that skin contact caused fatal liver disease in animals and rejected them as "too toxic for use in a submarine".
In 1962 the book Silent Spring by U.S. biologist Rachel Carson started a storm of concerns about environmental pollution, first focused on DDT and other pesticides, some of them also halocarbons. These concerns were amplified when in 1966 Danish chemist Søren Jensen reported widespread residues of PCBs among Arctic and sub-Arctic fish and birds. In 1974, Mexican chemist Mario Molina and U.S. chemist Sherwood Rowland predicted that common halocarbon refrigerants, the chlorofluorocarbons (CFCs), would accumulate in the upper atmosphere and destroy protective ozone. Within a few years, ozone depletion was being observed above Antarctica, leading to bans on the production and use of chlorofluorocarbons in many countries. In 2007, the Intergovernmental Panel on Climate Change (IPCC) said halocarbons were a direct cause of global warming.
Since the 1970s there have been longstanding, unresolved controversies over the potential health hazards of trichloroethylene (TCE) and other halocarbon solvents that had been widely used for industrial cleaning. More recently, perfluorooctanoic acid (PFOA), a precursor in the most common manufacturing process for Teflon and also used to make coatings for fabrics and food packaging, became a health and environmental concern starting in 2006, suggesting that even halocarbons thought to be among the most inert may present hazards.
Halocarbons, including those that might not be hazards in themselves, can present waste disposal issues. Because they do not readily degrade in natural environments, halocarbons tend to accumulate. Incineration and accidental fires can create corrosive byproducts such as hydrochloric acid and hydrofluoric acid, and poisons like halogenated dioxins and furans. Species of Desulfitobacterium are being investigated for their potential in the bioremediation of halogenic organic compounds.
| Physical sciences | Halocarbons | Chemistry |
977244 | https://en.wikipedia.org/wiki/Substituent | Substituent | In organic chemistry, a substituent is an atom or group of atoms that replaces one or more atoms of a parent molecule, thereby becoming a moiety in the resultant (new) molecule. (In organic chemistry and biochemistry, the terms substituent and functional group, as well as side chain and pendant group, are used almost interchangeably to describe those branches from the parent structure, though certain distinctions are made in polymer chemistry. In polymers, side chains extend from the backbone structure. In proteins, side chains are attached to the alpha carbon atoms of the amino acid backbone.)
The suffix -yl is used when naming organic compounds that contain a single bond replacing one hydrogen; -ylidene and -ylidyne are used with double bonds and triple bonds, respectively. In addition, when naming hydrocarbons that contain a substituent, positional numbers are used to indicate which carbon atom the substituent attaches to when such information is needed to distinguish between isomers. The electronic influence of a substituent can be a combination of the inductive effect and the mesomeric effect; such effects are also described as electron-donating and electron-withdrawing. Additional steric effects result from the volume occupied by a substituent.
The phrases most-substituted and least-substituted are frequently used to describe or compare molecules that are products of a chemical reaction. In this terminology, methane is used as a reference of comparison. Using methane as a reference, for each hydrogen atom that is replaced or "substituted" by something else, the molecule can be said to be more highly substituted. For example:
Markovnikov's rule predicts that the hydrogen atom is added to the carbon of the alkene functional group which has the greater number of hydrogen atoms (fewer alkyl substituents).
Zaitsev's rule predicts that the major reaction product is the alkene with the more highly substituted (more stable) double bond.
Nomenclature
The suffix -yl is used in organic chemistry to form names of radicals, either separate species (called free radicals) or chemically bonded parts of molecules (called moieties). It can be traced back to the old name of methanol, "methylene" (from Greek méthy, 'wine', and hýlē, 'wood, forest'), which became shortened to "methyl" in compound names, from which -yl was extracted. Several reforms of chemical nomenclature eventually generalized the use of the suffix to other organic substituents.
The use of the suffix is determined by the number of hydrogen atoms that the substituent replaces on a parent compound (and also, usually, on the substituent). According to the 1993 IUPAC recommendations:
-yl means that one hydrogen is replaced.
-ylidene means that two hydrogens are replaced by a double bond between parent and substituent.
-ylidyne means that three hydrogens are replaced by a triple bond between parent and substituent.
The suffix -ylidine is encountered sporadically, and appears to be a variant spelling of "-ylidene"; it is not mentioned in the IUPAC guidelines.
For multiple bonds of the same type, which link the substituent to the parent group, the infixes -di-, -tri-, -tetra-, etc., are used: -diyl (two single bonds), -triyl (three single bonds), -tetrayl (four single bonds), -diylidene (two double bonds).
For multiple bonds of different types, multiple suffixes are concatenated: -ylylidene (one single and one double), -ylylidyne (one single and one triple), -diylylidene (two single and one double).
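As an informal illustration of these rules, the Python fragment below (a sketch written for this article, not part of any nomenclature software) maps the number of single, double and triple bonds linking a substituent to its parent onto the corresponding suffix, covering exactly the combinations listed above.

```python
# Illustrative lookup of the suffix rules described above.
# Keys are (single_bonds, double_bonds, triple_bonds) between substituent and parent.
SUFFIXES = {
    (1, 0, 0): "-yl",
    (0, 1, 0): "-ylidene",
    (0, 0, 1): "-ylidyne",
    (2, 0, 0): "-diyl",
    (3, 0, 0): "-triyl",
    (4, 0, 0): "-tetrayl",
    (0, 2, 0): "-diylidene",
    (1, 1, 0): "-ylylidene",
    (1, 0, 1): "-ylylidyne",
    (2, 1, 0): "-diylylidene",
}

def suffix(single: int, double: int, triple: int) -> str:
    """Return the suffix for the given bond pattern, if it is one of the cases listed above."""
    try:
        return SUFFIXES[(single, double, triple)]
    except KeyError:
        raise ValueError("bond pattern not covered by the combinations quoted in this article")

print(suffix(1, 0, 0))  # -yl
print(suffix(1, 1, 0))  # -ylylidene
```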
The parent compound name can be altered in two ways:
For many common compounds the substituent is linked at one end (the 1 position) and historically not numbered in the name. The IUPAC 2013 Rules however do require an explicit locant for most substituents in a preferred IUPAC name. The substituent name is modified by stripping -ane (see alkane) and adding the appropriate suffix. This is "recommended only for saturated acyclic and monocyclic hydrocarbon substituent groups and for the mononuclear parent hydrides of silicon, germanium, tin, lead, and boron". Thus, if there is a carboxylic acid called "X-ic acid", an alcohol ending "X-anol" (or "X-yl alcohol"), or an alkane called "X-ane", then "X-yl" typically denotes the same carbon chain lacking these groups but modified by attachment to some other parent molecule.
The more general method omits only the terminal "e" of the substituent name, but requires explicit numbering of each yl prefix, even at position 1 (except for -ylidyne, which as a triple bond must terminate the substituent carbon chain). Pentan-1-yl is an example of a name by this method, and is synonymous with pentyl from the previous guideline.
Note that some popular terms such as "vinyl" (when used to mean "polyvinyl") represent only a portion of the full chemical name.
Methane substituents
According to the above rules, a carbon atom in a molecule, considered as a substituent, has the following names depending on the number of hydrogens bound to it, and the type of bonds formed with the remainder of the molecule:
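Applied to methane, those suffix rules yield the names sketched below. This is a hedged summary rather than a reproduction of the full table normally given at this point: the names shown are the commonly cited ones, and the set of bond patterns is not exhaustive.

```python
# Hedged sketch: common names for a methane-derived carbon substituent, keyed by
# (hydrogens remaining, bonds to the rest of the molecule). Derived from the suffix
# rules above; the authoritative table may list additional combinations.
METHANE_SUBSTITUENTS = {
    (3, "one single bond"):    "methyl",         # -CH3
    (2, "two single bonds"):   "methanediyl",    # -CH2- (traditionally "methylene")
    (2, "one double bond"):    "methylidene",    # =CH2
    (1, "three single bonds"): "methanetriyl",   # >CH-
    (1, "one triple bond"):    "methylidyne",    # ≡CH
    (0, "four single bonds"):  "methanetetrayl", # carbon bonded to four separate atoms
}

for (hydrogens, bonds), name in METHANE_SUBSTITUENTS.items():
    print(f"{hydrogens} H remaining, {bonds}: {name}")
```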
Notation
In a chemical structural formula, an organic substituent such as methyl, ethyl, or aryl can be written as R (or R1, R2, etc.). It is a generic placeholder, the R derived from radical or rest, which may replace any portion of the formula as the author finds convenient. The first to use this symbol was Charles Frédéric Gerhardt in 1844.
The symbol X is often used to denote electronegative substituents such as the halides.
Statistical distribution
One cheminformatics study identified 849,574 unique substituents of up to 12 non-hydrogen atoms, containing only carbon, hydrogen, nitrogen, oxygen, sulfur, phosphorus, selenium, and the halogens, in a set of 3,043,941 molecules. Fifty substituents can be considered common, as they are found in more than 1% of this set, and 438 are found in more than 0.1%. 64% of the substituents are found in only one molecule. The top five most common are the methyl, phenyl, chlorine, methoxy, and hydroxyl substituents. The total number of organic substituents in organic chemistry is estimated at 3.1 million, creating a total of 6.7 × 10²³ molecules. An unlimited number of substituents can be obtained simply by increasing carbon chain length; for instance, the substituents methyl (-CH3) and pentyl (-C5H11) differ only in the length of their carbon chains.
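As a quick check on the percentages quoted above, the snippet below recomputes the occurrence counts they imply; all input numbers are taken from this paragraph, and the helper itself is only illustrative.

```python
# Occurrence thresholds implied by the cheminformatics figures quoted above.
total_molecules = 3_043_941
unique_substituents = 849_574

common_threshold = 0.01 * total_molecules     # "found in more than 1% of this set"
rarer_threshold = 0.001 * total_molecules     # "found in more than 0.1%"
singleton_count = 0.64 * unique_substituents  # "64% ... found in only one molecule"

print(f"'common' substituent: present in > {common_threshold:,.0f} molecules (about 50 qualify)")
print(f"0.1% threshold:       present in > {rarer_threshold:,.0f} molecules (438 qualify)")
print(f"roughly {singleton_count:,.0f} substituents occur in just one molecule")
```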
| Physical sciences | Concepts_2 | Chemistry |
977418 | https://en.wikipedia.org/wiki/Santiago%20Metro | Santiago Metro | The Santiago Metro () is a rapid transit system serving the city of Santiago, the capital of Chile. It currently consists of seven lines (numbered 1-6 and 4A), 143 stations, and of revenue route. The system is managed by the state-owned Metro S.A. and is the first rapid transit system in the country.
The Santiago Metro carries around 2.5 million passengers daily. This figure represents an increase of more than a million passengers per day compared to 2007, when the ambitious Transantiago project was launched, in which the metro plays an important role in the public transport system serving the city. Its record daily ridership, 2,951,962 passengers, was reached on 2 May 2019.
In June 2017 the government announced plans for the construction of Line 7, connecting Renca in the northwest of Santiago with Vitacura in the northeast. The new line will add and 19 new stations to the Metro network, running along the municipalities of Renca, Cerro Navia, Quinta Normal, Santiago, Providencia, Las Condes and Vitacura. Its cost has been initially estimated at US$2.53 bn, and it is projected to open in 2027.
In March 2012, the Santiago Metro was chosen as the best underground system in the Americas, after being honoured at the annual reception held by Metro Rail in London.
History and development
Early projections and construction of Line 1
The idea of constructing an underground railway network in Santiago dates back to 1944 when efforts to improve the chaotic transport system were initiated due to the rapid population growth the city had been experiencing since the early 1930s.
However, concrete plans began to materialize in the 1960s when Juan Parrochia was appointed as Chief Architect of the Intercommunal Plan of Santiago and began working on an urban master plan featuring a Metro network.
Consequently, the government issued an international tender for the development of an urban transport system. On 24 October 1968, the government of Eduardo Frei Montalva approved the draft submitted by the Franco-Chilean consortium BCEOM SOFRETU CADE, in which the construction of five lines with an extension of approximately by 1990 was proposed.
On 29 May 1969, works finally began on the construction of the first line, which would link the Civic District and the area of Barrancas (current-day Lo Prado).
On 15 September 1975, the first line of the metro was opened by Augusto Pinochet during the military dictatorship. Line 1, during its opening stage, was mostly underground from San Pablo to La Moneda, running below the Alameda. In 1977, the line was extended towards Providencia and by 1980, the line reached as far as Escuela Militar in Las Condes.
In March 1978, Line 2 was opened. Its initial section ran at ground level from Los Héroes to Franklin. By December, the second segment of the line was opened, running underground towards the south along the Gran Avenida up to Lo Ovalle.
Project changes
Despite the fast growth of the network, the severe economic crisis that affected the country in 1982 halted the original plans. Furthermore, studies showed that southeastern Santiago was becoming more populated than the northern end of the capital, the area that the planned extensions of the service were then meant to cover.
In order to supply future demand, the layout for Line 2 was changed and the extension would start at Los Héroes and go around the Civic District, crossing Line 1 again at Baquedano to head south through Vicuña Mackenna. Meanwhile, Line 3 was projected through Independencia and Irarrázaval to supply the northern area that Line 2 was supposed to run.
However, these plans were affected once again when an earthquake struck the Chilean Central Valley on 3 March 1985. Most of the funds destined for the construction of the Line 2 extension and Line 3 were used to rebuild the city. The only works finalised from these plans were two new Line 2 stations towards the north: Santa Ana, opened in 1986, and Mapocho, opened in 1987. The latter was later renamed Puente Cal y Canto, as remains of the old Calicanto Bridge –emblem of the city for over a century– were discovered during the excavation process. That same year, the Metrobús service was launched with services operating from Escuela Militar, Lo Ovalle and Las Rejas.
Institutionally, the management of Metro de Santiago was changed at the end of the decade. The former General Directorate of Metro, a branch of the Ministry of Public Works, became a state-funded public company, Metro S.A., with the provisions of Law 18,772 published on 28 January 1989.
Following the economic recovery after the second miracle, the metro's expansion plans resurged. Population growth in the southeastern area of the capital became unstoppable during the 1980s, and La Florida became the most populous commune in the country, thus the construction of a new line to supply that area was paramount. The first plans were drawn in 1989 and it was officially announced in 1991 by President Patricio Aylwin. This new line would start from Baquedano and head southwards to Américo Vespucio Avenue, crossing through Vicuña Mackenna.
Line 5 was opened on 5 April 1997 by President Eduardo Frei Ruiz-Tagle. The new line had a length of , initially running underground from Baquedano to Irarrázaval, emerging onto a viaduct along Vicuña Mackenna and going underground again before reaching its southeastern terminus, Bellavista de la Florida.
In March 2000, a new section of Line 5 crossing the historic centre of the capital was opened to the public. The new connection between Baquedano and Santa Ana through Plaza de Armas and Bellas Artes meant that all three at-the-time existing lines would be connected.
New lines 4 and 4A and line 2 extension
With the election of Ricardo Lagos as President of Chile in 2000, one of his main objectives was an overhaul of the transport system serving the capital. To achieve this, a new extension for Line 5 was designed, heading westwards to Quinta Normal, following Catedral street, and an extension for Line 2 from both ends of the line to reach the northern and southern ends of the Américo Vespucio ring road.
Despite this, the biggest announcement was made in 2002, when Lagos disclosed the construction of a fourth line for the metro, serving the southeastern communes of Santiago and reaching the heart of Puente Alto, which had overtaken La Florida as the most populous commune of the country. With these new projects, the Metro network would double in extent by 2010, the year in which the country would celebrate its bicentennial.
These new projects were designed to make Metro the key element of the new transport reform plan for the city, Transantiago. Along with the new extensions, exchange stations were designed to allow better interaction between the urban railways and other means of transport, mainly buses. The first exchange station would open at Quinta Normal after the Line 5 extension was finalised on 31 March 2004. However, the original plan for it to host a railway station was discarded after the Melitrén project failed to materialise.
On September 8, 2004, the Metro would make another breakthrough when the Mapocho river was crossed underground, with the opening of Patronato and Cerro Blanco stations on Line 2. On 22 December 2004, the southern extension of the same line opened its new stations, El Parrón and La Cisterna. A second stretch of Line 2 towards the north would open on 25 November 2005, and the last in the series of extensions opened on 22 December 2005, with a total cost of US$170 million and a 27-million passenger increase annually.
On November 30, 2005, the first underground leg of Line 4, from Tobalaba to Grecia, and the viaduct between Vicente Valdés and Plaza de Puente Alto opened to the public. The unfinished track between Grecia and Vicente Valdés was covered by a rail replacement bus service operated by Transantiago until March 2, 2006, when the remaining stations and track were finished. Line 4 at this time was the longest of the network, with an extension of and 22 stations serving Providencia, Las Condes, Ñuñoa, La Reina, Peñalolén, Macul, La Florida and Puente Alto. The new line also saw the introduction of new rolling stock, the AS-2002, manufactured by Alstom in Brazil and featuring more interior space than the stock running on other lines. Finally, Line 4 was complemented by the opening of a branch service on August 16, 2006, Line 4A, which connected Line 2 at La Cisterna with Line 4 at Vicuña Mackenna, running along the Américo Vespucio ring road.
Extensions to Las Condes and Maipú
On November 15, 2005, President Ricardo Lagos announced the extension of Line 1 to the east, from Escuela Militar to Los Dominicos station, in the commune of Las Condes. To achieve this, three new stations were built, adding 4 kilometers to the railway network, which were inaugurated on January 7, 2010, during the presidency of Michelle Bachelet.
Along with the extension to Las Condes, one of the most important projects of the service was announced: the extension of the metro to the west, connecting the communes of Maipú, Pudahuel, Lo Prado and Quinta Normal to the Metro Network. In this way, the Metro approached the western sector of the city for the first time, reaching Maipú, the most populated commune in the country after surpassing Puente Alto in 2008.
On October 31, 2009, the final layout of the extension of Line 5 was approved, starting underground from the Quinta Normal station along Avenida San Pablo, turning south to come to the surface and run along Avenida Teniente Cruz and later Avenida Pajaritos, before going underground again to reach the terminal station in the Plaza de Armas of Maipú. The first section, to Pudahuel station, was delivered on January 12, 2010, while the remaining section to Maipú was opened to the public on February 3, 2011.
Along with the construction of the new extensions, important works were carried out elsewhere: Pajaritos station on Line 1 was renovated and converted into the terminal of a loop, bringing greater efficiency to the most heavily loaded section of that line, and the long-postponed San José de la Estrella station was inaugurated on Line 4. The Del Sol station, which serves as a transfer point to intercity buses, was also built as part of the extension to Maipú.
In March 2012, the Santiago Metro was chosen as the best metro system in America, a distinction received at the annual Metro Rail dinner held in London, United Kingdom.
Line 2 and 3 extensions
On May 26, 2016, Metro announced the extension of Lines 2 and 3, adding 8.9 kilometers and 7 new stations to the Metro network. Both extensions were originally expected to begin operations during the second half of 2021. The extension of Line 2 to the south will add 5.1 kilometers and 4 new stations, connecting the current terminal station in La Cisterna with the locality of San Bernardo. The new terminal station will be located next to the Hospital El Pino in San Bernardo. Meanwhile, the extension of Line 3 to the west will add 3.8 kilometers and 3 new stations to the Metro network, connecting the future Los Libertadores station with Quilicura.
On November 2, 2017, Line 6 was inaugurated from Cerrillos to Los Leones, adding 10 new stations. The new line does not have staffed ticket offices; instead there are automatic machines for ticket sales and for loading money onto bip! cards. It has platform-edge doors to protect passengers, and traction power is supplied by overhead line equipment rather than by conductor rails as on the other lines. It also introduced new entrance and exit turnstiles at stations. The trains on Line 6 run on steel wheels only and are driverless.
On January 22, 2019, Line 3 was inaugurated after nine years of surveying and construction; the line had been delayed since the 1980s following the 1985 Algarrobo earthquake and the changing demographics of the city during the 1980s and 1990s. Its rolling stock is identical to that of Line 6, and the two lines were built simultaneously, so they are considered "twin lines".
On September 25, 2023, Line 3 was extended 3.8 km west from its northern terminus to Plaza Quilicura.
On November 27, 2023, Line 2 was extended 5.2 km south from its southern terminus to Hospital El Pino.
2019 protests
During October 2019, the Santiago metro network was affected by social protests over fare increases across the entire Metropolitan Mobility Network. Initially, secondary students staged massive acts of fare evasion between 6 and 11 October. The protests quickly spread to several metro stations, and train service was repeatedly interrupted.
On Friday the 18th, the situation escalated and the entire network had to be closed due to attacks on stations and workers. At night, after President Sebastián Piñera declared a state of emergency, several Metro stations were destroyed and burned, and some were attacked again the next day even though a curfew had been established. Meanwhile, the Instituto Nacional de Derechos Humanos (INDH) investigated accusations that Baquedano station had been used as a detention and torture center by police and military. On the morning of the same day, the site was inspected by staff from the INDH, the PDI and guarantee judges; the judges found no evidence of torture or illegal detentions, but an investigation was opened to rule out any irregularity. Subsequent investigations by the INDH and the Public Prosecutor's Office likewise found no evidence, and in 2020 the allegations were dismissed and the case was closed.
The Metro network was partially reactivated as of Monday, October 21; however, due to the damage to some stations, the network would not be fully available again for up to seven months. Damage costs were estimated at more than $300 million. Metro de Santiago indicated that it had no insurance covering the stations' infrastructure or the trains. Lines 3 and 6 reopened on 23 October, Lines 2 and 5 on the 25th, Line 4 on the 28th, and Line 4A on November 25, in all cases partially and on a shortened schedule.
On October 23, it was reported that 79 stations had been damaged in all, with Lines 4, 4A, and 5 having the highest number of stations destroyed or vandalized. There was also damage to six trains, five on Line 4 and one on Line 1, the latter set on fire at San Pablo station. With the reopening of the last two stations (Trinidad and Protectora de la Infancia) on September 25, 2020, the metro system was back to 100% operation.
Lines 7, 8 and 9
On June 1, 2017, President Michelle Bachelet announced the construction of Metro Line 7 in her last annual public account. The plan initially included 21 stations along a 25 km route between the commune of Renca in the northwestern sector and Vitacura in the northeastern sector. The line, estimated to open around 2027, was designed to run parallel to the Mapocho River and to Line 1, which it would relieve by approximately 10,000 daily passengers. Line 7 would incorporate the communes of Renca, Cerro Navia and Vitacura into the network, also connecting working-class neighborhoods with part of the financial and commercial district of the city.
At the end of 2017, the newspaper El Mercurio reported that the route of the line had been modified: in the Providencia sector it would not run under Andrés Bello Avenue (as originally planned) but parallel to Line 1 along Providencia Avenue, eliminating the transfer at Salvador and moving it to Pedro de Valdivia. In addition, Metro announced that it would extend Line 6 to the Isidora Goyenechea station of the future Line 7.
One year after the announcement of Line 7, President Sebastián Piñera announced in his 2018 annual account that studies would begin for the construction of two new metro lines running north–south: Line 8, which would connect the communes of La Florida and Puente Alto with Providencia, and Line 9, which would run from the center to the commune of La Pintana, one of the last in the city to receive the Metro. In addition, he announced that Line 4 would be extended by three stations in the southern sector to reach Bajos de Mena in Puente Alto. At the time, lines 8 and 9 were projected to open in 2028.
The social unrest of 2019 delayed the planning work for the three lines; work resumed in September 2021, and lines 7, 8 and 9 are now estimated to open from 2030 onwards. In August 2023, a modification to the layout of Line 9 was announced, extending it in the north to the Puente Cal y Canto station (which will become the first station with four concurrent lines) and in the south to Plaza de Puente Alto, where it will connect with Line 4, absorbing the proposed extension to Bajos de Mena.
Future plans
Various proposals have been presented to expand the Santiago Metro once lines 7, 8 and 9 are built.
Two communes in Greater Santiago will still lack a direct connection to the Metro network, Lo Espejo and Lo Barnechea, while three others (San Bernardo, Peñalolén and Huechuraba) have stations only at their boundaries. In the case of Lo Espejo, the municipality has proposed extending Line 4A westwards along Américo Vespucio in order to connect the commune to the network, while Lo Barnechea has expressed its interest in building two additional stations on Line 7 to reach La Dehesa. Meanwhile, the municipality of Maipú, one of the most populated in the city, launched a campaign asking the government to extend Line 6 to the western sector. Other proposals include reaching the international airport, for example through a branch of Line 7.
During the inauguration of Line 3 in 2019, President Sebastián Piñera declared that a Line 10 would be built. Although Metro indicated that a tenth line was not officially in its project portfolio, the government indicated that the initiative aimed to connect the Avenida Mapocho sector with Avenida Tobalaba, following the so-called "central ring" along Las Rejas, Suiza and Departamental avenues.
Other alternatives for new lines have been discussed in the media in recent years and, for now, discarded: a line in the eastern sector along Tobalaba–Vespucio or Manquehue, another parallel to Line 1 along 5 de Abril–Blanco Encalada–Santa Isabel–Bilbao and Manquehue, and the northern section of "Line 10" along Dorsal, Lo Espinoza and Radal.
Timeline
Rolling stock
The Santiago Metro currently operates 9 models of rolling stock: two models (the AS-2002 and the AS-2014) are steel-wheeled, while the others are all rubber-tyred. The NS 74 and NS 93 stock are based on the MP 73 and MP 89 stock of the Paris Metro respectively, while the NS-88 and NS-2007 stock are based on the FM-86 and NM-02 stock of the Mexico City Metro respectively. All rubber-tyred stock is preceded with the acronym NS (for Neumático Santiago); likewise, all steel-wheeled stock is preceded with the acronym AS (for Acero Santiago). The number representing each type of rubber-tyred and steel-wheeled rolling stock is the year of design of a particular rolling stock, not year of first use, similar to the practice in the Mexico City Metro and Paris Métro.
Currently, all the NS-2007 stock and a number of the NS-93 stock units are retrofitted with air conditioning, whereas the NS-2012, AS-2014 and NS-2016 were all built with air conditioning.
In September 2012, the NS 2012 trains went into service on Line 1. These trains are the first to be built with air conditioning.
On November 2, 2017, Line 6 entered revenue service. The line uses the AS-2014 (Acero Santiago 2014), the most modern stock in the system and the first driverless model; the first and last cars nevertheless carry a control panel so that the train can be driven manually when necessary. It is also the first stock with security cameras, power supply via rigid overhead catenary, and evacuation doors at the front of the first and last cars (with an evacuation ramp for wheelchair users) as well as on the sides of each trainset. It is the second stock to be built with air conditioning, and the third with LED lights. The line it operates on was also the first in revenue service with platform safety barriers, followed by Line 3, opened in January 2019.
Stations
In bold are transfer stations. In grey are stations projected or currently under construction.
MetroArte
The Santiago Metro incorporates 73 public artworks in its stations through the MetroArte foundation. Universidad de Chile station features Memoria visual de una nación ("Visual Memory of a Nation"), a 1,200-square-meter mural created by Chilean painter Mario Toral that represents the history of the country. Other pieces of art are found in Baquedano (featuring modern art and a concert space), Bellas Artes (multimedia art), Santa Lucía (Portuguese azulejos, a gift from the Lisbon Metro), La Moneda (with realistic paintings representing typical landscapes), and various other stations.
Station amenities
A diverse array of services is provided within each Metro station. Ticket offices, public telephones and metro-network information panels exist in every station; Redbanc, Cirrus and Plus-enabled ATMs, typically provided by either the Banco de Chile company or the BancoEstado national bank, are common. Automatic recharge machines are also common; all of them can load a customer's Bip! card using either cash or a Redbanc-enabled card. In higher-traffic stations, there are screens that display MetroTV, featuring additional system information as well as music videos and short news segments.
Some 21 of the busiest stations contain a branch of Bibliometro, a system of lending libraries supported by the national Department of Libraries, Archives and Museums (Dibam). With a Chilean ID card or foreign passport, any Metro customer may borrow freely from a reserve of books and other literature, though registration is required first.
Customers may rent a parking space for their bicycles through the Bicimetro network, which opened in 2008 at six stations and is slowly expanding, for a starting cost of $300 (approximately US$0.50) a day. Weekly and monthly rentals are also available; these guarantee a fixed space for the bike, unlike the daily rental, which depends on whatever space happens to be free.
Most underground Metro stations contain at least one shop or convenience store, with large line-transfer stations such as Baquedano featuring several food vendors and retailers, and even a small underground "shopping center" in Universidad de Chile.
Security and safety
Various private security agencies have day-to-day responsibility for maintaining order in the metro and deterring petty crime or attempts to board without paying. The largest transfer stations, such as Tobalaba, also feature depots of the Carabineros de Chile, the national military police force. Metro staff man the ticket counters in closed box offices and distribute tickets and money through small transaction windows.
Signage to advise customers of safety hazards is extensive, and each platform has a painted yellow line which customers are advised not to cross except to board a train. During rush hour, Metro staff line the platform edge to keep people from being crowded off the platform and to assist disabled customers. Except on the recently opened Lines 3 and 6, there is no physical barrier between the platform edge and the tracks, which include the hazardous, electrified third rail; Lines 3 and 6 instead use overhead lines as the power source for their trains.
Metro travelers are advised to keep a close guard on their belongings, as petty or opportunistic theft is something of a problem on the lines that connect some districts with the center of Santiago. This is most apparent among passengers who wear their backpacks reversed, across the stomach, to ensure that no one can pilfer the pockets out of sight.
Pricing and working hours
Metro is part of Red Metropolitana de Movilidad, the integrated public transport system that serves the capital using feeder and main bus routes as well as the metro. Red works with an integrated fare system, which allows passengers to make bus-bus or bus-metro transfers within a two-hour window from the first trip (maximum of two changes) using a contactless smart card called the "Bip! card". Bus-to-bus, metro-to-bus and metro-to-train transfers do not cost extra. Bus-to-metro transfers cost $20 (approx. US$0.03) during Horario Valle (off-peak hours) and $80 (approx. US$0.12) during Horario Punta (rush hour).
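A minimal sketch of the transfer rule just described, assuming travel within the two-hour window and at most two changes: bus-to-bus, metro-to-bus and metro-to-train changes add nothing, while a bus-to-metro change adds $20 off-peak or $80 at rush hour. The base fare used below is a placeholder, since the paragraph quotes only the surcharges.

```python
# Hedged sketch of the Bip! transfer surcharge described above.
BASE_FARE = 700  # placeholder bus fare in Chilean pesos; not quoted in the text
BUS_TO_METRO_SURCHARGE = {"valle": 20, "punta": 80}  # values from the paragraph above

def trip_cost(legs, period="valle"):
    """legs is an ordered list of 'bus'/'metro' segments (two-hour window, <= 2 changes assumed)."""
    cost = BASE_FARE
    for previous, current in zip(legs, legs[1:]):
        if previous == "bus" and current == "metro":
            cost += BUS_TO_METRO_SURCHARGE[period]
        # bus-to-bus, metro-to-bus and metro-to-train changes cost nothing extra
    return cost

print(trip_cost(["bus", "metro"], period="punta"))  # 780 with the placeholder base fare
```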
Bip! cards are available at the ticketing offices in every station at a cost of $1,550 (approx. US$2.23), with a minimum first charge of $1,000 worth of credit (approx. US$1.41). Tickets are sold from 6:00 to 23:00 Monday to Friday, 6:30 to 23:00 on Saturdays, and 8:00 to 22:30 on Sundays and holidays. Cards can be topped up to $20,000, and the credit only expires if the card is not used for two years.
Metro also used to sell single-trip, Metro-only tickets, but they went out of circulation in early 2017. Fares depended on the time of use of the system. The cost of a ticket in the Horario Punta (rush hour, 7:00–8:59 and 18:00–19:59) was $700 (approx. US$1.01); in the Horario Valle (off-peak hours, 6:30–6:59, 9:00–18:00, 20:00–20:44, and all day on weekends and holidays) it was $640 (approximately US$0.90); and in the Horario Bajo (low-use hours, 6:00–6:29 and 20:45–23:00) it was $590 (approximately US$0.85).
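Since the discontinued single-trip fare depended only on the time band, the band boundaries quoted above can be encoded directly, as in the sketch below; weekend and holiday travel is simplified to the all-day Valle rule mentioned in the same sentence, and times outside the listed bands default to Valle.

```python
# Time-band fares for the former single-trip Metro tickets (values quoted above).
from datetime import time

FARES = {"punta": 700, "valle": 640, "bajo": 590}  # Chilean pesos

def fare_band(t: time, weekend_or_holiday: bool = False) -> str:
    """Return the fare band for a departure time, following the bands quoted above."""
    if weekend_or_holiday:
        return "valle"                      # all day on weekends and holidays
    minutes = t.hour * 60 + t.minute
    if 6 * 60 <= minutes < 6 * 60 + 30 or 20 * 60 + 45 <= minutes <= 23 * 60:
        return "bajo"                       # 6:00-6:29 and 20:45-23:00
    if 7 * 60 <= minutes < 9 * 60 or 18 * 60 <= minutes < 20 * 60:
        return "punta"                      # 7:00-8:59 and 18:00-19:59
    return "valle"                          # 6:30-6:59, 9:00-17:59, 20:00-20:44

print(FARES[fare_band(time(8, 30))])   # 700, rush hour
print(FARES[fare_band(time(21, 0))])   # 590, low-use hours
```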
Senior citizens (65 and older) and students holding concession cards pay $200 (US$0.28). Senior concession fare does not apply during rush hours.
On weekdays, the metro operates from 5.35 am until 12.08 am, while on Saturdays it operates from 6.30 am until 12.08 am, and on Sundays and public holidays it operates from 8 am (Line 1 from 9 am) until 11.48 pm. However, due to the COVID-19 pandemic, operating hours have varied in accordance with national curfews.
(warning: stations close earlier - see timetable)
| Technology | Americas | null |
977592 | https://en.wikipedia.org/wiki/Rings%20of%20Saturn | Rings of Saturn | The rings of Saturn are the most extensive and complex ring system of any planet in the Solar System. They consist of countless small particles, ranging in size from micrometers to meters, that orbit around Saturn. The ring particles are made almost entirely of water ice, with a trace component of rocky material. There is still no consensus as to their mechanism of formation. Although theoretical models indicated that the rings were likely to have formed early in the Solar System's history, newer data from Cassini suggested they formed relatively late.
Although reflection from the rings increases Saturn's brightness, they are not visible from Earth with unaided vision. In 1610, the year after Galileo Galilei turned a telescope to the sky, he became the first person to observe Saturn's rings, though he could not see them well enough to discern their true nature. In 1655, Christiaan Huygens was the first person to describe them as a disk surrounding Saturn. The concept that Saturn's rings are made up of a series of tiny ringlets can be traced to Pierre-Simon Laplace, although true gaps are few – it is more correct to think of the rings as an annular disk with concentric local maxima and minima in density and brightness. On the scale of the clumps within the rings there is much empty space.
The rings have numerous gaps where particle density drops sharply: two opened by known moons embedded within them, and many others at locations of known destabilizing orbital resonances with the moons of Saturn. Other gaps remain unexplained. Stabilizing resonances, on the other hand, are responsible for the longevity of several rings, such as the Titan Ringlet and the G Ring.
Well beyond the main rings is the Phoebe ring, which is presumed to originate from Phoebe and thus share its retrograde orbital motion. It is aligned with the plane of Saturn's orbit. Saturn has an axial tilt of 27 degrees, so this ring is tilted at an angle of 27 degrees to the more visible rings orbiting above Saturn's equator.
In September 2023, astronomers reported studies suggesting that the rings of Saturn may have resulted from the collision of two moons "a few hundred million years ago".
History
Early observations
Galileo Galilei was the first to observe the rings of Saturn in 1610 using his telescope, but was unable to identify them as such. He wrote to the Duke of Tuscany that "The planet Saturn is not alone, but is composed of three, which almost touch one another and never move nor change with respect to one another. They are arranged in a line parallel to the zodiac, and the middle one (Saturn itself) is about three times the size of the lateral ones." He also described the rings as Saturn's "ears". In 1612 the Earth passed through the plane of the rings and they became invisible. Mystified, Galileo remarked "I do not know what to say in a case so surprising, so unlooked for and so novel." He mused, "Has Saturn swallowed his children?" — referring to the myth of the Titan Saturn devouring his offspring to forestall the prophecy of them overthrowing him. He was further confused when the rings again became visible in 1613.
Early astronomers used anagrams as a form of commitment scheme to lay claim to new discoveries before their results were ready for publication. Galileo used the anagram "" for Altissimum planetam tergeminum observavi ("I have observed the most distant planet to have a triple form") for discovering the rings of Saturn.
In 1657 Christopher Wren became Professor of Astronomy at Gresham College, London. He had been making observations of the planet Saturn from around 1652 with the aim of explaining its appearance. His hypothesis was written up in De corpore saturni, in which he came close to suggesting the planet had a ring. However, Wren was unsure whether the ring was independent of the planet, or physically attached to it. Before Wren's hypothesis was published Christiaan Huygens presented his hypothesis of the rings of Saturn. Immediately Wren recognised this as a better hypothesis than his own and De corpore saturni was never published. Robert Hooke was another early observer of the rings of Saturn, and noted the casting of shadows on the rings.
Huygens' ring hypothesis and later developments
Huygens began grinding lenses with his father Constantijn in 1655 and was able to observe Saturn with greater detail using a 43× power refracting telescope that he designed himself. He was the first to suggest that Saturn was surrounded by a ring detached from the planet, and famously published the letter string "". Three years later, he revealed it to mean Annulo cingitur, tenui, plano, nusquam coherente, ad eclipticam inclinato ("[Saturn] is surrounded by a thin, flat, ring, nowhere touching [the body of the planet], inclined to the ecliptic"). He published his ring hypothesis in Systema Saturnium (1659) which also included his discovery of Saturn's moon, Titan, as well as the first clear outline of the dimensions of the Solar System.
In 1675, Giovanni Domenico Cassini determined that Saturn's ring was composed of multiple smaller rings with gaps between them; the largest of these gaps was later named the Cassini Division. This division is a region between the A ring and B Ring.
In 1787, Pierre-Simon Laplace proved that a uniform solid ring would be unstable and suggested that the rings were composed of a large number of solid ringlets.
In 1859, James Clerk Maxwell demonstrated that a nonuniform solid ring, solid ringlets or a continuous fluid ring would also not be stable, indicating that the ring must be composed of numerous small particles, all independently orbiting Saturn. Later, Sofia Kovalevskaya also found that Saturn's rings cannot be liquid ring-shaped bodies. Spectroscopic studies of the rings which were carried out independently in 1895 by James Keeler of the Allegheny Observatory and by Aristarkh Belopolsky of the Pulkovo Observatory showed that Maxwell's analysis was correct.
Four robotic spacecraft have observed Saturn's rings from the vicinity of the planet. Pioneer 11's closest approach to Saturn occurred in September 1979 at a distance of . Pioneer 11 was responsible for the discovery of the F ring. Voyager 1's closest approach occurred in November 1980 at a distance of . A failed photopolarimeter prevented Voyager 1 from observing Saturn's rings at the planned resolution; nevertheless, images from the spacecraft provided unprecedented detail of the ring system and revealed the existence of the G ring. Voyager 2's closest approach occurred in August 1981 at a distance of . Voyager 2's working photopolarimeter allowed it to observe the ring system at higher resolution than Voyager 1, and thereby to discover many previously unseen ringlets. The Cassini spacecraft entered orbit around Saturn in July 2004. Cassini's images of the rings are the most detailed to date, and are responsible for the discovery of yet more ringlets.
The rings are named alphabetically in the order they were discovered: A and B in 1675 by Giovanni Domenico Cassini, C in 1850 by William Cranch Bond and his son George Phillips Bond, D in 1933 by Nikolai P. Barabachov and B. Semejkin, E in 1967 by Walter A. Feibelman, F in 1979 by Pioneer 11, and G in 1980 by Voyager 1. The main rings are, working outward from the planet, C, B and A, with the Cassini Division, the largest gap, separating Rings B and A. Several fainter rings were discovered more recently. The D Ring is exceedingly faint and closest to the planet. The narrow F Ring is just outside the A Ring. Beyond that are two far fainter rings named G and E. The rings show a tremendous amount of structure on all scales, some related to perturbations by Saturn's moons, but much unexplained.
In September 2023, astronomers reported studies suggesting that the rings of Saturn may have resulted from the collision of two moons "a few hundred million years ago".
Saturn's axial inclination
Saturn's axial tilt is 26.7°, meaning that widely varying views of the rings, the visible ones of which occupy its equatorial plane, are obtained from Earth at different times. Earth passes through the ring plane every 13 to 15 years, about every half Saturn year, and there are roughly equal chances of either a single crossing or three crossings occurring on each such occasion. The most recent ring-plane crossings were on 22 May 1995, 10 August 1995, 11 February 1996 and 4 September 2009; upcoming events will occur on 23 March 2025, 15 October 2038, 1 April 2039 and 9 July 2039. Favorable ring-plane-crossing viewing opportunities (with Saturn not close to the Sun) come only during triple crossings.
Saturn's equinoxes, when the Sun passes through the ring plane, are not evenly spaced. The sun passes south to north through the ring plane when Saturn's heliocentric longitude is 173.6 degrees (e.g. 11 August 2009), about the time Saturn crosses from Leo to Virgo. 15.7 years later Saturn's longitude reaches 353.6 degrees and the sun passes to the south side of the ring plane. On each orbit the Sun is north of the ring plane for 15.7 Earth years, then south of the plane for 13.7 years. Dates for north-to-south crossings include 19 November 1995 and 6 May 2025, with south-to-north crossings on 11 August 2009 and 23 January 2039. During the period around an equinox the illumination of most of the rings is greatly reduced, making possible unique observations highlighting features that depart from the ring plane.
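A quick arithmetic check on the intervals above: the 15.7 years north of the ring plane and 13.7 years south of it sum to roughly 29.4 Earth years, one Saturn orbit, and stepping the August 2009 south-to-north crossing forward by the northern interval lands close to the May 2025 crossing quoted in this paragraph.

```python
# Consistency check on the ring-plane crossing intervals quoted above.
north_interval = 15.7   # Earth years the Sun spends north of the ring plane
south_interval = 13.7   # Earth years it spends south of the plane

print(f"Implied Saturn orbital period: {north_interval + south_interval:.1f} Earth years")  # ~29.4

# 11 August 2009 is roughly 2009.6; adding 15.7 years gives ~2025.3,
# consistent with the 6 May 2025 north-to-south crossing listed above.
print(f"Next crossing: ~{2009.6 + north_interval:.1f}")
```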
Physical characteristics
The dense main rings extend from to away from Saturn's equator, whose radius is (see Major subdivisions). With an estimated local thickness of as little as 10 meters (32' 10") and as much as 1 km (1093 yards), they are composed of 99.9% pure water ice with a smattering of impurities that may include tholins or silicates. The main rings are primarily composed of particles smaller than 10 m.
Cassini directly measured the mass of the ring system via its gravitational effect on the spacecraft during Cassini's final set of orbits, which passed between the rings and the cloud tops, yielding a value of 1.54 (± 0.49) × 10¹⁹ kg, or 0.41 ± 0.13 Mimas masses. This is around two-thirds the mass of the Earth's entire Antarctic ice sheet, spread across a surface area 80 times larger than that of Earth. The estimate is close to the value of 0.40 Mimas masses derived from Cassini observations of density waves in the A, B and C rings. It is a tiny fraction of the total mass of Saturn (roughly 3 × 10⁻⁸). Earlier Voyager observations of density waves in the A and B rings and an optical depth profile had yielded a mass of about 0.75 Mimas masses, with later observations and computer modeling suggesting that this was an underestimate.
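The figures in this paragraph can be cross-checked against one another: dividing the measured ring mass by the 0.41 Mimas-mass fraction implies a Mimas mass of roughly 3.8 × 10¹⁹ kg, and the two-thirds comparison implies an Antarctic ice-sheet mass of a little over 2 × 10¹⁹ kg. The snippet below only performs that arithmetic; it asserts nothing beyond the quoted values.

```python
# Cross-check of the Cassini ring-mass figures quoted above.
ring_mass = 1.54e19       # kg, Cassini gravity measurement
mimas_fraction = 0.41     # ring mass expressed in Mimas masses

implied_mimas_mass = ring_mass / mimas_fraction   # ~3.8e19 kg
implied_ice_sheet_mass = ring_mass / (2 / 3)      # "around two-thirds the mass of the ... ice sheet"

print(f"Implied Mimas mass:               {implied_mimas_mass:.2e} kg")
print(f"Implied Antarctic ice-sheet mass: {implied_ice_sheet_mass:.2e} kg")
```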
Although the largest gaps in the rings, such as the Cassini Division and Encke Gap, can be seen from Earth, the Voyager spacecraft discovered that the rings have an intricate structure of thousands of thin gaps and ringlets. This structure is thought to arise, in several different ways, from the gravitational pull of Saturn's many moons. Some gaps are cleared out by the passage of tiny moonlets such as Pan, many more of which may yet be discovered, and some ringlets seem to be maintained by the gravitational effects of small shepherd satellites (similar to Prometheus and Pandora's maintenance of the F ring). Other gaps arise from resonances between the orbital period of particles in the gap and that of a more massive moon further out; Mimas maintains the Cassini Division in this manner. Still more structure in the rings consists of spiral waves raised by the inner moons' periodic gravitational perturbations at less disruptive resonances.
Data from the Cassini space probe indicate that the rings of Saturn possess their own atmosphere, independent of that of the planet itself. The atmosphere is composed of molecular oxygen gas (O2) produced when ultraviolet light from the Sun interacts with water ice in the rings. Chemical reactions between water molecule fragments and further ultraviolet stimulation create and eject, among other things, O2. According to models of this atmosphere, H2 is also present. The O2 and H2 atmospheres are so sparse that if the entire atmosphere were somehow condensed onto the rings, it would be about one atom thick. The rings also have a similarly sparse OH (hydroxide) atmosphere. Like the O2, this atmosphere is produced by the disintegration of water molecules, though in this case the disintegration is done by energetic ions that bombard water molecules ejected by Saturn's moon Enceladus. This atmosphere, despite being extremely sparse, was detected from Earth by the Hubble Space Telescope.
Saturn shows complex patterns in its brightness. Most of the variability is due to the changing aspect of the rings, and this goes through two cycles every orbit. However, superimposed on this is variability due to the eccentricity of the planet's orbit that causes the planet to display brighter oppositions in the northern hemisphere than it does in the southern.
In 1980, Voyager 1 made a fly-by of Saturn that showed the F ring to be composed of three narrow rings that appeared to be braided in a complex structure; it is now known that the outer two rings consist of knobs, kinks and lumps that give the illusion of braiding, with the less bright third ring lying inside them.
New images of the rings taken around the 11 August 2009 equinox of Saturn by NASA's Cassini spacecraft have shown that the rings extend significantly out of the nominal ring plane in a few places. This displacement reaches as much as at the border of the Keeler Gap, due to the out-of-plane orbit of Daphnis, the moon that creates the gap.
Formation and evolution of main rings
Estimates of the age of Saturn's rings vary widely, depending on the approach used. They have been considered to possibly be very old, dating to the formation of Saturn itself. However, data from Cassini suggest they are much younger, having most likely formed within the last 100 million years, and may thus be between 10 million and 100 million years old. This recent-origin scenario is based on a new, lower estimate of the rings' mass, modeling of the rings' dynamical evolution, and measurements of the flux of interplanetary dust, which feed into an estimate of the rate of ring darkening over time. Since the rings are continually losing material, they would have been more massive in the past than at present. The mass estimate alone is not very diagnostic, since high-mass rings that formed early in the Solar System's history would have evolved by now to a mass close to that measured. Based on current depletion rates, they may disappear in 300 million years.
There are two main hypotheses regarding the origin of Saturn's inner rings. A hypothesis originally proposed by Édouard Roche in the 19th century is that the rings were once a moon of Saturn (named Veritas, after a Roman goddess who hid in a well). According to the hypothesis, the moon's orbit decayed until it was close enough to be ripped apart by tidal forces (see Roche limit). Numerical simulations carried out in 2022 support this hypothesis; the authors of that study proposed the name "Chrysalis" for the destroyed moon. A variation on this hypothesis is that this moon disintegrated after being struck by a large comet or asteroid. The second hypothesis is that the rings were never part of a moon, but are instead left over from the original nebular material from which Saturn formed.
A more traditional version of the disrupted-moon hypothesis is that the rings are composed of debris from a moon 400 to 600 km (250 to 370 miles) in diameter, slightly larger than Mimas. The last time there were collisions large enough to be likely to disrupt a moon that large was during the Late Heavy Bombardment, some four billion years ago.
A more recent variant of this type of hypothesis by R. M. Canup is that the rings could represent part of the remains of the icy mantle of a much larger, Titan-sized, differentiated moon that was stripped of its outer layer as it spiraled into the planet during the formative period when Saturn was still surrounded by a gaseous nebula. This would explain the scarcity of rocky material within the rings. The rings would initially have been much more massive (≈1,000 times) and broader than at present; material in the outer portions of the rings would have coalesced into the innermost moons of Saturn (those closest to Saturn), out to Tethys, also explaining the lack of rocky material in the composition of most of these moons. Subsequent collisional or cryovolcanic evolution of Enceladus, which is another of these moons, might then have caused selective loss of ice from this moon, raising its density to its current value of 1.61 g/cm3, compared to values of 1.15 for Mimas and 0.97 for Tethys.
The idea of massive early rings was subsequently extended to explain the formation of Saturn's moons out to Rhea. If the initial massive rings contained chunks of rocky material (more than 100 km (60 miles) across) as well as ice, these silicate bodies would have accreted more ice and been expelled from the rings, due to gravitational interactions with the rings and tidal interaction with Saturn, into progressively wider orbits. Within the Roche limit, bodies of rocky material are dense enough to accrete additional material, whereas less-dense bodies of ice are not. Once outside the rings, the newly formed moons could have continued to evolve through random mergers. This process may explain the variation in silicate content of Saturn's moons out to Rhea, as well as the trend towards less silicate content closer to Saturn. Rhea would then be the oldest of the moons formed from the primordial rings, with moons closer to Saturn being progressively younger.
The brightness and purity of the water ice in Saturn's rings have also been cited as evidence that the rings are much younger than Saturn, as the infall of meteoric dust would have led to a darkening of the rings. However, new research indicates that the B Ring may be massive enough to have diluted infalling material and thus avoided substantial darkening over the age of the Solar System. Ring material may be recycled as clumps form within the rings and are then disrupted by impacts. This would explain the apparent youth of some of the material within the rings. Evidence suggesting a recent origin of the C ring has been gathered by researchers analyzing data from the Cassini Titan Radar Mapper, which focused on analyzing the proportion of rocky silicates within this ring. If much of this material was contributed by a recently disrupted centaur or moon, the age of this ring could be on the order of 100 million years or less. On the other hand, if the material came primarily from micrometeoroid influx, the age would be closer to a billion years.
The Cassini UVIS team, led by Larry Esposito, used stellar occultation to discover 13 objects, ranging from 27 meters (89') to 10 km (6 miles) across, within the F ring. They are translucent, suggesting they are temporary aggregates of ice boulders a few meters across. Esposito believes this to be the basic structure of the Saturnian rings, particles clumping together, then being blasted apart.
Research based on rates of infall into Saturn favors a younger ring-system age of hundreds of millions of years. Ring material is continually spiraling down into Saturn; the faster this infall, the shorter the lifetime of the ring system. One mechanism involves gravity pulling electrically charged water-ice grains down from the rings along planetary magnetic field lines, a process termed 'ring rain'. This flow rate was inferred to be 432–2870 kg/s from ground-based Keck telescope observations; as a consequence of this process alone, the rings would be gone in ~ million years. While traversing the gap between the rings and the planet in September 2017, the Cassini spacecraft detected an equatorial flow of charge-neutral material from the rings to the planet of 4,800–44,000 kg/s. Assuming this influx rate is stable, adding it to the continuous 'ring rain' process implies the rings may be gone in under 100 million years.
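The "under 100 million years" conclusion follows from dividing the Cassini-measured ring mass (about 1.54 × 10¹⁹ kg, quoted earlier in this article) by the combined loss rates above, as the sketch below shows; like the paragraph, it assumes the rates stay constant.

```python
# Rough ring lifetimes implied by the loss rates quoted above, assuming constant rates.
SECONDS_PER_YEAR = 3.156e7
ring_mass = 1.54e19                      # kg, Cassini gravity measurement quoted earlier

ring_rain = (432, 2_870)                 # kg/s, Keck-derived "ring rain"
equatorial_influx = (4_800, 44_000)      # kg/s, measured by Cassini in September 2017

for label, rate in (("slowest combined loss", ring_rain[0] + equatorial_influx[0]),
                    ("fastest combined loss", ring_rain[1] + equatorial_influx[1])):
    lifetime_myr = ring_mass / rate / SECONDS_PER_YEAR / 1e6
    print(f"{label}: about {lifetime_myr:.0f} million years")   # ~93 and ~10 Myr
```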
Subdivisions and structures within the rings
The densest parts of the Saturnian ring system are the A and B Rings, which are separated by the Cassini Division (discovered in 1675 by Giovanni Domenico Cassini). Along with the C Ring, which was discovered in 1850 and is similar in character to the Cassini Division, these regions constitute the main rings. The main rings are denser and contain larger particles than the tenuous dusty rings. The latter include the D Ring, extending inward to Saturn's cloud tops, the G and E Rings and others beyond the main ring system. These diffuse rings are characterised as "dusty" because of the small size of their particles (often about a μm); their chemical composition is, like the main rings, almost entirely water ice. The narrow F Ring, just off the outer edge of the A Ring, is more difficult to categorize; parts of it are very dense, but it also contains a great deal of dust-size particles.
Physical parameters of the rings
Major subdivisions
C Ring structures
Cassini Division structures
A Ring structures
D Ring
The D Ring is the innermost ring, and is very faint. In 1980, Voyager 1 detected within this ring three ringlets designated D73, D72 and D68, with D68 being the discrete ringlet nearest to Saturn. Some 25 years later, Cassini images showed that D72 had become significantly broader and more diffuse, and had moved planetward by 200 km (about 125 miles).
Present in the D Ring is a fine-scale structure with waves 30 km (20 miles) apart. First seen in the gap between the C Ring and D73, the structure was found during Saturn's 2009 equinox to extend a radial distance of 19,000 km (12,000 miles) from the D Ring to the inner edge of the B Ring. The waves are interpreted as a spiral pattern of vertical corrugations of 2 to 20 m amplitude; the fact that the period of the waves is decreasing over time (from 60 km (40 miles) in 1995 to 30 km (20 miles) by 2006) allows a deduction that the pattern may have originated in late 1983 with the impact of a cloud of debris (with a mass of ≈10¹² kg) from a disrupted comet that tilted the rings out of the equatorial plane. A similar spiral pattern in Jupiter's main ring has been attributed to a perturbation caused by the impact of material from Comet Shoemaker-Levy 9 in 1994.
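The late-1983 date can be recovered from the two quoted wavelengths if one assumes, as is standard for such winding spiral corrugations (the paragraph itself only states that the period decreases), that the radial wavenumber grows linearly with time after the tilting event, so the wavelength is inversely proportional to the elapsed time.

```python
# Winding back the D Ring corrugation from the two wavelengths quoted above.
# Assumption (not stated explicitly in the text): wavelength ~ 1 / (t - t0).
lam_1995, t_1995 = 60.0, 1995.0   # km, year
lam_2006, t_2006 = 30.0, 2006.0

# lam_1995 * (t_1995 - t0) == lam_2006 * (t_2006 - t0)  ->  solve for t0
t0 = (lam_1995 * t_1995 - lam_2006 * t_2006) / (lam_1995 - lam_2006)
print(f"Implied origin of the corrugation: ~{t0:.0f}")   # ~1984, close to the quoted late 1983
```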
C Ring
The C Ring is a wide but faint ring located inward of the B Ring. It was discovered in 1850 by William and George Bond, though William R. Dawes and Johann Galle also saw it independently. William Lassell termed it the "Crepe Ring" because it seemed to be composed of darker material than the brighter A and B Rings.
Its vertical thickness is estimated at 5 meters (16'), its mass at around 1.1 kg, and its optical depth varies from 0.05 to 0.12. That is, between 5 and 12 percent of light shining perpendicularly through the ring is blocked, so that when seen from above, the ring is close to transparent. The 30-km wavelength spiral corrugations first seen in the D Ring were observed during Saturn's equinox of 2009 to extend throughout the C Ring (see above).
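The "5 and 12 percent" figure follows directly from the definition of normal optical depth: the transmitted fraction at perpendicular incidence is e^(−τ). A small sketch of that conversion (editorial, not from the source):

```python
# Fraction of perpendicular light blocked for a given normal optical depth.
import math

for tau in (0.05, 0.12):
    blocked = 1.0 - math.exp(-tau)
    print(f"tau = {tau:.2f} -> {blocked:.1%} blocked")
# tau = 0.05 -> 4.9% blocked; tau = 0.12 -> 11.3% blocked,
# matching the quoted 5-12% range.
```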
Colombo Gap and Titan Ringlet
The Colombo Gap lies in the inner C Ring. Within the gap lies the bright but narrow Colombo Ringlet, centered at 77,883 km (48,394 miles) from Saturn's center, which is slightly elliptical rather than circular. This ringlet is also called the Titan Ringlet, as it is governed by an orbital resonance with the moon Titan. At this location within the rings, the period of a ring particle's apsidal precession is equal to the period of Titan's orbital motion, so that the outer end of this eccentric ringlet always points towards Titan.
Maxwell Gap and Ringlet
The Maxwell Gap lies within the outer part of the C Ring. It also contains a dense non-circular ringlet, the Maxwell Ringlet. In many respects this ringlet is similar to the ε ring of Uranus. There are wave-like structures in the middle of both rings. While the wave in the ε ring is thought to be caused by Uranian moon Cordelia, no moon has been discovered in the Maxwell gap as of July 2008.
B Ring
The B Ring is the largest, brightest, and most massive of the rings. Its thickness is estimated as 5 to 15 m and its optical depth varies from 0.4 to greater than 5, meaning that >99% of the light passing through some parts of the B Ring is blocked. The B Ring contains a great deal of variation in its density and brightness, nearly all of it unexplained. These variations are concentric, appearing as narrow ringlets, though the B Ring does not contain any gaps. In places, the outer edge of the B Ring contains vertical structures deviating up to 2.5 km (1½ miles) from the main ring plane, a significant deviation from the vertical thickness of the main A, B and C rings, which is generally only about 10 meters (about 30 feet). These vertical structures may be created by unseen embedded moonlets.
A 2016 study of spiral density waves using stellar occultations indicated that the B Ring's surface density is in the range of 40 to 140 g/cm², lower than previously believed, and that the ring's optical depth has little correlation with its mass density (a finding previously reported for the A and C rings). The total mass of the B Ring was estimated to be somewhere in the range of 7 to kg. This compares to a mass for Mimas of kg.
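For orientation, the mass scales as surface density times ring area. The sketch below uses the quoted surface densities together with assumed B Ring boundaries of roughly 92,000 km and 117,580 km (reference values not stated in this passage), so it is only an order-of-magnitude cross-check.

```python
# Order-of-magnitude B Ring mass: surface density x annular area.
import math

R_IN_M, R_OUT_M = 92_000e3, 117_580e3      # assumed ring boundaries (m)
area_m2 = math.pi * (R_OUT_M**2 - R_IN_M**2)

for sigma_g_cm2 in (40, 140):              # quoted surface-density range
    sigma_kg_m2 = sigma_g_cm2 * 10.0       # 1 g/cm^2 = 10 kg/m^2
    print(f"{sigma_g_cm2:3d} g/cm^2 -> ~{sigma_kg_m2 * area_m2:.1e} kg")
# roughly 7e18 to 2.4e19 kg, consistent with the ~7e18 kg lower bound quoted above
```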
Spokes
Until 1980, the structure of the rings of Saturn was explained as being caused exclusively by the action of gravitational forces. Then images from the Voyager spacecraft showed radial features in the B Ring, known as spokes, which could not be explained in this manner, as their persistence and rotation around the rings were not consistent with gravitational orbital mechanics. The spokes appear dark in backscattered light, and bright in forward-scattered light; the transition occurs at a phase angle near 60°. The leading hypothesis regarding the spokes' composition is that they consist of microscopic dust particles suspended away from the main ring by electrostatic repulsion, as they rotate almost synchronously with the magnetosphere of Saturn. The precise mechanism generating the spokes is still unknown. It has been suggested that the electrical disturbances might be caused by either lightning bolts in Saturn's atmosphere or micrometeoroid impacts on the rings. Alternatively, it has been proposed that the spokes are very similar to a phenomenon known as lunar horizon glow or dust levitation, caused by intense electric fields across the terminator of ring particles rather than by electrical disturbances.
The spokes were not observed again until some twenty-five years later, this time by the Cassini space probe. The spokes were not visible when Cassini arrived at Saturn in early 2004. Some scientists speculated that the spokes would not be visible again until 2007, based on models attempting to describe their formation. Nevertheless, the Cassini imaging team kept looking for spokes in images of the rings, and they were next seen in images taken on 5 September 2005.
The spokes appear to be a seasonal phenomenon, disappearing in the Saturnian midwinter and midsummer and reappearing as Saturn comes closer to equinox. Suggestions that the spokes may be a seasonal effect, varying with Saturn's 29.7-year orbit, were supported by their gradual reappearance in the later years of the Cassini mission.
Moonlet
In 2009, during equinox, a moonlet embedded in the B ring was discovered from the shadow it cast. It is estimated to be in diameter. The moonlet was given the provisional designation S/2009 S 1.
Cassini Division
The Cassini Division is a region in width between Saturn's A Ring and B Ring. It was discovered in 1675 by Giovanni Cassini at the Paris Observatory using a refracting telescope that had a 2.5-inch objective lens with a 20-foot-long focal length and a 90x magnification. From Earth it appears as a thin black gap in the rings. However, Voyager discovered that the gap is itself populated by ring material bearing much similarity to the C Ring. The division may appear bright in views of the unlit side of the rings, since the relatively low density of material allows more light to be transmitted through the thickness of the rings.
The inner edge of the Cassini Division is governed by a strong orbital resonance. Ring particles at this location orbit twice for every orbit of the moon Mimas. The resonance causes Mimas' repeated gravitational pulls on these ring particles to accumulate, destabilizing their orbits and leading to a sharp cutoff in ring density. Many of the other gaps between ringlets within the Cassini Division, however, remain unexplained.
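The location of this 2:1 resonance can be checked with Kepler's third law: a particle completing two orbits per Mimas orbit sits at a_Mimas · (1/2)^(2/3). The Mimas semi-major axis used below (~185,540 km) is an assumed reference value, not given in this passage; the small remaining offset from the actual edge is largely due to Saturn's oblateness, which the point-mass formula ignores.

```python
# Kepler's third law estimate of the 2:1 Mimas resonance radius.
A_MIMAS_KM = 185_540                      # assumed semi-major axis of Mimas

a_res_km = A_MIMAS_KM * 0.5 ** (2.0 / 3.0)
print(f"2:1 resonance at ~{a_res_km:,.0f} km from Saturn's center")
# ~116,900 km, close to the B Ring's outer edge / inner edge of the Cassini Division
```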
Huygens Gap
Discovered in 1981 through images sent back by Voyager 2, the Huygens Gap is located at the inner edge of the Cassini Division. It contains the dense, eccentric Huygens Ringlet in the middle. This ringlet exhibits irregular azimuthal variations of geometrical width and optical depth, which may be caused by the nearby 2:1 resonance with Mimas and the influence of the eccentric outer edge of the B-ring. There is an additional narrow ringlet just outside the Huygens Ringlet.
A Ring
The A Ring is the outermost of the large, bright rings. Its inner boundary is the Cassini Division and its sharp outer boundary is close to the orbit of the small moon Atlas. The A Ring is interrupted at a location 22% of the ring width from its outer edge by the Encke Gap. A narrower gap 2% of the ring width from the outer edge is called the Keeler Gap.
The thickness of the A Ring is estimated to be 10 to 30 m, its surface density from 35 to 40 g/cm² and its total mass as 4 to kg (just under the mass of Hyperion). Its optical depth varies from 0.4 to 0.9.
Similarly to the B Ring, the A Ring's outer edge is maintained by orbital resonances, albeit in this case a more complicated set. It is primarily acted on by the 7:6 resonance with Janus and Epimetheus, with other contributions from the 5:3 resonance with Mimas and various resonances with Prometheus and Pandora. Other orbital resonances also excite many spiral density waves in the A Ring (and, to a lesser extent, other rings as well), which account for most of its structure. These waves are described by the same physics that describes the spiral arms of galaxies. Spiral bending waves, also present in the A Ring and also described by the same theory, are vertical corrugations in the ring rather than compression waves.
In April 2014, NASA scientists reported observing the possible formative stage of a new moon near the outer edge of the A Ring.
Encke Gap
The Encke Gap is a 325-km (200 mile) wide gap within the A ring, centered at a distance of 133,590 km (83,000 miles) from Saturn's center. It is caused by the presence of the small moon Pan, which orbits within it. Images from the Cassini probe have shown that there are at least three thin, knotted ringlets within the gap. Spiral density waves visible on both sides of it are induced by resonances with nearby moons exterior to the rings, while Pan induces an additional set of spiralling wakes.
Johann Encke himself did not observe this gap; it was named in honour of his ring observations. The gap itself was discovered by James Edward Keeler in 1888. The second major gap in the A ring, discovered by Voyager, was named the Keeler Gap in his honor.
The Encke Gap is termed a gap, rather than a division, because it lies entirely within the A Ring. There was some ambiguity between the terms gap and division until the IAU clarified the definitions in 2008; before that, the separation was sometimes called the "Encke Division".
Keeler Gap
The Keeler Gap is a 42-km (26 mile) wide gap in the A ring, approximately 250 km (150 miles) from the ring's outer edge. The small moon Daphnis, discovered 1 May 2005, orbits within it, keeping it clear. The moon's passage induces waves in the edges of the gap (this is also influenced by its slight orbital eccentricity). Because the orbit of Daphnis is slightly inclined to the ring plane, the waves have a component that is perpendicular to the ring plane, reaching a distance of 1500 m "above" the plane.
The Keeler gap was discovered by Voyager, and named in honor of the astronomer James Edward Keeler. Keeler had in turn discovered and named the Encke Gap in honor of Johann Encke.
Propeller moonlets
In 2006, four tiny "moonlets" were found in Cassini images of the A Ring. The moonlets themselves are only about a hundred meters in diameter, too small to be seen directly; what Cassini sees are the "propeller"-shaped disturbances the moonlets create, which are several kilometers across. It is estimated that the A Ring contains thousands of such objects. In 2007, the discovery of eight more moonlets revealed that they are largely confined to a 3,000 km (2000 mile) belt, about 130,000 km (80,000 miles) from Saturn's center, and by 2008 over 150 propeller moonlets had been detected. One that has been tracked for several years has been nicknamed Bleriot.
Roche Division
The separation between the A ring and the F Ring has been named the Roche Division in honor of the French physicist Édouard Roche. The Roche Division should not be confused with the Roche limit which is the distance at which a large object is so close to a planet (such as Saturn) that the planet's tidal forces will pull it apart. Lying at the outer edge of the main ring system, the Roche Division is in fact close to Saturn's Roche limit, which is why the rings have been unable to accrete into a moon.
Like the Cassini Division, the Roche Division is not empty but contains a sheet of material. The character of this material is similar to the tenuous and dusty D, E, and G Rings. Two locations in the Roche Division have a higher concentration of dust than the rest of the region. These were discovered by the Cassini probe imaging team and were given temporary designations: R/2004 S 1, which lies along the orbit of the moon Atlas; and R/2004 S 2, centered at 138,900 km (86,300 miles) from Saturn's center, inward of the orbit of Prometheus.
F Ring
The F Ring is the outermost discrete ring of Saturn and perhaps the most active ring in the Solar System, with features changing on a timescale of hours. It is located 3,000 km (2000 miles) beyond the outer edge of the A ring. The ring was discovered in 1979 by the Pioneer 11 imaging team. It is very thin, just a few hundred kilometers in radial extent. While the traditional view has been that it is held together by two shepherd moons, Prometheus and Pandora, which orbit inside and outside it, recent studies indicate that only Prometheus contributes to the confinement. Numerical simulations suggest the ring was formed when Prometheus and Pandora collided with each other and were partially disrupted.
More recent closeup images from the Cassini probe show that the F Ring consists of one core ring and a spiral strand around it. They also show that when Prometheus encounters the ring at its apoapsis, its gravitational attraction creates kinks and knots in the F Ring as the moon 'steals' material from it, leaving a dark channel in the inner part of the ring. Since Prometheus orbits Saturn more rapidly than the material in the F ring, each new channel is carved about 3.2 degrees in front of the previous one.
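The ~3.2° spacing follows from the mismatch in orbital periods: Prometheus encounters the F Ring once per orbit (at apoapsis), and by the next encounter it has gained a small fraction of a revolution on the ring material. The semi-major axes used below (~139,380 km for Prometheus, ~140,220 km for the F Ring core) are assumed reference values, not quoted in the passage.

```python
# Longitude gained by Prometheus on the F Ring core per Prometheus orbit,
# using Kepler's third law (period ~ a**1.5).

A_PROMETHEUS_KM = 139_380     # assumed
A_F_RING_KM = 140_220         # assumed

drift_deg = 360.0 * (1.0 - (A_PROMETHEUS_KM / A_F_RING_KM) ** 1.5)
print(f"channel spacing ~ {drift_deg:.1f} degrees")   # ~3.2 degrees
```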
In 2008, further dynamism was detected, suggesting that small unseen moons orbiting within the F Ring are continually passing through its narrow core because of perturbations from Prometheus. One of the small moons was tentatively identified as S/2004 S 6.
As of 2023, the clumpy structure of the ring "is thought to be caused by the presence of thousands of small parent bodies (1.0 to 0.1 km in size) that collide and produce dense strands of micrometer- to centimeter-sized particles that re-accrete over a few months onto the parent bodies in a steady-state regime."
Outer rings
Janus/Epimetheus Ring
A faint dust ring is present around the region occupied by the orbits of Janus and Epimetheus, as revealed by images taken in forward-scattered light by the Cassini spacecraft in 2006. The ring has a radial extent of about 5,000 km (3000 miles). Its source is particles blasted off the moons' surfaces by meteoroid impacts, which then form a diffuse ring around their orbital paths.
G Ring
The G Ring is a very thin, faint ring about halfway between the F Ring and the beginning of the E Ring, with its inner edge about 15,000 km (10,000 miles) inside the orbit of Mimas. It contains a single distinctly brighter arc near its inner edge (similar to the arcs in the rings of Neptune) that extends about one-sixth of its circumference, centered on the half-km (500 yard) diameter moonlet Aegaeon, which is held in place by a 7:6 orbital resonance with Mimas. The arc is believed to be composed of icy particles up to a few m in diameter, with the rest of the G Ring consisting of dust released from within the arc. The radial width of the arc is about 250 km (150 miles), compared to a width of 9,000 km (6000 miles) for the G Ring as a whole. The arc is thought to contain matter equivalent to a small icy moonlet about a hundred m in diameter. Dust released from Aegaeon and other source bodies within the arc by micrometeoroid impacts drifts outward from the arc because of interaction with Saturn's magnetosphere (whose plasma corotates with Saturn's magnetic field, which rotates much more rapidly than the orbital motion of the G Ring). These tiny particles are steadily eroded away by further impacts and dispersed by plasma drag. Over the course of thousands of years the ring gradually loses mass, which is replenished by further impacts on Aegaeon.
Methone Ring Arc
A faint ring arc, first detected in September 2006, covering a longitudinal extent of about 10 degrees is associated with the moon Methone. The material in the arc is believed to represent dust ejected from Methone by micrometeoroid impacts. The confinement of the dust within the arc is attributable to a 14:15 resonance with Mimas (similar to the mechanism of confinement of the arc within the G ring). Under the influence of the same resonance, Methone librates back and forth in its orbit with an amplitude of 5° of longitude.
Anthe Ring Arc
A faint ring arc, first detected in June 2007, covering a longitudinal extent of about 20 degrees is associated with the moon Anthe. The material in the arc is believed to represent dust knocked off Anthe by micrometeoroid impacts. The confinement of the dust within the arc is attributable to a 10:11 resonance with Mimas. Under the influence of the same resonance, Anthe drifts back and forth in its orbit over 14° of longitude.
Pallene Ring
A faint dust ring shares Pallene's orbit, as revealed by images taken in forward-scattered light by the Cassini spacecraft in 2006. The ring has a radial extent of about 2,500 km (1500 miles). Its source is particles blasted off Pallene's surface by meteoroid impacts, which then form a diffuse ring around its orbital path.
E Ring
Although not confirmed until 1980, the existence of the E ring was a subject of debate among astronomers at least as far back as 1908. In a narrative timeline of Saturn observations, Arthur Francis O'Donel Alexander attributes the first observation of what would come to be called the E Ring to Georges Fournier, who on 5 September 1907 at Mont Revard observed a "luminous zone" "surrounding the outer bright ring." The next year, on 7 October 1908, E. Schaer independently observed "a new dusky ring...surrounding the bright rings of Saturn" at the Geneva Observatory. Following up on Schaer's discovery, W. Boyer, T. Lewis, and Arthur Eddington found signs of a discontinuous ring matching Schaer's description, but described their observations as "uncertain." Edward Barnard, however, using what was at the time the world's best telescope, failed to find signs of a ring. E. M. Antoniadi argued for the ring's existence in a 1909 publication, recalling an observation by William Wray on 26 December 1861 of a "very faint light...so as to give the impression that it was the dusky ring," but after Barnard's negative result most astronomers became skeptical of the E Ring's existence.
Unlike the A, B, and C rings, the E Ring's small optical depth and large vertical extent mean it is best viewed edge-on, which is only possible once every 14–15 years; perhaps for this reason, it was not until the 1960s that the E Ring was again the subject of observations. Although some sources credit Walter Feibelman with the E Ring's discovery in 1966, his paper published the following year announcing the observations begins by acknowledging the existing controversy and the long record of observations both supporting and disputing the ring's existence, and carefully stresses his interpretation of the data as a new ring as "tentative only." A reanalysis of Feibelman's original observations, conducted in anticipation of the coming Saturn flyby by Pioneer 11, once again called the evidence for this outer ring "shaky." Even polarimetric observations by Pioneer 11 failed to conclusively identify the E Ring during its 1979 flyby, though "its existence was inferred from [particle, radiation, and magnetic field measurements]." Only after a digital reanalysis of the 1966 observations, as well as several independent observations using ground- and space-based telescopes, was the ring's existence finally confirmed, in a 1980 paper by Feibelman and Klinglesmith.
The E Ring is the second outermost ring and is extremely wide; it consists of many tiny (micron and sub-micron) particles of water ice with silicates, carbon dioxide and ammonia. The E Ring is distributed between the orbits of Mimas and Titan. Unlike the other rings, it is composed of microscopic particles rather than macroscopic ice chunks. In 2005, the source of the E Ring's material was determined to be cryovolcanic plumes emanating from the "tiger stripes" of the south polar region of the moon Enceladus. Unlike the main rings, the E Ring is more than 2,000 km (1000 miles) thick and increases in thickness with distance from Enceladus. Tendril-like structures observed within the E Ring can be related to the emissions of the most active south polar jets of Enceladus.
Particles of the E Ring tend to accumulate on moons that orbit within it. The equator of the leading hemisphere of Tethys is tinted slightly blue due to infalling material. The trojan moons Telesto, Calypso, Helene and Polydeuces are particularly affected as their orbits move up and down the ring plane. This results in their surfaces being coated with bright material that smooths out features.
Phoebe ring
In October 2009, the discovery of a tenuous disk of material just interior to the orbit of Phoebe was reported. The disk was aligned edge-on to Earth at the time of discovery. This disk can be loosely described as another ring. Although very large (as seen from Earth, the apparent size of two full moons), the ring is virtually invisible. It was discovered using NASA's infrared Spitzer Space Telescope, and was seen over the entire range of the observations, which extended from 128 to 207 times the radius of Saturn, with calculations indicating that it may extend outward up to 300 Saturn radii and inward to the orbit of Iapetus at 59 Saturn radii. The ring was subsequently studied using the WISE, Herschel and Cassini spacecraft; WISE observations show that it extends from an inner edge at or inside 50–100 Saturn radii (the inner edge is lost in the planet's glare) out to 270 Saturn radii. Data obtained with WISE indicate the ring particles are small; those with radii greater than 10 cm comprise 10% or less of the cross-sectional area.
Phoebe orbits the planet at a distance ranging from 180 to 250 radii. The ring has a thickness of about 40 radii. Because the ring's particles are presumed to have originated from impacts (micrometeoroid and larger) on Phoebe, they should share its retrograde orbit, which is opposite to the orbital motion of the next inner moon, Iapetus. This ring lies in the plane of Saturn's orbit, or roughly the ecliptic, and thus is tilted 27 degrees from Saturn's equatorial plane and the other rings. Phoebe is inclined by 5° with respect to Saturn's orbit plane (often written as 175°, due to Phoebe's retrograde orbital motion), and its resulting vertical excursions above and below the ring plane agree closely with the ring's observed thickness of 40 Saturn radii.
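The agreement between Phoebe's inclination and the ring's thickness can be checked directly from the figures in the passage: a body on an orbit inclined by i sweeps a vertical band of roughly 2·r·sin(i). A short sketch (editorial, using only the numbers above):

```python
# Vertical band swept by an orbit inclined ~5 degrees, at Phoebe-like distances.
import math

INCLINATION_DEG = 5.0
for r_saturn_radii in (180, 215, 250):     # Phoebe's radial range (215 = midpoint)
    extent = 2.0 * r_saturn_radii * math.sin(math.radians(INCLINATION_DEG))
    print(f"r = {r_saturn_radii} R_S -> vertical extent ~ {extent:.0f} R_S")
# ~31-44 Saturn radii, consistent with the ~40-radii ring thickness
```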
The existence of the ring was proposed in the 1970s by Steven Soter. The discovery was made by Anne J. Verbiscer and Michael F. Skrutskie (of the University of Virginia) and Douglas P. Hamilton (of the University of Maryland, College Park). The three had studied together at Cornell University as graduate students.
Ring material migrates inward due to reemission of solar radiation, with a speed inversely proportional to particle size; a 3 cm particle would migrate from the vicinity of Phoebe to that of Iapetus over the age of the Solar System. The material would thus strike the leading hemisphere of Iapetus. Infall of this material causes a slight darkening and reddening of the leading hemisphere of Iapetus (similar to what is seen on the Uranian moons Oberon and Titania) but does not directly create the dramatic two-tone coloration of that moon. Rather, the infalling material initiates a positive feedback thermal self-segregation process of ice sublimation from warmer regions, followed by vapor condensation onto cooler regions. This leaves a dark residue of "lag" material covering most of the equatorial region of Iapetus's leading hemisphere, which contrasts with the bright ice deposits covering the polar regions and most of the trailing hemisphere.
Possible ring system around Rhea
Saturn's second largest moon Rhea has been hypothesized to have a tenuous ring system of its own consisting of three narrow bands embedded in a disk of solid particles. These putative rings have not been imaged, but their existence has been inferred from Cassini observations in November 2005 of a depletion of energetic electrons in Saturn's magnetosphere near Rhea. The Magnetospheric Imaging Instrument (MIMI) observed a gentle gradient punctuated by three sharp drops in plasma flow on each side of the moon in a nearly symmetric pattern. This could be explained if they were absorbed by solid material in the form of an equatorial disk containing denser rings or arcs, with particles perhaps several decimeters to approximately a meter in diameter. A more recent piece of evidence consistent with the presence of Rhean rings is a set of small ultraviolet-bright spots distributed in a line that extends three quarters of the way around the moon's circumference, within 2 degrees of the equator. The spots have been interpreted as the impact points of deorbiting ring material. However, targeted observations by Cassini of the putative ring plane from several angles have turned up nothing, suggesting that another explanation for these enigmatic features is needed.
| Physical sciences | Solar System | Astronomy |
978913 | https://en.wikipedia.org/wiki/Brittleness | Brittleness | A material is brittle if, when subjected to stress, it fractures with little elastic deformation and without significant plastic deformation. Brittle materials absorb relatively little energy prior to fracture, even those of high strength. Breaking is often accompanied by a sharp snapping sound.
When used in materials science, the term is generally applied to materials that fail with little or no plastic deformation before fracture. One test is to fit the broken halves back together: they should match exactly, since no plastic deformation has occurred.
Brittleness in different materials
Polymers
Mechanical characteristics of polymers can be sensitive to temperature changes near room temperature. For example, poly(methyl methacrylate) is extremely brittle at a temperature of 4 °C, but experiences increased ductility with increasing temperature.
Amorphous polymers are polymers that can behave differently at different temperatures. They may behave like a glass at low temperatures (the glassy region), a rubbery solid at intermediate temperatures (the leathery or glass transition region), and a viscous liquid at higher temperatures (the rubbery flow and viscous flow region). This behavior is known as viscoelastic behavior. In the glassy region, the amorphous polymer will be rigid and brittle. With increasing temperature, the polymer will become less brittle.
Metals
Some metals show brittle characteristics due to their slip systems. The more slip systems a metal has, the less brittle it is, because plastic deformation can occur along many of these slip systems. Conversely, with fewer slip systems, less plastic deformation can occur, and the metal will be more brittle. For example, HCP (hexagonal close packed) metals have few active slip systems, and are typically brittle.
Ceramics
Ceramics are generally brittle due to the difficulty of dislocation motion, or slip. There are few slip systems in crystalline ceramics that a dislocation is able to move along, which makes deformation difficult and makes the ceramic more brittle.
Ceramic materials generally exhibit ionic bonding. Because of the ions’ electric charge and their repulsion of like-charged ions, slip is further restricted.
Changing brittle materials
Materials can be changed to become more brittle or less brittle.
Toughening
When a material has reached the limit of its strength, it usually has the option of either deformation or fracture. A naturally malleable metal can be made stronger by impeding the mechanisms of plastic deformation (reducing grain size, precipitation hardening, work hardening, etc.), but if this is taken to an extreme, fracture becomes the more likely outcome, and the material can become brittle. Improving material toughness is, therefore, a balancing act.
Naturally brittle materials, such as glass, are not difficult to toughen effectively. Most such techniques involve one of two mechanisms: to deflect or absorb the tip of a propagating crack, or to create carefully controlled residual stresses so that cracks from certain predictable sources will be forced closed. The first principle is used in laminated glass, where two sheets of glass are separated by an interlayer of polyvinyl butyral; as a viscoelastic polymer, the polyvinyl butyral absorbs the growing crack. The second method is used in toughened glass and pre-stressed concrete. A demonstration of glass toughening is provided by Prince Rupert's Drop. Brittle polymers can be toughened by using rubber particles to initiate crazes when a sample is stressed, a good example being high-impact polystyrene (HIPS). The least brittle structural ceramics are silicon carbide (mainly by virtue of its high strength) and transformation-toughened zirconia.
A different philosophy is used in composite materials, where brittle glass fibers, for example, are embedded in a ductile matrix such as polyester resin. When strained, cracks are formed at the glass–matrix interface, but so many are formed that much energy is absorbed and the material is thereby toughened. The same principle is used in creating metal matrix composites.
Effect of pressure
Generally, the brittle strength of a material can be increased by pressure. This happens as an example in the brittle–ductile transition zone at an approximate depth of in the Earth's crust, at which rock becomes less likely to fracture, and more likely to deform ductilely (see rheid).
Crack growth
Supersonic fracture is crack motion faster than the speed of sound in a brittle material. This phenomenon was first discovered by scientists from the Max Planck Institute for Metals Research in Stuttgart (Markus J. Buehler and Huajian Gao) and IBM Almaden Research Center in San Jose, California (Farid F. Abraham).
| Physical sciences | Solid mechanics | Physics |
979227 | https://en.wikipedia.org/wiki/Rings%20of%20Jupiter | Rings of Jupiter | The rings of Jupiter are a system of faint planetary rings. The Jovian rings were the third ring system to be discovered in the Solar System, after those of Saturn and Uranus. The main ring was discovered in 1979 by the Voyager 1 space probe and the system was more thoroughly investigated in the 1990s by the Galileo orbiter. The main ring has also been observed by the Hubble Space Telescope and from Earth for several years. Ground-based observation of the rings requires the largest available telescopes.
The Jovian ring system is faint and consists mainly of dust. It has four main components: a thick inner torus of particles known as the "halo ring"; a relatively bright, exceptionally thin "main ring"; and two wide, thick and faint outer "gossamer rings", named for the moons of whose material they are composed: Amalthea and Thebe.
The main and halo rings consist of dust ejected from the moons Metis, Adrastea and perhaps smaller, unobserved bodies as the result of high-velocity impacts. High-resolution images obtained in February and March 2007 by the New Horizons spacecraft revealed a rich fine structure in the main ring.
In visible and near-infrared light, the rings have a reddish color, except the halo ring, which is neutral or blue in color. The size of the dust in the rings varies, but the cross-sectional area is greatest for nonspherical particles of radius about 15 μm in all rings except the halo. The halo ring is probably dominated by submicrometre dust. The total mass of the ring system (including unresolved parent bodies) is poorly constrained, but is probably in the range of 10¹¹ to 10¹⁶ kg. The age of the ring system is also not known, but it is possible that it has existed since the formation of Jupiter.
A ring or ring arc appears to exist close to the moon Himalia's orbit. One explanation is that a small moon recently crashed into Himalia and the force of the impact ejected the material that forms the ring.
Discovery and structure
Jupiter's ring system was the third to be discovered in the Solar System, after those of Saturn and Uranus. It was first observed on 4 March 1979 by the Voyager 1 space probe. It is composed of four main components: a thick inner torus of particles known as the "halo ring"; a relatively bright, exceptionally thin "main ring"; and two wide, thick and faint outer "gossamer rings", named after the moons of whose material they are composed: Amalthea and Thebe. The principal attributes of the known Jovian Rings are listed in the table.
In 2022, dynamical simulations suggested that the relative meagreness of Jupiter's ring system, compared to that of the smaller Saturn, is due to destabilising resonances created by the Galilean satellites.
Main ring
Appearance and structure
The narrow and relatively thin main ring is the brightest part of Jupiter's ring system. Its outer edge is located at a radius of about (; = equatorial radius of Jupiter or ) and coincides with the orbit of Jupiter's smallest inner satellite, Adrastea. Its inner edge is not marked by any satellite and is located at about ().
Thus the width of the main ring is around . The appearance of the main ring depends on the viewing geometry. In forward-scattered light the brightness of the main ring begins to decrease steeply at (just inward of the Adrastean orbit) and reaches the background level at —just outward of the Adrastean orbit. Therefore, Adrastea at clearly shepherds the ring. The brightness continues to increase in the direction of Jupiter and has a maximum near the ring's center at , although there is a pronounced gap (notch) near the Metidian orbit at . The inner boundary of the main ring, in contrast, appears to fade off slowly from to , merging into the halo ring. In forward-scattered light all Jovian rings are especially bright.
In back-scattered light the situation is different. The outer boundary of the main ring, located at , or slightly beyond the orbit of Adrastea, is very steep. The orbit of the moon is marked by a gap in the ring so there is a thin ringlet just outside its orbit. There is another ringlet just inside Adrastean orbit followed by a gap of unknown origin located at about . The third ringlet is found inward of the central gap, outside the orbit of Metis. The ring's brightness drops sharply just outward of the Metidian orbit, forming the Metis notch. Inward of the orbit of Metis, the brightness of the ring rises much less than in forward-scattered light. So in the back-scattered geometry the main ring appears to consist of two different parts: a narrow outer part extending from to , which itself includes three narrow ringlets separated by notches, and a fainter inner part from to , which lacks any visible structure like in the forward-scattering geometry. The Metis notch serves as their boundary. The fine structure of the main ring was discovered in data from the Galileo orbiter and is clearly visible in back-scattered images obtained from New Horizons in February–March 2007. The early observations by Hubble Space Telescope (HST), Keck and the Cassini spacecraft failed to detect it, probably due to insufficient spatial resolution. However the fine structure was observed by the Keck telescope using adaptive optics in 2002–2003.
Observed in back-scattered light the main ring appears to be razor thin, extending in the vertical direction no more than 30 km. In the side scatter geometry the ring thickness is 80–160 km, increasing somewhat in the direction of Jupiter. The ring appears to be much thicker in the forward-scattered light—about 300 km. One of the discoveries of the Galileo orbiter was the bloom of the main ring—a faint, relatively thick (about 600 km) cloud of material which surrounds its inner part. The bloom grows in thickness towards the inner boundary of the main ring, where it transitions into the halo.
Detailed analysis of the Galileo images revealed longitudinal variations of the main ring's brightness unconnected with the viewing geometry. The Galileo images also showed some patchiness in the ring on the scales 500–1000 km.
In February–March 2007 New Horizons spacecraft conducted a deep search for new small moons inside the main ring. While no satellites larger than 0.5 km were found, the cameras of the spacecraft detected seven small clumps of ring particles. They orbit just inside the orbit of Adrastea inside a dense ringlet. The conclusion, that they are clumps and not small moons, is based on their azimuthally extended appearance. They subtend 0.1–0.3° along the ring, which correspond to –. The clumps are divided into two groups of five and two members, respectively. The nature of the clumps is not clear, but their orbits are close to 115:116 and 114:115 resonances with Metis. They may be wavelike structures excited by this interaction.
Spectra and particle size distribution
Spectra of the main ring obtained by the HST, Keck, Galileo and Cassini have shown that the particles forming it are red, i.e. their albedo is higher at longer wavelengths. The existing spectra span the range 0.5–2.5 μm. No spectral features have been found so far which can be attributed to particular chemical compounds, although the Cassini observations yielded evidence for absorption bands near 0.8 μm and 2.2 μm. The spectra of the main ring are very similar to those of Adrastea and Amalthea.
The properties of the main ring can be explained by the hypothesis that it contains significant amounts of dust with 0.1–10 μm particle sizes. This explains the stronger forward-scattering of light as compared to back-scattering. However, larger bodies are required to explain the strong back-scattering and fine structure in the bright outer part of the main ring.
Analysis of available phase and spectral data leads to the conclusion that the size distribution of small particles in the main ring obeys a power law of the form n(r) = A r^(−q), where n(r) dr is the number of particles with radii between r and r + dr, and A is a normalizing parameter chosen to match the known total light flux from the ring. The parameter q is 2.0 ± 0.2 for particles with r < 15 ± 0.3 μm and q = 5 ± 1 for those with r > 15 ± 0.3 μm. The distribution of large bodies in the mm–km size range is presently undetermined. The light scattering in this model is dominated by particles with r around 15 μm.
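The claim that grains near 15 μm dominate the scattering can be read off the broken power law: weighting n(r) by the geometric cross-section πr², the contribution per logarithmic size bin scales as r³·n(r), which rises for q = 2 and falls steeply for q = 5, peaking at the break. A small numerical sketch (editorial; the normalisation is arbitrary):

```python
# Cross-section per logarithmic size bin for the broken power law n(r) ~ r**-q,
# with q = 2 below the 15-micron break and q = 5 above it.
import numpy as np

r = np.logspace(-1, 3, 400)                 # grain radius in microns
q = np.where(r < 15.0, 2.0, 5.0)
n = r ** -q
n[r >= 15.0] *= 15.0 ** 3                   # keep n(r) continuous at the break

area_per_log_bin = np.pi * r**3 * n         # ~ pi*r^2 * (n*r), per dex of size
print(f"cross-section peaks near r ~ {r[np.argmax(area_per_log_bin)]:.0f} microns")
```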
The power law mentioned above allows estimation of the optical depth of the main ring: for the large bodies and for the dust. This optical depth means that the total cross section of all particles inside the ring is about 5000 km². The particles in the main ring are expected to have aspherical shapes. The total mass of the dust is estimated to be 10⁷–10⁹ kg. The mass of large bodies, excluding Metis and Adrastea, is 10¹¹–10¹⁶ kg. It depends on their maximum size—the upper value corresponds to about 1 km maximum diameter. These masses can be compared with masses of Adrastea, which is about 2 kg, Amalthea, about 2 kg, and Earth's Moon, 7.4 kg.
The presence of two populations of particles in the main ring explains why its appearance depends on the viewing geometry. The dust scatters light preferably in the forward direction and forms a relatively thick homogenous ring bounded by the orbit of Adrastea. In contrast, large particles, which scatter in the back direction, are confined in a number of ringlets between the Metidian and Adrastean orbits.
Origin and age
The dust is constantly being removed from the main ring by a combination of Poynting–Robertson drag and electromagnetic forces from the Jovian magnetosphere. Volatile materials such as ices, for example, evaporate quickly. The lifetime of dust particles in the ring is from 100 to , so the dust must be continuously replenished in the collisions between large bodies with sizes from 1 cm to 0.5 km and between the same large bodies and high velocity particles coming from outside the Jovian system. This parent body population is confined to the narrow—about —and bright outer part of the main ring, and includes Metis and Adrastea. The largest parent bodies must be less than 0.5 km in size. The upper limit on their size was obtained by New Horizons spacecraft. The previous upper limit, obtained from HST and Cassini observations, was near 4 km. The dust produced in collisions retains approximately the same orbital elements as the parent bodies and slowly spirals in the direction of Jupiter forming the faint (in back-scattered light) innermost part of the main ring and halo ring. The age of the main ring is currently unknown, but it may be the last remnant of a past population of small bodies near Jupiter.
Vertical corrugations
Images from the Galileo and New Horizons space probes show the presence of two sets of spiraling vertical corrugations in the main ring. These waves became more tightly wound over time at the rate expected for differential nodal regression in Jupiter's gravity field. Extrapolating backwards, the more prominent of the two sets of waves appears to have been excited in 1995, around the time of the impact of Comet Shoemaker-Levy 9 with Jupiter, while the smaller set appears to date to the first half of 1990. Galileo's November 1996 observations are consistent with wavelengths of and , and vertical amplitudes of and , for the larger and smaller sets of waves, respectively. The formation of the larger set of waves can be explained if the ring was impacted by a cloud of particles released by the comet with a total mass on the order of 2–5 × 10¹² kg, which would have tilted the ring out of the equatorial plane by 2 km. A similar spiraling wave pattern that tightens over time has been observed by Cassini in Saturn's C and D rings.
Halo ring
Appearance and structure
The halo ring is the innermost and the vertically thickest Jovian ring. Its outer edge coincides with the inner boundary of the main ring approximately at the radius (). From this radius the ring becomes rapidly thicker towards Jupiter. The true vertical extent of the halo is not known but the presence of its material was detected as high as over the ring plane. The inner boundary of the halo is relatively sharp and located at the radius (), but some material is present further inward to approximately . Thus the width of the halo ring is about . Its shape resembles a thick torus without clear internal structure. In contrast to the main ring, the halo's appearance depends only slightly on the viewing geometry.
The halo ring appears brightest in forward-scattered light, in which it was extensively imaged by Galileo. While its surface brightness is much less than that of the main ring, its vertically (perpendicular to the ring plane) integrated photon flux is comparable due to its much larger thickness. Despite a claimed vertical extent of more than , the halo's brightness is strongly concentrated towards the ring plane and follows a power law of the form z^(−0.6) to z^(−1.5), where z is altitude over the ring plane. The halo's appearance in the back-scattered light, as observed by Keck and HST, is the same. However its total photon flux is several times lower than that of the main ring and is more strongly concentrated near the ring plane than in the forward-scattered light.
The spectral properties of the halo ring are different from the main ring. The flux distribution in the range 0.5–2.5 μm is flatter than in the main ring; the halo is not red and may even be blue.
Origin of the halo ring
The optical properties of the halo ring can be explained by the hypothesis that it comprises only dust with particle sizes less than 15 μm. Parts of the halo located far from the ring plane may consist of submicrometre dust. This dusty composition explains the much stronger forward-scattering, bluer colors and lack of visible structure in the halo. The dust probably originates in the main ring, a claim supported by the fact that the halo's optical depth is comparable with that of the dust in the main ring. The large thickness of the halo can be attributed to the excitation of orbital inclinations and eccentricities of dust particles by the electromagnetic forces in the Jovian magnetosphere. The outer boundary of the halo ring coincides with location of a strong 3:2 Lorentz resonance. As Poynting–Robertson drag causes particles to slowly drift towards Jupiter, their orbital inclinations are excited while passing through it. The bloom of the main ring may be a beginning of the halo. The halo ring's inner boundary is not far from the strongest 2:1 Lorentz resonance. In this resonance the excitation is probably very significant, forcing particles to plunge into the Jovian atmosphere thus defining a sharp inner boundary. Being derived from the main ring, the halo has the same age.
Gossamer rings
Amalthea gossamer ring
The Amalthea gossamer ring is a very faint structure with a rectangular cross section, stretching from the orbit of Amalthea at (2.54 RJ) to about (). Its inner boundary is not clearly defined because of the presence of the much brighter main ring and halo. The thickness of the ring is approximately 2300 km near the orbit of Amalthea and slightly decreases in the direction of Jupiter. The Amalthea gossamer ring is actually brightest near its top and bottom edges and becomes gradually brighter towards Jupiter; one of the edges is often brighter than the other. The outer boundary of the ring is relatively steep; the ring's brightness drops abruptly just inward of the orbit of Amalthea, although it may have a small extension beyond the orbit of the satellite ending near the 4:3 resonance with Thebe. In forward-scattered light the ring appears to be about 30 times fainter than the main ring. In back-scattered light it has been detected only by the Keck telescope and the ACS (Advanced Camera for Surveys) on HST. Back-scattering images show additional structure in the ring: a peak in the brightness just inside the Amalthean orbit and confined to the top or bottom edge of the ring.
In 2002–2003 the Galileo spacecraft made two passes through the gossamer rings. During them its dust counter detected dust particles in the size range 0.2–5 μm. In addition, the Galileo spacecraft's star scanner detected small, discrete bodies (< 1 km) near Amalthea. These may represent collisional debris generated from impacts with this satellite.
Detections of the Amalthea gossamer ring from the ground and in Galileo images, together with the direct dust measurements, have allowed the determination of the particle size distribution, which appears to follow the same power law as the dust in the main ring with q = 2 ± 0.5. The optical depth of this ring is about 10⁻⁷, which is an order of magnitude lower than that of the main ring, but the total mass of the dust (10⁷–10⁹ kg) is comparable.
Thebe gossamer ring
The Thebe gossamer ring is the faintest Jovian ring. It appears as a very faint structure with a rectangular cross section, stretching from the Thebean orbit at () to about (;). Its inner boundary is not clearly defined because of the presence of the much brighter main ring and halo. The thickness of the ring is approximately 8400 km near the orbit of Thebe and slightly decreases in the direction of the planet. The Thebe gossamer ring is brightest near its top and bottom edges and gradually becomes brighter towards Jupiter—much like the Amalthea ring. The outer boundary of the ring is not especially steep, stretching over . There is a barely visible continuation of the ring beyond the orbit of Thebe, extending up to () and called the Thebe Extension. In forward-scattered light the ring appears to be about 3 times fainter than the Amalthea gossamer ring. In back-scattered light it has been detected only by the Keck telescope. Back-scattering images show a peak of brightness just inside the orbit of Thebe. In 2002–2003 the dust counter of the Galileo spacecraft detected dust particles in the size range 0.2–5 μm—similar to those in the Amalthea ring—and confirmed the results obtained from imaging.
The optical depth of the Thebe gossamer ring is about 3, which is three times lower than the Amalthea gossamer ring, but the total mass of the dust is the same—about 10⁷–10⁹ kg. However the particle size distribution of the dust is somewhat shallower than in the Amalthea ring. It follows a power law with q < 2. In the Thebe extension the parameter q may be even smaller.
Origin of the gossamer rings
The dust in the gossamer rings originates in essentially the same way as that in the main ring and halo. Its sources are the inner Jovian moons Amalthea and Thebe respectively. High velocity impacts by projectiles coming from outside the Jovian system eject dust particles from their surfaces. These particles initially retain the same orbits as their moons but then gradually spiral inward by Poynting–Robertson drag. The thickness of the gossamer rings is determined by vertical excursions of the moons due to their nonzero orbital inclinations. This hypothesis naturally explains almost all observable properties of the rings: rectangular cross-section, decrease of thickness in the direction of Jupiter and brightening of the top and bottom edges of the rings.
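The statement that the gossamer rings' thicknesses are set by the source moons' vertical excursions can be checked with full thickness ≈ 2·a·sin(i). The semi-major axes and inclinations used below (Amalthea: ~181,400 km, ~0.37°; Thebe: ~221,900 km, ~1.08°) are assumed reference values, not quoted in this passage.

```python
# Gossamer-ring thickness from each source moon's vertical excursion, 2*a*sin(i).
import math

moons = {
    "Amalthea": (181_400, 0.37),   # assumed semi-major axis (km), inclination (deg)
    "Thebe": (221_900, 1.08),
}
for name, (a_km, i_deg) in moons.items():
    thickness = 2.0 * a_km * math.sin(math.radians(i_deg))
    print(f"{name}: ~{thickness:,.0f} km")
# ~2,300 km and ~8,400 km, matching the thicknesses quoted for the two rings
```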
However some properties have so far gone unexplained, like the Thebe Extension, which may be due to unseen bodies outside Thebe's orbit, and structures visible in the back-scattered light. One possible explanation of the Thebe Extension is influence of the electromagnetic forces from the Jovian magnetosphere. When the dust enters the shadow behind Jupiter, it loses its electrical charge fairly quickly. Since the small dust particles partially corotate with the planet, they will move outward during the shadow pass creating an outward extension of the Thebe gossamer ring. The same forces can explain a dip in the particle distribution and ring's brightness, which occurs between the orbits of Amalthea and Thebe.
The peak in the brightness just inside Amalthea's orbit, and therefore the vertical asymmetry of the Amalthea gossamer ring, may be due to dust particles trapped at the leading (L4) and trailing (L5) Lagrange points of this moon. The particles may also follow horseshoe orbits between the Lagrangian points. The dust may be present at the leading and trailing Lagrange points of Thebe as well. This discovery implies that there are two particle populations in the gossamer rings: one slowly drifts in the direction of Jupiter as described above, while another remains near a source moon, trapped in 1:1 resonance with it.
Himalia ring
In September 2006, as NASA's New Horizons mission to Pluto approached Jupiter for a gravity assist, it photographed what appeared to be a faint, previously unknown planetary ring or ring arc, parallel with and slightly inside the orbit of the irregular satellite Himalia. The amount of material in the part of the ring or arc imaged by New Horizons was at least 0.04 km³, assuming it had the same albedo as Himalia. If the ring (arc) is debris from Himalia, it must have formed quite recently, given the century-scale precession of the Himalian orbit. It is possible that the ring could be debris from the impact of a very small undiscovered moon into Himalia, suggesting that Jupiter might continue to gain and lose small moons through collisions.
Exploration
The existence of the Jovian rings was inferred from observations of the planetary radiation belts by the Pioneer 11 spacecraft in 1975. In 1979 the Voyager 1 spacecraft obtained a single overexposed image of the ring system. More extensive imaging was conducted by Voyager 2 in the same year, which allowed rough determination of the ring's structure. The superior quality of the images obtained by the Galileo orbiter between 1995 and 2003 greatly extended the existing knowledge about the Jovian rings. Ground-based observation of the rings by the Keck telescope in 1997 and 2002 and the HST in 1999 revealed the rich structure visible in back-scattered light. Images transmitted by the New Horizons spacecraft in February–March 2007 allowed observation of the fine structure in the main ring for the first time. In 2000, the Cassini spacecraft en route to Saturn conducted extensive observations of the Jovian ring system. Future missions to the Jovian system will provide additional information about the rings.
| Physical sciences | Solar System | Astronomy |
979232 | https://en.wikipedia.org/wiki/Rings%20of%20Uranus | Rings of Uranus | The rings of Uranus consist of 13 planetary rings. They are intermediate in complexity between the more extensive set around Saturn and the simpler systems around Jupiter and Neptune. The rings of Uranus were discovered on March 10, 1977, by James L. Elliot, Edward W. Dunham, and Jessica Mink. William Herschel had also reported observing rings in 1789; modern astronomers are divided on whether he could have seen them, as they are very dark and faint.
By 1977, nine distinct rings were identified. Two additional rings were discovered in 1986 in images taken by the Voyager 2 spacecraft, and two outer rings were found in 2003–2005 in Hubble Space Telescope photos. In the order of increasing distance from the planet the 13 known rings are designated 1986U2R/ζ, 6, 5, 4, α, β, η, γ, δ, λ, ε, ν and μ. Their radii range from about 38,000 km for the 1986U2R/ζ ring to about 98,000 km for the μ ring. Additional faint dust bands and incomplete arcs may exist between the main rings. The rings are extremely dark—the Bond albedo of the rings' particles does not exceed 2%. They are probably composed of water ice with the addition of some dark radiation-processed organics.
The majority of Uranus' rings are opaque and only a few kilometres wide. The ring system contains little dust overall; it consists mostly of large bodies 20 cm to 20 m in diameter. Some rings are optically thin: the broad and faint 1986U2R/ζ, μ and ν rings are made of small dust particles, while the narrow and faint λ ring also contains larger bodies. The relative lack of dust in the ring system may be due to aerodynamic drag from the extended Uranian exosphere.
The rings of Uranus are thought to be relatively young, and not more than 600 million years old. The Uranian ring system probably originated from the collisional fragmentation of several moons that once existed around the planet. After colliding, the moons probably broke up into many particles, which survived as narrow and optically dense rings only in strictly confined zones of maximum stability.
The mechanism that confines the narrow rings is not well understood. Initially it was assumed that every narrow ring had a pair of nearby shepherd moons corralling it into shape. In 1986 Voyager 2 discovered only one such shepherd pair (Cordelia and Ophelia) around the brightest ring (ε), though the faint ν would later be discovered shepherded between Portia and Rosalind.
Discovery
The first mention of a Uranian ring system comes from William Herschel's notes detailing his observations of Uranus in the 18th century, which include the following passage: "February 22, 1789: A ring was suspected". Herschel drew a small diagram of the ring and noted that it was "a little inclined to the red". The Keck Telescope in Hawaii has since confirmed this to be the case, at least for the ν (nu) ring. Herschel's notes were published in a Royal Society journal in 1797. In the two centuries between 1797 and 1977 the rings were rarely mentioned, if at all. This casts serious doubt on whether Herschel could have seen anything of the sort while hundreds of other astronomers saw nothing. It has been claimed that Herschel gave accurate descriptions of the ε ring's size relative to Uranus, its changes as Uranus travelled around the Sun, and its color.
The definitive discovery of the Uranian rings was made by astronomers James L. Elliot, Edward W. Dunham, and Jessica Mink on March 10, 1977, using the Kuiper Airborne Observatory, and was serendipitous. They planned to use the occultation of the star SAO 158687 by Uranus to study the planet's atmosphere. When their observations were analysed, they found that the star disappeared briefly from view five times both before and after it was eclipsed by the planet. They deduced that a system of narrow rings was present. The five occultation events they observed were denoted by the Greek letters α, β, γ, δ and ε in their papers. These designations have been used as the rings' names since then. Later they found four additional rings: one between the β and γ rings and three inside the α ring. The former was named the η ring. The latter were dubbed rings 4, 5 and 6—according to the numbering of the occultation events in one paper. Uranus' ring system was the second to be discovered in the Solar System, after that of Saturn. In 1982, on the fifth anniversary of the rings' discovery, Uranus along with the eight other planets recognized at the time (i.e. including Pluto) aligned on the same side of the Sun.
The rings were directly imaged when the Voyager 2 spacecraft flew through the Uranian system in 1986. Two more faint rings were revealed, bringing the total to eleven. The Hubble Space Telescope detected an additional pair of previously unseen rings in 2003–2005, bringing the total number known to 13. The discovery of these outer rings doubled the known radius of the ring system. Hubble also imaged two small satellites for the first time, one of which, Mab, shares its orbit with the outermost newly discovered μ ring.
General properties
As currently understood, the ring system of Uranus comprises thirteen distinct rings. In order of increasing distance from the planet they are: 1986U2R/ζ, 6, 5, 4, α, β, η, γ, δ, λ, ε, ν, μ rings. They can be divided into three groups: nine narrow main rings (6, 5, 4, α, β, η, γ, δ, ε), two dusty rings (1986U2R/ζ, λ) and two outer rings (ν, μ). The rings of Uranus consist mainly of macroscopic particles and little dust, although dust is known to be present in 1986U2R/ζ, η, δ, λ, ν and μ rings. In addition to these well-known rings, there may be numerous optically thin dust bands and faint rings between them. These faint rings and dust bands may exist only temporarily or consist of a number of separate arcs, which are sometimes detected during occultations. Some of them became visible during a series of ring plane-crossing events in 2007. A number of dust bands between the rings were observed in forward-scattering geometry by Voyager 2. All rings of Uranus show azimuthal brightness variations.
The rings are made of an extremely dark material. The geometric albedo of the ring particles does not exceed 5–6%, while the Bond albedo is even lower—about 2%. The ring particles demonstrate a steep opposition surge—an increase of the albedo when the phase angle is close to zero. This means that their albedo is much lower when they are observed slightly off the opposition. The rings are slightly red in the ultraviolet and visible parts of the spectrum and grey in near-infrared. They exhibit no identifiable spectral features. The chemical composition of the ring particles is not known. They cannot be made of pure water ice like the rings of Saturn because they are too dark, darker than the inner moons of Uranus. This indicates that they are probably composed of a mixture of ice and a dark material. The nature of this material is not clear, but it may be organic compounds considerably darkened by charged-particle irradiation from the Uranian magnetosphere. The rings' particles may consist of heavily processed material which was initially similar to that of the inner moons.
As a whole, the ring system of Uranus is unlike either the faint dusty rings of Jupiter or the broad and complex rings of Saturn, some of which are composed of very bright material—water ice. There are similarities with some parts of the latter ring system; the Saturnian F ring and the Uranian ε ring are both narrow, relatively dark and are shepherded by a pair of moons. The newly discovered outer ν and μ rings of Uranus are similar to the outer G and E rings of Saturn. Narrow ringlets existing in the broad Saturnian rings also resemble the narrow rings of Uranus. In addition, dust bands observed between the main rings of Uranus may be similar to the rings of Jupiter. For its part, the Neptunian ring system is quite similar to that of Uranus, although it is less complex, darker and contains more dust; the Neptunian rings are also positioned further from the planet.
Narrow main rings
ε (epsilon) ring
The ε ring is the brightest and densest part of the Uranian ring system, and is responsible for about two-thirds of the light reflected by the rings. While it is the most eccentric of the Uranian rings, it has negligible orbital inclination. The ring's eccentricity causes its brightness to vary over the course of its orbit. The radially integrated brightness of the ε ring is highest near apoapsis and lowest near periapsis. The maximum/minimum brightness ratio is about 2.5–3.0. These variations are connected with the variations of the ring width, which is 19.7 km at the periapsis and 96.4 km at the apoapsis. As the ring becomes wider, the amount of shadowing between particles decreases and more of them come into view, leading to higher integrated brightness. The width variations were measured directly from Voyager 2 images, as the ε ring was one of only two rings resolved by Voyager's cameras. Such behavior indicates that the ring is not optically thin. Indeed, occultation observations conducted from the ground and the spacecraft showed that its normal optical depth varies between 0.5 and 2.5, being highest near the periapsis. The equivalent depth of the ε ring is around 47 km and is invariant around the orbit.
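These figures can be cross-checked: for a radially uniform ring, the equivalent depth is roughly the normal optical depth multiplied by the radial width, so the quoted widths and optical depths should reproduce the near-constant ~47 km. A quick check under that simplifying assumption:

```python
# Rough consistency check (assumes a radially uniform ring, so that
# equivalent depth ≈ normal optical depth × radial width).
cases = {
    "periapsis": {"width_km": 19.7, "tau": 2.5},   # narrowest, densest
    "apoapsis":  {"width_km": 96.4, "tau": 0.5},   # widest, most tenuous
}
for place, c in cases.items():
    print(place, "equivalent depth ≈", c["width_km"] * c["tau"], "km")
# Both come out near the quoted ~47 km, so the width and optical-depth
# variations trade off to keep the equivalent depth roughly constant.
```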
The geometric thickness of the ε ring is not precisely known, although the ring is certainly very thin—by some estimates as thin as 150 m. Despite such a small thickness, it consists of several layers of particles. The ε ring is a rather crowded place, with a filling factor near the apoapsis estimated by different sources at between 0.008 and 0.06. The mean size of the ring particles is 0.2–20.0 m, and the mean separation is around 4.5 times their radius. The ring is almost devoid of dust, possibly due to the aerodynamic drag from Uranus' extended atmospheric corona. Due to its razor-thin nature, the ε ring is invisible when viewed edge-on. This happened in 2007 when a ring plane-crossing was observed. The temperature of the ε ring was measured by ALMA to be .
The Voyager 2 spacecraft observed a strange signal from the ε ring during the radio occultation experiment. The signal looked like a strong enhancement of the forward-scattering at a wavelength of 3.6 cm near the ring's apoapsis. Such strong scattering requires the existence of a coherent structure. That the ε ring does have such a fine structure has been confirmed by many occultation observations. The ε ring seems to consist of a number of narrow and optically dense ringlets, some of which may have incomplete arcs.
The ε ring is known to have interior and exterior shepherd moons—Cordelia and Ophelia, respectively. The inner edge of the ring is in a 24:25 resonance with Cordelia, and the outer edge is in a 14:13 resonance with Ophelia. The masses of the moons need to be at least three times the mass of the ring to confine it effectively. The mass of the ε ring is estimated to be about 10¹⁶ kg.
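The resonance locations follow from Kepler's third law: a particle in a p:(p+1) mean-motion resonance with an interior moon has a period (p+1)/p times the moon's and therefore orbits at a radius larger by a factor of ((p+1)/p)^(2/3). A rough sketch for the 24:25 resonance with Cordelia, whose orbital radius (about 49,770 km) is an assumed value not given in the text above:

```python
# Sketch: locating a mean-motion resonance with Kepler's third law.
# Cordelia's orbital radius (~49,770 km) is an assumed value. Ring particles
# at the epsilon ring's inner edge complete 24 orbits while Cordelia
# completes 25, so their period is 25/24 that of Cordelia and, since
# a**3 is proportional to T**2, their radius is larger by (25/24)**(2/3).
a_cordelia_km = 49_770                 # assumed semi-major axis of Cordelia
a_resonance_km = a_cordelia_km * (25 / 24) ** (2 / 3)
print(f"24:25 resonance radius ≈ {a_resonance_km:,.0f} km")
# ≈ 51,100 km, close to the epsilon ring's actual location.
```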
δ (delta) ring
The δ ring is circular and slightly inclined. It shows significant unexplained azimuthal variations in normal optical depth and width. One possible explanation is that the ring has an azimuthal wave-like structure, excited by a small moonlet just inside it. The sharp outer edge of the δ ring is in 23:22 resonance with Cordelia. The δ ring consists of two components: a narrow optically dense component and a broad inward shoulder with low optical depth. The width of the narrow component is 4.1–6.1 km and the equivalent depth is about 2.2 km, which corresponds to a normal optical depth of about 0.3–0.6. The ring's broad component is about 10–12 km wide and its equivalent depth is close to 0.3 km, indicating a low normal optical depth of 3 × 10−2. This is known only from occultation data because Voyager 2's imaging experiment failed to resolve the δ ring. When observed in forward-scattering geometry by Voyager 2, the δ ring appeared relatively bright, which is compatible with the presence of dust in its broad component. The broad component is geometrically thicker than the narrow component. This is supported by the observations of a ring plane-crossing event in 2007, when the δ ring remained visible, which is consistent with the behavior of a simultaneously geometrically thick and optically thin ring.
γ (gamma) ring
The γ ring is narrow, optically dense and slightly eccentric. Its orbital inclination is almost zero. The width of the ring varies in the range 3.6–4.7 km, although equivalent optical depth is constant at 3.3 km. The normal optical depth of the γ ring is 0.7–0.9. During a ring plane-crossing event in 2007 the γ ring disappeared, which means it is geometrically thin like the ε ring and devoid of dust. The width and normal optical depth of the γ ring show significant azimuthal variations. The mechanism of confinement of such a narrow ring is not known, but it has been noticed that the sharp inner edge of the γ ring is in a 6:5 resonance with Ophelia.
η (eta) ring
The η ring has zero orbital eccentricity and inclination. Like the δ ring, it consists of two components: a narrow optically dense component and a broad outward shoulder with low optical depth. The width of the narrow component is 1.9–2.7 km and the equivalent depth is about 0.42 km, which corresponds to the normal optical depth of about 0.16–0.25. The broad component is about 40 km wide and its equivalent depth is close to 0.85 km, indicating a low normal optical depth of 2 × 10−2. It was resolved in Voyager 2 images. In forward-scattered light, the η ring looked bright, which indicated the presence of a considerable amount of dust in this ring, probably in the broad component. The broad component is much thicker (geometrically) than the narrow one. This conclusion is supported by the observations of a ring plane-crossing event in 2007, when the η ring demonstrated increased brightness, becoming the second brightest feature in the ring system. This is consistent with the behavior of a geometrically thick but simultaneously optically thin ring. Like the majority of other rings, the η ring shows significant azimuthal variations in the normal optical depth and width. The narrow component even vanishes in some places.
The η ring is located close to a 3:2 Lindblad resonance with the Uranian moon Cressida, which causes the ring to take on a shape with three radial maxima and three minima, rotating with a pattern speed equal to Cressida's orbital motion.
α (alpha) and β (beta) rings
After the ε ring, the α and β rings are the brightest of Uranus' rings. Like the ε ring, they exhibit regular variations in brightness and width. They are brightest and widest 30° from the apoapsis and dimmest and narrowest 30° from the periapsis. The α and β rings have sizable orbital eccentricity and non-negligible inclination. The widths of these rings are 4.8–10 km and 6.1–11.4 km, respectively. The equivalent optical depths are 3.29 km and 2.14 km, resulting in normal optical depths of 0.3–0.7 and 0.2–0.35, respectively. During a ring plane-crossing event in 2007 the rings disappeared, which means they are geometrically thin like the ε ring and devoid of dust. The same event revealed a thick and optically thin dust band just outside the β ring, which was also observed earlier by Voyager 2. The masses of the α and β rings are estimated to be about 5 × 10¹⁵ kg (each)—half the mass of the ε ring.
Rings 6, 5 and 4
Rings 6, 5 and 4 are the innermost and dimmest of Uranus' narrow rings. They are the most inclined rings, and their orbital eccentricities are the largest excluding the ε ring. In fact, their inclinations (0.06°, 0.05° and 0.03°) were large enough for Voyager 2 to observe their elevations above the Uranian equatorial plane, which were 24–46 km. Rings 6, 5 and 4 are also the narrowest rings of Uranus, measuring 1.6–2.2 km, 1.9–4.9 km and 2.4–4.4 km wide, respectively. Their equivalent depths are 0.41 km, 0.91 km and 0.71 km, resulting in normal optical depths of 0.18–0.25, 0.18–0.48 and 0.16–0.3, respectively. They were not visible during a ring plane-crossing event in 2007 due to their narrowness and lack of dust.
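The quoted elevations are the geometric consequence of those inclinations: the maximum height above the equatorial plane is roughly the orbital radius times the sine of the inclination. A quick check, assuming an orbital radius of about 42,000 km for these rings (a value not stated above):

```python
# Check: maximum elevation above the equatorial plane ≈ orbital radius × sin(i).
# The ~42,000 km orbital radius of rings 6, 5 and 4 is an assumed value,
# not stated in the text above.
import math

radius_km = 42_000
for name, incl_deg in (("ring 6", 0.06), ("ring 5", 0.05), ("ring 4", 0.03)):
    print(name, round(radius_km * math.sin(math.radians(incl_deg)), 1), "km")
# Gives roughly 22-44 km, consistent with the quoted 24-46 km range.
```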
Dusty rings
λ (lambda) ring
The λ ring was one of two rings discovered by Voyager 2 in 1986. It is a narrow, faint ring located just inside the ε ring, between it and the shepherd moon Cordelia. This moon clears a dark lane just inside the λ ring. When viewed in back-scattered light, the λ ring is extremely narrow—about 1–2 km—and has an equivalent optical depth of 0.1–0.2 km at a wavelength of 2.2 μm. The normal optical depth is 0.1–0.2. The optical depth of the λ ring shows a strong wavelength dependence, which is atypical for the Uranian ring system. The equivalent depth is as high as 0.36 km in the ultraviolet part of the spectrum, which explains why the λ ring was initially detected only in UV stellar occultations by Voyager 2. The detection during a stellar occultation at a wavelength of 2.2 μm was only announced in 1996.
The appearance of the λ ring changed dramatically when it was observed in forward-scattered light in 1986. In this geometry the ring became the brightest feature of the Uranian ring system, outshining the ε ring. This observation, together with the wavelength dependence of the optical depth, indicates that the λ ring contains a significant amount of micrometre-sized dust. The normal optical depth of this dust is 10−4–10−3. Observations in 2007 by the Keck telescope during the ring plane-crossing event confirmed this conclusion, because the λ ring became one of the brightest features in the Uranian ring system.
Detailed analysis of the Voyager 2 images revealed azimuthal variations in the brightness of the λ ring. The variations appear to be periodic, resembling a standing wave. The origin of this fine structure in the λ ring remains a mystery.
1986U2R/ζ (zeta) ring
In 1986 Voyager 2 detected a broad and faint sheet of material inward of ring 6. This ring was given the temporary designation 1986U2R. It had a normal optical depth of 10−3 or less and was extremely faint. It was thought to be visible only in a single Voyager 2 image, until reanalysis of Voyager data in 2022 revealed the ring in post-encounter images. The ring was located between 37,000 and 39,500 km from the centre of Uranus, or only about 12,000 km above the clouds. It was not observed again until 2003–2004, when the Keck telescope found a broad and faint sheet of material just inside ring 6. This ring was dubbed the ζ ring. The position of the recovered ζ ring differs significantly from that observed in 1986. Now it is situated between 37,850 and 41,350 km from the centre of the planet. There is an inward gradually fading extension reaching to at least 32,600 km, or possibly even to 27,000 km—to the atmosphere of Uranus. These extensions are labelled as the ζc and ζcc rings respectively.
The ζ ring was observed again during the ring plane-crossing event in 2007 when it became the brightest feature of the ring system, outshining all other rings combined. The equivalent optical depth of this ring is near 1 km (0.6 km for the inward extension), while the normal optical depth is again less than 10−3. Rather different appearances of the 1986U2R and ζ rings may be caused by different viewing geometries: back-scattering geometry in 2003–2007 and side-scattering geometry in 1986. Changes during the past 20 years in the distribution of dust, which is thought to predominate in the ring, cannot be ruled out.
Other dust bands
In addition to the 1986U2R/ζ and λ rings, there are other extremely faint dust bands in the Uranian ring system. They are invisible during occultations because they have negligible optical depth, though they are bright in forward-scattered light. Voyager 2's images of forward-scattered light revealed the existence of bright dust bands between the λ and δ rings, between the η and β rings, and between the α ring and ring 4. Many of these bands were detected again in 2003–2004 by the Keck Telescope and during the 2007 ring-plane crossing event in backscattered light, but their precise locations and relative brightnesses differed from those found during the Voyager observations. The normal optical depth of the dust bands is about 10−5 or less. The dust particle size distribution is thought to obey a power law with the index p = 2.5 ± 0.5.
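A power law with index p = 2.5 means small grains heavily outnumber large ones. The sketch below samples such a distribution by inverting its cumulative distribution function; the size limits are arbitrary placeholders, not measured values.

```python
# Illustration of a power-law particle size distribution n(r) ∝ r**(-p)
# with p = 2.5, sampled by inverting the cumulative distribution.
# The size limits r_min and r_max are arbitrary placeholders.
import random

def sample_power_law(r_min, r_max, p, n):
    """Draw n radii from n(r) ∝ r**(-p) on [r_min, r_max] (requires p != 1)."""
    a = r_min ** (1.0 - p)
    b = r_max ** (1.0 - p)
    samples = []
    for _ in range(n):
        u = random.random()
        samples.append((a + u * (b - a)) ** (1.0 / (1.0 - p)))
    return samples

dust = sample_power_law(r_min=1e-6, r_max=1e-4, p=2.5, n=10_000)
print("median grain radius:", sorted(dust)[len(dust) // 2], "m")
```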
In addition to these separate dust bands, the Uranian ring system appears to be immersed in a wide and faint sheet of dust with a normal optical depth not exceeding 10−3.
μ (mu) and ν (nu) rings
In 2003–2005, the Hubble Space Telescope detected a pair of previously unknown rings, now called the outer ring system, which brought the number of known Uranian rings to 13. These rings were subsequently named the μ (mu) and ν (nu) rings. The μ ring is the outermost of the pair, and is twice the distance from the planet as the bright η ring. The outer rings differ from the inner narrow rings in a number of respects. They are broad, 17,000 and 3,800 km wide, respectively, and very faint. Their peak normal optical depths are 8.5 × 10−6 and 5.4 × 10−6, respectively. The resulting equivalent optical depths are 0.14 km and 0.012 km. The rings have triangular radial brightness profiles.
The peak brightness of the μ (mu) ring lies almost exactly on the orbit of the small Uranian moon Mab, which is probably the source of the ring’s particles. The ν (nu) ring is positioned between Portia and Rosalind and does not contain any moons inside it. A reanalysis of the Voyager 2 images of forward-scattered light clearly reveals the μ and ν rings. In this geometry the rings are much brighter, which indicates that they contain a large amount of micrometer-sized dust. The outer rings of Uranus may be similar to the G and E rings of Saturn, as the E ring is extremely broad and receives dust from Enceladus.
The μ ring may consist entirely of dust, without any large particles at all. This hypothesis is supported by observations performed by the Keck telescope, which failed to detect the μ ring in the near infrared at 2.2 μm, but detected the ν ring. This failure means that the μ ring is blue in color, which in turn indicates that very small (submicrometer) dust predominates within it. The dust may be made of water ice. In contrast, the ν ring is slightly red in color.
Dynamics and origin
An outstanding problem concerning the physics governing the narrow Uranian rings is their confinement. Without some mechanism to hold their particles together, the rings would quickly spread out radially. The lifetime of the Uranian rings without such a mechanism cannot be more than 1 million years. The most widely cited model for such confinement, proposed initially by Goldreich and Tremaine, is that a pair of nearby moons, outer and inner shepherds, interact gravitationally with a ring and act like sinks and donors, respectively, for excessive and insufficient angular momentum (or equivalently, energy). The shepherds thus keep ring particles in place, but gradually move away from the ring themselves. To be effective, the masses of the shepherds should exceed the mass of the ring by at least a factor of two to three. This mechanism is known to be at work in the case of the ε ring, where Cordelia and Ophelia serve as shepherds. Cordelia is also the outer shepherd of the δ ring, and Ophelia is the outer shepherd of the γ ring. No moon larger than 10 km is known in the vicinity of other rings. The current distance of Cordelia and Ophelia from the ε ring can be used to estimate the ring’s age. The calculations show that the ε ring cannot be older than 600 million years.
Since the rings of Uranus appear to be young, they must be continuously renewed by the collisional fragmentation of larger bodies. Estimates show that the lifetime against collisional disruption of a moon the size of Puck is a few billion years. The lifetime of a smaller satellite is much shorter. Therefore, all current inner moons and rings can be products of the disruption of several Puck-sized satellites during the last four and a half billion years. Every such disruption would have started a collisional cascade that quickly ground almost all large bodies into much smaller particles, including dust. Eventually the majority of the mass was lost, and particles survived only in positions that were stabilized by mutual resonances and shepherding. The end product of such a disruptive evolution would be a system of narrow rings. A few moonlets must still be embedded within the rings at present. The maximum size of such moonlets is probably around 10 km.
The origin of the dust bands is less problematic. The dust has a very short lifetime, 100–1000 years, and should be continuously replenished by collisions between larger ring particles, moonlets and meteoroids from outside the Uranian system. The belts of the parent moonlets and particles are themselves invisible due to their low optical depth, while the dust reveals itself in forward-scattered light. The narrow main rings and the moonlet belts that create dust bands are expected to differ in particle size distribution. The main rings have more centimeter to meter-sized bodies. Such a distribution increases the surface area of the material in the rings, leading to high optical density in back-scattered light. In contrast, the dust bands have relatively few large particles, which results in low optical depth.
Exploration
The rings were thoroughly investigated by the Voyager 2 spacecraft in January 1986. Two new faint rings—λ and 1986U2R—were discovered, bringing the total number then known to eleven. The rings were studied by analyzing the results of radio, ultraviolet and optical occultations. Voyager 2 observed the rings in different geometries relative to the Sun, producing images with back-scattered, forward-scattered and side-scattered light. Analysis of these images allowed derivation of the complete phase function and the geometric and Bond albedos of the ring particles. Two rings—ε and η—were resolved in the images, revealing a complicated fine structure. Analysis of Voyager's images also led to the discovery of eleven inner moons of Uranus, including the two shepherd moons of the ε ring—Cordelia and Ophelia.
List of properties
This table summarizes the properties of the planetary ring system of Uranus.
| Physical sciences | Solar System | Astronomy |
979237 | https://en.wikipedia.org/wiki/Rings%20of%20Neptune | Rings of Neptune | The rings of Neptune consist primarily of five principal rings. They were first discovered (as "arcs") by simultaneous observations of a stellar occultation on 22 July 1984 by André Brahic's and William B. Hubbard's teams at La Silla Observatory (ESO) and at Cerro Tololo Interamerican Observatory in Chile. They were eventually imaged in 1989 by the Voyager 2 spacecraft. At their densest, they are comparable to the less dense portions of Saturn's main rings such as the C ring and the Cassini Division, but much of Neptune's ring system is quite faint and dusty, in some aspects more closely resembling the rings of Jupiter. Neptune's rings are named after astronomers who contributed important work on the planet: Galle, Le Verrier, Lassell, Arago, and Adams. Neptune also has a faint unnamed ring coincident with the orbit of the moon Galatea. Three other moons orbit between the rings: Naiad, Thalassa and Despina.
The rings of Neptune are made of extremely dark material, likely organic compounds processed by radiation, similar to those found in the rings of Uranus. The proportion of dust in the rings (between 20% and 70%) is high, while their optical depth is low to moderate, at less than 0.1. Uniquely, the Adams ring includes five distinct arcs, named Fraternité, Égalité 1 and 2, Liberté, and Courage. The arcs occupy a narrow range of orbital longitudes and are remarkably stable, having changed only slightly since their initial detection in 1980. How the arcs are stabilized is still under debate. However, their stability is probably related to the resonant interaction between the Adams ring and its inner shepherd moon, Galatea.
Discovery and observations
The first mention of rings around Neptune dates back to 1846 when William Lassell, the discoverer of Neptune's largest moon, Triton, thought he had seen a ring around the planet. However, his claim was never confirmed and it is likely that it was an observational artifact. The first reliable detection of a ring was made in 1968 by stellar occultation, although that result would go unnoticed until 1977 when the rings of Uranus were discovered. Soon after the Uranus discovery, a team from Villanova University led by Harold J. Reitsema began searching for rings around Neptune. On 24 May 1981, they detected a dip in a star's brightness during one occultation; however, the manner in which the star dimmed did not suggest a ring. Later, after the Voyager fly-by, it was found that the occultation was due to the small Neptunian moon Larissa, a highly unusual event.
In the 1980s, significant occultations were much rarer for Neptune than for Uranus, which lay near the Milky Way at the time and was thus moving against a denser field of stars. Neptune's next occultation, on 12 September 1983, resulted in a possible detection of a ring. However, ground-based results were inconclusive. Over the next six years, approximately 50 other occultations were observed, with only about one-third of them yielding positive results. Something (probably incomplete arcs) definitely existed around Neptune, but the features of the ring system remained a mystery. The Voyager 2 spacecraft made the definitive discovery of the Neptunian rings during its fly-by of Neptune in 1989, passing by as close as above the planet's atmosphere on 25 August. It confirmed that the occasional occultation events observed before were indeed caused by the arcs within the Adams ring (see below). After the Voyager fly-by, the previous terrestrial occultation observations were reanalyzed, yielding features of the ring's arcs as they were in the 1980s, which matched those found by Voyager 2 almost perfectly.
Since Voyager 2's fly-by, the brightest rings (Adams and Le Verrier) have been imaged with the Hubble Space Telescope and Earth-based telescopes, owing to advances in resolution and light-gathering power. They are visible, slightly above background noise levels, at methane-absorbed wavelengths in which the glare from Neptune is significantly reduced. The fainter rings are still far below the visibility threshold for these instruments. In 2022 the rings were imaged by the James Webb Space Telescope, which made the first observation of the fainter rings since the Voyager 2 fly-by.
General properties
Neptune possesses five distinct rings named, in order of increasing distance from the planet, Galle, Le Verrier, Lassell, Arago and Adams. In addition to these well-defined rings, Neptune may also possess an extremely faint sheet of material stretching inward from the Le Verrier to the Galle ring, and possibly farther in toward the planet. Three of the Neptunian rings are narrow, with widths of about 100 km or less; in contrast, the Galle and Lassell rings are broad—their widths are between 2,000 and 5,000 km. The Adams ring consists of five bright arcs embedded in a fainter continuous ring. Proceeding counterclockwise, the arcs are: Fraternité, Égalité 1 and 2, Liberté, and Courage. The first four names come from "liberty, equality, fraternity", the motto of the French Revolution and Republic. The terminology was suggested by their original discoverers, who had found them during stellar occultations in 1984 and 1985. Four small Neptunian moons have orbits inside the ring system: Naiad and Thalassa orbit in the gap between the Galle and Le Verrier rings; Despina is just inward of the Le Verrier ring; and Galatea lies slightly inward of the Adams ring, embedded in an unnamed faint, narrow ringlet.
The Neptunian rings contain a large quantity of micrometer-sized dust: the dust fraction by cross-section area is between 20% and 70%. In this respect they are similar to the rings of Jupiter, in which the dust fraction is 50%–100%, and are very different from the rings of Saturn and Uranus, which contain little dust (less than 0.1%). The particles in Neptune's rings are made from a dark material; probably a mixture of ice with radiation-processed organics. The rings are reddish in color, and their geometrical (0.05) and Bond (0.01–0.02) albedos are similar to those of the Uranian rings' particles and the inner Neptunian moons. The rings are generally optically thin (transparent); their normal optical depths do not exceed 0.1. As a whole, the Neptunian rings resemble those of Jupiter; both systems consist of faint, narrow, dusty ringlets and even fainter broad dusty rings.
The rings of Neptune, like those of Uranus, are thought to be relatively young; their age is probably significantly less than that of the Solar System. Also, like those of Uranus, Neptune's rings probably resulted from the collisional fragmentation of onetime inner moons. Such events create moonlet belts, which act as the sources of dust for the rings. In this respect the rings of Neptune are similar to faint dusty bands observed by Voyager 2 between the main rings of Uranus.
Inner rings
Galle ring
The innermost ring of Neptune is called the Galle ring after Johann Gottfried Galle, the first person to see Neptune through a telescope (1846). It is about 2,000 km wide and orbits 41,000–43,000 km from the planet. It is a faint ring with an average normal optical depth of around 10−4, and with an equivalent depth of 0.15 km. The fraction of dust in this ring is estimated from 40% to 70%.
Le Verrier ring
The next ring is named the Le Verrier ring after Urbain Le Verrier, who predicted Neptune's position in 1846. With an orbital radius of about 53,200 km, it is narrow, with a width of about 113 km. Its normal optical depth is 0.0062 ± 0.0015, which corresponds to an equivalent depth of 0.7 ± 0.2 km. The dust fraction in the Le Verrier ring ranges from 40% to 70%. The small moon Despina, which orbits just inside of it at 52,526 km, may play a role in the ring's confinement by acting as a shepherd.
Lassell ring
The Lassell ring, also known as the plateau, is the broadest ring in the Neptunian system. Its namesake is William Lassell, the English astronomer who discovered Neptune's largest moon, Triton. This ring is a faint sheet of material occupying the space between the Le Verrier ring at about 53,200 km and the Arago ring at 57,200 km. Its average normal optical depth is around 10−4, which corresponds to an equivalent depth of 0.4 km. The ring's dust fraction is in the range from 20% to 40%.
Potential ring
There is a small peak of brightness near the outer edge of the Lassell ring, located at 57,200 km from Neptune and less than 100 km wide, which some planetary scientists call the Arago ring after François Arago, a French mathematician, physicist, astronomer and politician. However, many publications do not mention the Arago ring at all.
Adams ring
The outer Adams ring, with an orbital radius of about 63,930 km, is the best studied of Neptune's rings. It is named after John Couch Adams, who predicted the position of Neptune independently of Le Verrier. This ring is narrow, slightly eccentric and inclined, with a total width of about 35 km (15–50 km), and its normal optical depth is around 0.011 ± 0.003 outside the arcs, which corresponds to an equivalent depth of about 0.4 km. The fraction of dust in this ring is from 20% to 40%—lower than in other narrow rings. Neptune's small moon Galatea, which orbits just inside of the Adams ring at 61,953 km, acts like a shepherd, keeping ring particles inside a narrow range of orbital radii through a 42:43 outer Lindblad resonance. Galatea's gravitational influence creates 42 radial wiggles in the Adams ring with an amplitude of about 30 km, which have been used to infer Galatea's mass.
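The 42 radial wiggles attributed to Galatea can be pictured as a sinusoidal distortion of the ring radius with azimuthal wavenumber 42 and the quoted ~30 km amplitude. A minimal sketch (the mean radius and amplitude are taken from the text; the phase is arbitrary):

```python
# Minimal sketch of the 42-lobed radial distortion attributed to Galatea:
# r(theta) = a + A*cos(42*(theta - theta0)). The amplitude (30 km) and the
# mean radius come from the text above; the phase theta0 is arbitrary.
import math

A_KM = 30.0                 # wiggle amplitude from the text
MEAN_RADIUS_KM = 63_930.0   # Adams ring orbital radius from the text
M = 42                      # azimuthal wavenumber (42 radial wiggles)

def ring_radius(theta_rad, theta0_rad=0.0):
    return MEAN_RADIUS_KM + A_KM * math.cos(M * (theta_rad - theta0_rad))

# Radial excursion sampled every 30 degrees of longitude:
for deg in range(0, 360, 30):
    print(deg, round(ring_radius(math.radians(deg)) - MEAN_RADIUS_KM, 1), "km")
```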
Arcs
The brightest parts of the Adams ring, the ring arcs, were the first elements of Neptune's ring system to be discovered. The arcs are discrete regions within the ring in which its particles are mysteriously clustered together. The Adams ring is known to comprise five short arcs, which occupy a relatively narrow range of longitudes from 247° to 294°. In 1986 they were located between longitudes of:
247–257° (Fraternité),
261–264° (Égalité 1),
265–266° (Égalité 2),
276–280° (Liberté),
284.5–285.5° (Courage).
The brightest and longest arc was Fraternité; the faintest was Courage. The normal optical depths of the arcs are estimated to lie in the range 0.03–0.09 (0.034 ± 0.005 for the leading edge of Liberté arc as measured by stellar occultation); the radial widths are approximately the same as those of the continuous ring—about 30 km. The equivalent depths of arcs vary in the range 1.25–2.15 km (0.77 ± 0.13 km for the leading edge of Liberté arc). The fraction of dust in the arcs is from 40% to 70%. The arcs in the Adams ring are somewhat similar to the arc in Saturn's G ring.
The highest resolution Voyager 2 images revealed a pronounced clumpiness in the arcs, with a typical separation between visible clumps of 0.1° to 0.2°, which corresponds to 100–200 km along the ring. Because the clumps were not resolved, they may or may not include larger bodies, but are certainly associated with concentrations of microscopic dust as evidenced by their enhanced brightness when backlit by the Sun.
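The conversion from angular separation to distance along the ring is just arc length: the ring's radius times the angle in radians. A quick check using the Adams ring radius given above:

```python
# Quick check: angular clump separations converted to distance along the
# Adams ring (arc length = radius × angle in radians).
import math

radius_km = 63_930  # Adams ring orbital radius from the text
for sep_deg in (0.1, 0.2):
    arc_km = radius_km * math.radians(sep_deg)
    print(f"{sep_deg} deg ≈ {arc_km:.0f} km along the ring")
# Gives roughly 110-220 km, matching the quoted 100-200 km.
```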
The arcs are quite stable structures. They were detected by ground-based stellar occultations in the 1980s, by Voyager 2 in 1989 and by the Hubble Space Telescope and ground-based telescopes in 1997–2005, and they remained at approximately the same orbital longitudes. However, some changes have been noticed. The overall brightness of the arcs has decreased since 1986. The Courage arc jumped forward by 8° to 294° (it probably jumped over to the next stable co-rotation resonance position) while the Liberté arc had almost disappeared by 2003. The Fraternité and Égalité (1 and 2) arcs have demonstrated irregular variations in their relative brightness. Their observed dynamics are probably related to the exchange of dust between them. Courage, a very faint arc found during the Voyager flyby, was seen to flare in brightness in 1998; it was back to its usual dimness by June 2005. Visible-light observations show that the total amount of material in the arcs has remained approximately constant, but they are dimmer at the infrared wavelengths where previous observations were taken.
Confinement
The arcs in the Adams ring remain unexplained. Their existence is a puzzle because basic orbital dynamics imply that they should spread out into a uniform ring over a matter of years. Several hypotheses about the arcs' confinement have been suggested, the most widely publicized of which holds that Galatea confines the arcs via its 42:43 co-rotational inclination resonance (CIR). The resonance creates 84 stable sites along the ring's orbit, each 4° long, with arcs residing in the adjacent sites. However measurements of the rings' mean motion with Hubble and Keck telescopes in 1998 led to the conclusion that the rings are not in CIR with Galatea.
A later model suggested that confinement resulted from a co-rotational eccentricity resonance (CER). The model takes into account the finite mass of the Adams ring, which is necessary to move the resonance closer to the ring. A byproduct of this hypothesis is a mass estimate for the Adams ring—about 0.002 of the mass of Galatea. A third hypothesis, proposed in 1986, requires an additional moon orbiting inside the ring; the arcs in this case are trapped in its stable Lagrangian points. However, Voyager 2's observations placed strict constraints on the size and mass of any undiscovered moons, making such a hypothesis unlikely. Some other more complicated hypotheses hold that a number of moonlets are trapped in co-rotational resonances with Galatea, providing confinement of the arcs and simultaneously serving as sources of the dust.
Exploration
The rings were investigated in detail during the Voyager 2 spacecraft's flyby of Neptune in August 1989. They were studied with optical imaging, and through observations of occultations in ultraviolet and visible light. The spaceprobe observed the rings in different geometries relative to the Sun, producing images of back-scattered, forward-scattered and side-scattered light. Analysis of these images allowed derivation of the phase function (the dependence of the ring's reflectivity on the angle between the observer and the Sun), and of the geometric and Bond albedos of the ring particles. Analysis of Voyager's images also led to the discovery of six inner moons of Neptune, including the Adams ring shepherd Galatea.
Properties (a question mark means that the parameter is not known).
| Physical sciences | Solar System | Astronomy |
979306 | https://en.wikipedia.org/wiki/Orbital%20station-keeping | Orbital station-keeping | In astrodynamics, orbital station-keeping is keeping a spacecraft at a fixed distance from another spacecraft or celestial body. It requires a series of orbital maneuvers made with thruster burns to keep the active craft in the same orbit as its target. For many low Earth orbit satellites, the effects of non-Keplerian forces, i.e. the deviations of the gravitational force of the Earth from that of a homogeneous sphere, gravitational forces from Sun/Moon, solar radiation pressure and air drag, must be counteracted.
For spacecraft in a halo orbit around a Lagrange point, station-keeping is even more fundamental, as such an orbit is unstable; without an active control with thruster burns, the smallest deviation in position or velocity would result in the spacecraft leaving orbit completely.
Perturbations
The deviation of Earth's gravity field from that of a homogeneous sphere and gravitational forces from the Sun and Moon will in general perturb the orbital plane. For a Sun-synchronous orbit, the precession of the orbital plane caused by the oblateness of the Earth is a desirable feature that is part of mission design but the inclination change caused by the gravitational forces of the Sun and Moon is undesirable. For geostationary spacecraft, the inclination change caused by the gravitational forces of the Sun and Moon must be counteracted by a rather large expense of fuel, as the inclination should be kept sufficiently small for the spacecraft to be tracked by non-steerable antennae.
For spacecraft in a low orbit, the effects of atmospheric drag must often be compensated for, frequently to avoid re-entry; for missions requiring the orbit to be accurately synchronized with the Earth's rotation, drag compensation is also necessary to prevent a shortening of the orbital period.
Solar radiation pressure will in general perturb the eccentricity (i.e. the eccentricity vector); see Orbital perturbation analysis (spacecraft). For some missions, this must be actively counter-acted with maneuvers. For geostationary spacecraft, the eccentricity must be kept sufficiently small for a spacecraft to be tracked with a non-steerable antenna. Also for Earth observation spacecraft for which a very repetitive orbit with a fixed ground track is desirable, the eccentricity vector should be kept as fixed as possible. A large part of this compensation can be done by using a frozen orbit design, but often thrusters are needed for fine control maneuvers.
Low Earth orbit
For spacecraft in a very low orbit, the atmospheric drag is sufficiently strong to cause a re-entry before the intended end of mission if orbit raising maneuvers are not executed from time to time.
An example of this is the International Space Station (ISS), which has an operational altitude above Earth's surface of between 400 and 430 km (250-270 mi). Due to atmospheric drag the space station is constantly losing orbital energy. In order to compensate for this loss, which would eventually lead to a re-entry of the station, it has to be reboosted to a higher orbit from time to time. The chosen orbital altitude is a trade-off between the average thrust needed to counter-act the air drag and the impulse needed to send payloads and people to the station.
GOCE, which orbited at 255 km (later reduced to 235 km), used ion thrusters to provide up to 20 mN of thrust to compensate for the drag on its frontal area of about 1 m².
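The thrust such a drag-compensation system must deliver can be estimated with the standard drag equation. The sketch below is only indicative: the atmospheric density and drag coefficient are assumed values (density at ~255 km varies strongly with solar activity), while the 1 m² frontal area comes from the text.

```python
# Rough estimate of the drag force GOCE's ion thruster had to cancel, using
# the standard drag equation F = 0.5 * rho * Cd * A * v**2. The density and
# drag coefficient are assumed, representative values only.
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3           # mean Earth radius, m

altitude = 255e3            # m, GOCE's initial operational altitude
rho = 2e-11                 # kg/m^3, assumed mean density at this altitude
cd = 2.2                    # assumed drag coefficient
area = 1.0                  # m^2, frontal area from the text

v = math.sqrt(MU_EARTH / (R_EARTH + altitude))   # circular orbital speed
drag = 0.5 * rho * cd * area * v * v             # drag force, N
print(f"orbital speed ≈ {v:.0f} m/s, drag ≈ {drag * 1000:.1f} mN")
# Of the order of 1 mN for these assumed conditions; higher densities during
# solar maximum push the requirement toward the thruster's 20 mN ceiling.
```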
Earth observation spacecraft
For Earth observation spacecraft, typically operated at an altitude of about 700–800 km above the Earth's surface, the air drag is very faint and re-entry due to air drag is not a concern. But if the orbital period is to remain synchronous with the Earth's rotation to maintain a fixed ground track, the faint air drag at this high altitude must also be counteracted by orbit-raising maneuvers in the form of thruster burns tangential to the orbit. These maneuvers will be very small, typically of the order of a few mm/s of delta-v. If a frozen orbit design is used, these very small orbit-raising maneuvers are sufficient to also control the eccentricity vector.
To maintain a fixed ground track it is also necessary to make out-of-plane maneuvers to compensate for the inclination change caused by Sun/Moon gravitation. These are executed as thruster burns orthogonal to the orbital plane. For Sun-synchronous spacecraft having a constant geometry relative to the Sun, the inclination change due to the solar gravitation is particularly large; a delta-v in the order of 1–2 m/s per year can be needed to keep the inclination constant.
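The quoted 1–2 m/s per year can be related to the annual inclination drift it corrects through the standard plane-change relation Δv = 2·v·sin(Δi/2). A sketch, assuming a circular orbit at about 750 km altitude:

```python
# Sketch: inverting the plane-change relation dv = 2*v*sin(di/2) to see how
# much yearly inclination drift a 1-2 m/s annual budget corresponds to,
# for a circular orbit at an assumed 750 km altitude.
import math

MU_EARTH = 3.986004418e14   # m^3/s^2
R_EARTH = 6_371e3           # m

v = math.sqrt(MU_EARTH / (R_EARTH + 750e3))    # orbital speed, ~7.5 km/s
for dv in (1.0, 2.0):                          # m/s per year, from the text
    di = 2 * math.asin(dv / (2 * v))           # radians per year
    print(f"{dv} m/s/yr ≈ {math.degrees(di):.3f} deg/yr of inclination drift")
# Roughly 0.008-0.015 degrees of inclination change per year.
```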
Geostationary orbit
For geostationary spacecraft, thruster burns orthogonal to the orbital plane must be executed to compensate for the effect of the lunar/solar gravitation, which perturbs the orbit pole by typically 0.85 degrees per year. The delta-v needed to counteract this perturbation and keep the inclination to the equatorial plane small amounts to about 45 m/s per year. This part of GEO station-keeping is called North-South control.
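The ~45 m/s per year figure follows from the same plane-change relation sketched above, evaluated at geostationary orbital speed. A quick check:

```python
# Quick check of the North-South budget: removing a yearly inclination
# perturbation of 0.85 degrees at geostationary orbital speed, using the
# plane-change relation dv = 2*v*sin(di/2).
import math

MU_EARTH = 3.986004418e14     # m^3/s^2
R_GEO = 42_164e3              # m, geostationary orbital radius

v_geo = math.sqrt(MU_EARTH / R_GEO)     # ~3.07 km/s
di = math.radians(0.85)                 # yearly perturbation from the text
dv = 2 * v_geo * math.sin(di / 2)
print(f"GEO speed ≈ {v_geo:.0f} m/s, N-S delta-v ≈ {dv:.1f} m/s per year")
# About 45-46 m/s per year, consistent with the figure quoted above.
```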
The East-West control is the control of the orbital period and the eccentricity vector, performed by making thruster burns tangential to the orbit. These burns are designed to keep the orbital period perfectly synchronous with the Earth's rotation and to keep the eccentricity sufficiently small. The perturbation of the orbital period results from the imperfect rotational symmetry of the Earth relative to its North/South axis, sometimes called the ellipticity of the Earth's equator. The eccentricity (i.e. the eccentricity vector) is perturbed by the solar radiation pressure. The fuel needed for this East-West control is much less than that needed for the North-South control.
To extend the lifetime of geostationary spacecraft with little fuel left, the North-South control is sometimes discontinued and only the East-West control continued. As seen by an observer on the rotating Earth, the spacecraft will then move North-South with a period of 24 hours. When this North-South movement becomes too large, a steerable antenna is needed to track the spacecraft. An example of this is Artemis.
To save weight, it is crucial for GEO satellites to have the most fuel-efficient propulsion system. Almost all modern satellites therefore employ a high-specific-impulse system such as plasma or ion thrusters.
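The propellant saving from a high specific impulse follows from the Tsiolkovsky rocket equation. The sketch below compares a chemical and an electric thruster for the same yearly delta-v budget; the satellite mass, the 50 m/s budget and both Isp values are illustrative assumptions, not figures from the text.

```python
# Illustration of why high specific impulse saves station-keeping propellant,
# via the rocket equation: m_prop = m0 * (1 - exp(-dv / (Isp * g0))).
# The 2,000 kg satellite mass, the 50 m/s yearly budget and the Isp values
# are illustrative assumptions.
import math

G0 = 9.80665          # standard gravity, m/s^2
m0 = 2_000.0          # kg, assumed satellite mass
dv_per_year = 50.0    # m/s, assumed total yearly station-keeping budget

for label, isp in (("chemical (bipropellant)", 300.0), ("ion/plasma", 1_500.0)):
    m_prop = m0 * (1.0 - math.exp(-dv_per_year / (isp * G0)))
    print(f"{label}: ≈ {m_prop:.1f} kg of propellant per year")
# The electric thruster needs roughly five times less propellant per year,
# which over a 15-year GEO mission adds up to a large launch-mass saving.
```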
Lagrange points
Orbits of spacecraft are also possible around Lagrange points—also referred to as libration points—five equilibrium points that exist in relation to two larger solar system bodies. For example, there are five of these points in the Sun-Earth system, five in the Earth-Moon system, and so on. Spacecraft may orbit around these points with a minimum of propellant required for station-keeping purposes. Two orbits that have been used for such purposes include halo and Lissajous orbits.
One important Lagrange point is Earth-Sun L1, and three heliophysics missions have been orbiting L1 since approximately 2000. Station-keeping propellant use can be quite low, facilitating missions that can potentially last decades should other spacecraft systems remain operational. The three spacecraft—Advanced Composition Explorer (ACE), Solar Heliospheric Observatory (SOHO), and the Global Geoscience WIND satellite—each have annual station-keeping propellant requirements of approximately 1 m/s or less.
Earth-Sun L2—approximately 1.5 million kilometers from Earth in the anti-sun direction—is another important Lagrange point, and the ESA Herschel space observatory operated there in a Lissajous orbit during 2009–2013, at which time it ran out of coolant for the space telescope. Small station-keeping orbital maneuvers were executed approximately monthly to maintain the spacecraft in the station-keeping orbit.
The James Webb Space Telescope will use propellant to maintain its halo orbit around the Earth-Sun L2, which provides an upper limit to its designed lifetime: it is being designed to carry enough for ten years. However, the precision of trajectory following launch by an Ariane 5 is credited with potentially doubling the lifetime of the telescope by leaving more hydrazine propellant on-board than expected.
The CAPSTONE orbiter is stationed along a 9:2 synodically resonant Near Rectilinear Halo Orbit (NRHO) around the Earth-Moon L2 Lagrange point, the same orbit planned for the Lunar Gateway.
| Physical sciences | Orbital mechanics | Astronomy |
979452 | https://en.wikipedia.org/wiki/Sculptor%20Galaxy | Sculptor Galaxy | The Sculptor Galaxy (also known as the Silver Coin Galaxy, Silver Dollar Galaxy, NGC 253, or Caldwell 65) is an intermediate spiral galaxy in the constellation Sculptor. The Sculptor Galaxy is a starburst galaxy, which means that it is currently undergoing a period of intense star formation.
Observation
Observational history
The galaxy was discovered by Caroline Herschel in 1783 during one of her systematic comet searches. About half a century later, John Herschel observed it using his 18-inch metallic mirror reflector at the Cape of Good Hope. He wrote: "very bright and large (24′ in length); a superb object.... Its light is somewhat streaky, but I see no stars in it except 4 large and one very small one, and these seem not to belong to it, there being many near..."
In 1961, Allan Sandage wrote in the Hubble Atlas of Galaxies that the Sculptor Galaxy is "the prototype example of a special subgroup of Sc systems....photographic images of galaxies of the group are dominated by the dust pattern. Dust lanes and patches of great complexity are scattered throughout the surface. Spiral arms are often difficult to trace.... The arms are defined as much by the dust as by the spiral pattern." Bernard Y. Mills, working out of Sydney, discovered that the Sculptor Galaxy is also a fairly strong radio source.
In 1998, the Hubble Space Telescope took a detailed image of NGC 253.
Amateur
As one of the brightest galaxies in the sky, the Sculptor Galaxy can be seen through binoculars and is near the star Beta Ceti. It is considered one of the most easily viewed galaxies in the sky after the Andromeda Galaxy.
The Sculptor Galaxy is a good target for observation with a telescope with a 300 mm diameter or larger. In such telescopes, it appears as a galaxy with a long, oval bulge and a mottled galactic disc. Although the bulge appears only slightly brighter than the rest of the galaxy, it is fairly extended compared to the disk. In 400 mm scopes and larger, a dark dust lane northwest of the nucleus is visible, and over a dozen faint stars can be seen superimposed on the bulge. Some people claim to have observed the galaxy with the unaided eye under exceptional viewing conditions.
Features
The Sculptor Galaxy is located at the center of the Sculptor Group, one of the nearest groups of galaxies to the Milky Way. The Sculptor Galaxy (the brightest galaxy in the group and one of the intrinsically brightest galaxies in the vicinity of ours, only surpassed by the Andromeda Galaxy and the Sombrero Galaxy) and the companion galaxies NGC 247, PGC 2881, PGC 2933, Sculptor-dE1, and UGCA 15 form a gravitationally-bound core near the center of the group. Most other galaxies associated with the Sculptor Group are only weakly gravitationally bound to this core.
Starburst
NGC 253's starburst has created several super star clusters on NGC 253's center (discovered with the aid of the Hubble Space Telescope): one with a mass of solar masses, and absolute magnitude of at least −15, and two others with solar masses and absolute magnitudes around −11; later studies have discovered an even more massive cluster heavily obscured by NGC 253's interstellar dust with a mass of solar masses, an age of around years, and rich in Wolf-Rayet stars. The super star clusters are arranged in an ellipse around the center of NGC 253, which from the Earth's perspective appears as a flat line.
Star formation is also high in the northeast of NGC 253's disk, where a number of red supergiant stars can be found, and in its halo there are young stars as well as some amounts of neutral hydrogen. This, along with other peculiarities found in NGC 253, suggest that a gas-rich dwarf galaxy collided with it 200 million years ago, disturbing its disk and starting the present starburst.
As happens in other galaxies experiencing strong star formation, such as Messier 82, NGC 4631, or NGC 4666, the stellar winds of the massive stars produced in the starburst, as well as their deaths as supernovae, have blown material out into NGC 253's halo in the form of a superwind that seems to be inhibiting star formation in the galaxy.
Novae and Supernovae
Although supernovae are generally associated with starburst galaxies, only one has been detected within the Sculptor Galaxy. SN 1940E (type unknown, mag. 14) was discovered by Fritz Zwicky on 22 November 1940, located approximately 54″ southwest of the galaxy's nucleus.
NGC 253 is close enough that classical novae can also be detected. The first confirmed nova in this galaxy was discovered by BlackGEM at magnitude 19.6 on 12 July 2024, and designated AT 2024pid.
Central black hole
Research suggests the presence of a supermassive black hole in the center of this galaxy with a mass estimated to be 5 million times that of the Sun, which is slightly heavier than Sagittarius A*.
Distance estimates
At least two techniques have been used to measure distances to Sculptor in the past ten years.
Using the planetary nebula luminosity function method, an estimate of 10.89 million light years (or Mly; 3.34 Megaparsecs, or Mpc) was achieved in 2005.
The Sculptor Galaxy is close enough that the tip of the red-giant branch (TRGB) method may also be used to estimate its distance. The estimated distance to Sculptor using this technique in 2004 yielded ().
A weighted average of the most reliable distance estimates gives a distance of ().
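Combining independent distance estimates into a weighted average is normally done with inverse-variance weights. The sketch below illustrates the method only; apart from the 3.34 Mpc planetary-nebula value quoted above, the numbers are placeholders rather than the actual published estimates.

```python
# Sketch of an inverse-variance weighted mean, the usual way independent
# distance estimates are combined. The (distance, uncertainty) pairs below
# are placeholders, not the actual published estimates for NGC 253.
def weighted_mean(estimates):
    """estimates: list of (value, one_sigma_uncertainty) pairs."""
    weights = [1.0 / (sigma ** 2) for _, sigma in estimates]
    total_w = sum(weights)
    mean = sum(w * value for w, (value, _) in zip(weights, estimates)) / total_w
    sigma = (1.0 / total_w) ** 0.5
    return mean, sigma

# Placeholder values in megaparsecs (only 3.34 Mpc appears in the text above):
mean_mpc, sigma_mpc = weighted_mean([(3.34, 0.30), (3.60, 0.25), (3.50, 0.20)])
print(f"weighted distance ≈ {mean_mpc:.2f} ± {sigma_mpc:.2f} Mpc")
```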
Satellite
An international team of researchers has used the Subaru Telescope to identify a faint dwarf galaxy disrupted by NGC 253. The satellite galaxy is called NGC 253-dw2 and may not survive its next passage by its much larger host. The host galaxy may suffer some damage too if the dwarf is massive enough. The interplay between the two galaxies is responsible for the disturbance in NGC 253's structure.
| Physical sciences | Notable galaxies | Astronomy |
4096049 | https://en.wikipedia.org/wiki/Bosphorus%20Bridge | Bosphorus Bridge | The Bosphorus Bridge (), known officially as the 15 July Martyrs Bridge () and colloquially as the First Bridge (), is the oldest and southernmost of the three suspension bridges spanning the Bosphorus strait (Turkish: Boğaziçi) in Istanbul, Turkey, thus connecting Europe and Asia (alongside the Fatih Sultan Mehmet Bridge and Yavuz Sultan Selim Bridge). The bridge extends between Ortaköy (in Europe) and Beylerbeyi (in Asia).
It is a gravity-anchored suspension bridge with steel towers and inclined hangers. The aerodynamic deck hangs on steel cables. It is long with a deck width of . The distance between the towers (main span) is and the total height of the towers is . The clearance of the bridge from sea level is .
Upon its completion in 1973, the Bosphorus Bridge had the fourth-longest suspension bridge span in the world, and the longest outside the United States (only the Verrazano-Narrows Bridge, Golden Gate Bridge and Mackinac Bridge had a longer span in 1973). The Bosphorus Bridge remained the longest suspension bridge in Europe until the completion of the Humber Bridge in 1981, and the longest suspension bridge in Asia until the completion of the Fatih Sultan Mehmet Bridge (Second Bosphorus Bridge) in 1988 (which was surpassed by the Minami Bisan-Seto Bridge in 1989). Currently, the Bosphorus Bridge has the 40th-longest suspension bridge span in the world.
After a group of soldiers took control and partially closed off the bridge during the military coup d'état attempt on 15 July 2016, Prime Minister Binali Yıldırım proclaimed on 25 July 2016 the decision of the Cabinet of Turkey that the bridge will be formally renamed as the 15 Temmuz Şehitler Köprüsü (July 15th Martyrs Bridge) in memory of those killed while resisting the attempted coup.
The Bosphorus Bridge is famous for its important transport routes, connecting parts of Europe to Turkey.
Precedents and proposals
The idea of a bridge crossing the Bosphorus dates back to antiquity. The Greek writer Herodotus says in his Histories that, on the orders of Emperor Darius the Great of the Achaemenid Empire (522 BC–485 BC), Mandrocles of Samos once engineered a pontoon bridge across the Bosphorus, linking Asia to Europe; this bridge enabled Darius to pursue the fleeing Scythians as well as position his army in the Balkans to overwhelm Macedon. The first modern project for a permanent bridge across the Bosphorus was proposed to Sultan Abdul Hamid II of the Ottoman Empire by the Bosphorus Railroad Company in 1900, which included a rail link between the continents.
Construction
The decision to build a bridge across the Bosphorus was taken in 1957 by Prime Minister Adnan Menderes. For the structural engineering work, a contract was signed with the British firm Freeman Fox & Partners in 1968. The bridge was designed by the British civil engineers Gilbert Roberts, William Brown and Michael Parsons, who also designed the Humber Bridge, Severn Bridge, and Forth Road Bridge. David B. Steinman, an American engineer who had recently designed the Mackinac Bridge, was also contracted, but died early in the design process in 1960. Construction started in February 1970, with ceremonies attended by President Cevdet Sunay and Prime Minister Süleyman Demirel. The bridge was built by the Turkish firm Enka Construction & Industry Co. along with the co-contractors Cleveland Bridge & Engineering Company (England) and Hochtief AG (Germany).
The bridge was completed on 30 October 1973, one day after the 50th anniversary of the founding of the Republic of Turkey, and opened by President Fahri Korutürk and Prime Minister Naim Talu. The cost of the bridge was US$200 million ($ in dollars).
Upon the bridge's opening, it was often defined by the media as the first bridge between Asia and Europe since the pontoon bridge of Xerxes in 480 BC. That bridge, however, spanned the Hellespont (Dardanelles) strait to the southwest of the Bosphorus, across the Sea of Marmara, and was in fact the second pontoon bridge between Asia and Europe after an earlier one built by Darius the Great across the Bosphorus strait in 513 BC.
Operation and tolls
The bridge highway is eight lanes wide. Three standard lanes, one emergency lane and one pedestrian lane serve each direction. On weekday mornings, most commuter traffic flows westbound to Europe, so four of the six lanes run westbound and only two eastbound. Conversely, on weekday evenings, four lanes are dedicated to eastbound traffic and two lanes, to westbound traffic.
For the first three years, pedestrians could walk over the bridge, reaching it with elevators inside the towers on both sides. No pedestrians or commercial vehicles, such as trucks, are allowed to use the bridge today.
Today, around 180,000 vehicles pass daily in both directions, with almost 85% being cars. On 29 December 1997, the one-billionth vehicle passed the bridge. Fully loaded, the bridge sags about in the middle of the span.
It is a toll bridge. A toll is charged for passing from Europe to Asia, but not for passing in the reverse direction.
Between 1999 and 2006, some of the toll booths (#9 - #13), which were located to the far left as motorists approached them, were unmanned and equipped only with a remote payment system (Turkish: OGS). In addition to the OGS system, another toll pay system with special contactless smart cards (Turkish: KGS) was installed at specific toll booths in 2005. Toll payments in cash were stopped on 3 April 2006.
Between 2006 and 2012, toll booths accepted only OGS or KGS. An OGS device or KGS card could be obtained at various stations before reaching the toll plazas of highways and bridges. In 2006, the toll was 3.00 TL or about $2.00.
Since April 2007, a computerised LED lighting system of changing colours and patterns, developed by Philips, illuminates the bridge at night.
On 17 September 2012, the KGS system on the Bosphorus Bridge was replaced by the new HGS system (Turkish: Hızlı Geçiş Sistemi), which also replaced the OGS system a decade later, on 31 March 2022. The HGS system requires a batteryless front-window sticker with a passive radio-frequency identification (RFID) chip, whereas the older OGS system required a small battery-powered RFID device that was stuck to the front window.
In 2017, the toll increased by nearly 50%, from 4.75 to 7 TRY. After 21 months, in late 2019, the toll went up another 50% to 10.50 TRY. Tolls need to be increased almost every year to keep up with high producer price inflation.
Notable events
The bridge was depicted on the reverse of the Turkish 1000 lira banknotes of 1978–1986.
Since 1979, every October, the annual Intercontinental Istanbul Eurasia Marathon crosses the bridge on its way from Asia to Europe. During the marathon, the bridge is closed to vehicular traffic.
On 15 May 2005 at 07:00 local time, U.S. tennis star Venus Williams played a show game with Turkish player İpek Şenoğlu on the bridge, the first tennis match played on two continents. The event promoted the upcoming 2005 WTA İstanbul Cup and lasted five minutes. After the exhibition, they both threw a tennis ball into the Bosphorus.
On 17 July 2005 at 10:30 local time, British Formula One driver David Coulthard drove his Red Bull racing car across the bridge from the European side to the Asian side and then, after turning with a powerslide at the toll plaza, back to the European side as an exhibition run. He parked his car in the garden of Dolmabahçe Palace, where his ride had started. While crossing the bridge in his Formula One car, Coulthard was picked up by the automatic surveillance system and fined 20 euros for passing through the toll booths without paying. His team agreed to pay the fine on his behalf.
On 5 November 2013, World No. 1 golfer Tiger Woods, visiting for the 2013 Turkish Airlines Open golf tournament held between 7 and 10 November, was brought to the bridge by helicopter and hit several exhibition shots, driving balls from the Asian side toward the European side along one carriageway of the bridge, which was closed to traffic for about an hour.
On 15 July 2016, the bridge was blocked by a rogue faction of the Turkish Armed Forces during a coup attempt. They arrested civilians and police officers. The soldiers involved surrendered to police and to civilians the next day. On 25 July 2016, Binali Yıldırım, Turkey's last prime minister before a presidential system was adopted with a referendum in 2017, announced that the bridge would be renamed as the 15 Temmuz Şehitler Köprüsü (15 July Martyrs Bridge). In honor of the victims who were martyred while resisting the coup attempt, a monument, museum and mosque were built on a roadside hill near the Asian (Anatolian) end of the bridge.
| Technology | Bridges | null |
4101529 | https://en.wikipedia.org/wiki/Rocker%20box | Rocker box | A rocker box (also known as a cradle or a big box) is a gold mining implement for separating alluvial placer gold from sand and gravel which was used in placer mining in the 19th century. It consists of a high-sided box, which is open on one end and on top, and was placed on rockers.
The inside bottom of the box is lined with riffles and usually a carpet (called Miner's Moss) similar to a sluice box. On top of the box is a classifier sieve (usually with half-inch or quarter-inch openings) which screens-out larger pieces of rock and other material, allowing only finer sand and gravel through. Between the sieve and the lower sluice section is a baffle, which acts as another trap for fine gold and also ensures that the aggregate material being processed is evenly distributed before it enters the sluice section. It sits at an angle and points towards the closed back of the box. Traditionally, the baffle consisted of a flexible apron made of canvas or a similar material, which had a sag of about an inch and a half in the center, to act as a collection pocket for fine gold. Later rockers (including most modern ones) dispensed with the flexible apron and used a pair of solid wood or metal baffle boards. These are sometimes covered with carpet to trap fine gold. The entire device sits on rockers at a slight gradient, which allows it to be rocked side to side.
Today, the rocker box is not used as extensively as the sluice, but it remains an effective method of recovering gold in areas where there is not enough available water to operate a sluice effectively. Like a sluice box, the rocker box has riffles and a carpet in it to trap gold. It was designed to be used in areas with less water than a sluice box requires. Processing involves pouring water from a small cup and then rocking the small sluice box like a cradle, hence the names rocker box and cradle.
Rocker boxes must be manipulated carefully to prevent losing the gold. Although large and difficult to move, the rocker can process about twice as much gravel, and therefore recover more gold, in one day than an ordinary gold mining pan. The rocker, like the pan, is used extensively in small-scale placer work, in sampling, and for washing sluice concentrates and material cleaned by hand from bedrock in other placer operations. One to three cubic yards, bank measure, can be dug and washed in a rocker per man-shift, depending upon the distance the gravel or water has to be carried, the character of the gravel, and the size of the rocker.
Rockers are usually homemade and display a variety of designs. A favorite design consists essentially of a combination washing box and screen, a canvas or carpet apron under the screen, a short sluice with two or more riffles, and rockers under the sluice. The bottom of the washing box consists of sheet metal with holes about a half an inch in diameter punched in it, or a half-inch mesh screen can be used. Dimensions shown are satisfactory, but variations are possible. The bottom of the rocker should be made of a single wide, smooth board, which will greatly facilitate cleanups. The materials for building a rocker cost only a few dollars, depending mainly on the source of lumber.
| Technology | Metallurgy | null |
1513400 | https://en.wikipedia.org/wiki/Seine%20fishing | Seine fishing | Seine fishing (or seine-haul fishing; ) is a method of fishing that employs a surrounding net, called a seine, that hangs vertically in the water with its bottom edge held down by weights and its top edge buoyed by floats. Seine nets can be deployed from the shore as a beach seine, or from a boat.
Boats deploying seine nets are known as seiners. Two main types of seine net are deployed from seiners: purse seines and Danish seines. A seine differs from a gillnet, in that a seine encloses fish, where a gillnet directly snares fish.
Etymology
The word seine has its origins in the Old English segne, which entered the language via Latin sagena, from the original Greek σαγήνη sagēnē (a drag-net).
History
Seines have been used widely in the past, including by Stone Age societies. For example, the Māori used large canoes to deploy seine nets which could be over a kilometer long. The nets were woven from green flax, with stone weights and light wood or gourd floats, and could require hundreds of men to haul.
Native Americans on the Columbia River wove seine nets from spruce root fibers or wild grass, again using stones as weights. For floats they used sticks made of cedar which moved in a way which frightened the fish and helped keep them together.
Arrian's description of Alexander the Great's expedition on the Makran coast in 325 B.C. includes a detailed description of seine fishing by a tribe known as the Ichthyophagi (Fish-eaters).
Seine nets are also well documented in ancient cultures in the Mediterranean region. They appear in Egyptian tomb paintings from 3000 BCE. In ancient Roman literature, the poet Ovid makes many references to seine nets, including the use of cork floats and lead weights.
Beach seine
The beach seine is employed by anchoring a section of netting on the shoreline, then dragging the net into the water and surrounding the fish, before pulling it ashore. Several countries have prohibited the use of beach seines. Kenya outlawed the use of beach seines in 2001.
Purse seine
A common type of seine is a purse seine, named such because along the bottom are a number of rings. A line (referred to as a purse-line) passes through all the rings, and when pulled, draws the rings close to one another, preventing the fish from "sounding", or swimming down to escape the net. This operation is similar to a traditional style purse, which has a drawstring.
The purse seine is a preferred technique for capturing fish species which school, or aggregate, close to the surface: sardines, mackerel, anchovies, herring, and certain species of tuna (schooling); and salmon soon before they swim up rivers and streams to spawn (aggregation). Boats equipped with purse seines are called purse seiners.
Purse seines are ranked by experts as one of the most sustainable commercial fishing methods when compared with other options. Purse seine fishing can result in smaller amounts of by-catch (unintentionally caught fish), especially when used to catch large species of fish (like herring or mackerel) that shoal tightly together. When used to catch fish that shoal together with other species, or when used in parallel with fish aggregating devices, the percentage of by-catch greatly increases.
Use of purse seines is regulated by many countries; in Sri Lanka, for example, using this type of net within of the shore is illegal. However, they can be used in the deep sea, after obtaining permission from authorities. Purse seine fishing can have negative impacts on fish stocks because it can involve the bycatch of non-target species and it can put too much pressure on fish stocks.
Power block
The power block is a mechanized pulley used on some seiners to haul in the nets. According to the UN Food and Agriculture Organization, no single invention has contributed more to the effectiveness of purse seine net hauling than the power block.
The Puretic power block line was introduced in the 1950s and was the key factor in the mechanization of purse seining. The combination of these blocks with advances in fluid hydraulics and the new large synthetic nets changed the character of purse seine fishing. The original Puretic power block was driven by an endless rope from the warping head of a winch. Nowadays, power blocks are usually driven by hydraulic pumps powered by the main or auxiliary engine. Their rpm, pull and direction can be controlled remotely.
A minimum of three people are required for power block seining: the skipper, the skiff operator, and the corkline stacker. In many operations a fourth person stacks the leadline, and often a fifth person stacks the web.
Drum
In certain parts of the western United States as well as Canada, specifically on the coast of British Columbia, drum seining is a method of seine fishing which was adopted in the late 1950s and is used exclusively in that region.
The drum seine uses a horizontally mounted drum to haul and store the net instead of a power block. The net is pulled in over a roller, which spans the stern, and then passes through a spooling gear with upright rollers. The spooling gear is moved from side to side across the stern which allows the net to be guided and wound tightly on the drum.
There are several advantages to the drum seine over the power block. The net can be hauled very quickly, at more than twice the speed possible with a power block; it does not require overhead handling, and the process is therefore safer. The most important advantage is that the drum system can be operated with fewer deckhands. However, it is illegal to use a seine drum in the state of Alaska.
Danish seine
A Danish seine, also occasionally called an anchor seine, consists of a conical net with two long wings with a bag where the fish collect. Drag lines extend from the wings, and are long so they can surround an area.
A Danish seine is similar to a small trawl net, but the wire warps are much longer and there are no otter boards. The seine boat drags the warps and the net in a circle around the fish. The motion of the warps herds the fish into the central net.
Danish seiner vessels are usually larger than purse seiners, though they are often accompanied by a smaller vessel. The drag lines are often stored on drums or coiled onto the deck by a coiling machine. A brightly coloured buoy, anchored as a "marker", serves as a fixed point when hauling the seine. A power block, usually mounted on a boom or a slewing deck crane, hauls the seine net.
Danish seining works best on demersal fish which are either scattered on or close to the bottom of the sea, or are aggregated (schooling). It is used on flat but rough seabeds that are not trawlable. The method is especially useful in northern regions, but is little used in tropical to sub-tropical areas.
The net is deployed, with one end attached to an anchored dan (marker) buoy, by the main vessel, the seiner, or by a smaller auxiliary boat. A drag line is paid out, followed by a net wing. As the seiner sweeps in a big circle returning to the buoy, the deployment continues with the seine bag and the remaining wing, finishing with the remaining drag line. In this way a large area can be surrounded. Next the drag lines are hauled in using rope-coiling machines until the catch bag can be secured.
The seine netting method developed in Denmark. Scottish seining ("fly dragging") was a later modification. The original Danish procedure is much the same as fly dragging, except that an anchored marker buoy is used when hauling and the warps and net are closed in by winch.
| Technology | Hunting and fishing | null |
1514469 | https://en.wikipedia.org/wiki/Column%20chromatography | Column chromatography | Column chromatography in chemistry is a chromatography method used to isolate a single chemical compound from a mixture. Chromatography is able to separate substances based on differential adsorption of compounds to the adsorbent; compounds move through the column at different rates, allowing them to be separated into fractions. The technique is widely applicable, as many different adsorbents (normal phase, reversed phase, or otherwise) can be used with a wide range of solvents. The technique can be used on scales from micrograms up to kilograms. The main advantage of column chromatography is the relatively low cost and disposability of the stationary phase used in the process. The latter prevents cross-contamination and stationary phase degradation due to recycling. Column chromatography can be done using gravity to move the solvent, or using compressed gas to push the solvent through the column.
A thin-layer chromatogram can show how a mixture of compounds will behave when purified by column chromatography. The separation is first optimised using thin-layer chromatography before performing column chromatography.
Column preparation
A column is prepared by packing a solid adsorbent into a cylindrical glass or plastic tube. The size will depend on the amount of compound being isolated. The base of the tube contains a filter, either a cotton or glass wool plug, or glass frit to hold the solid phase in place. A solvent reservoir may be attached at the top of the column.
Two methods are generally used to prepare a column: the dry method and the wet method. For the dry method, the column is first filled with dry stationary phase powder, followed by the addition of mobile phase, which is flushed through the column until it is completely wet, and from this point is never allowed to run dry. For the wet method, a slurry is prepared of the eluent with the stationary phase powder and then carefully poured into the column. The top of the silica should be flat, and the top of the silica can be protected by a layer of sand. Eluent is slowly passed through the column to advance the organic material.
The individual components are retained by the stationary phase differently and separate from each other while they are running at different speeds through the column with the eluent. At the end of the column they elute one at a time. During the entire chromatography process the eluent is collected in a series of fractions. Fractions can be collected automatically by means of fraction collectors. The productivity of chromatography can be increased by running several columns at a time. In this case multi stream collectors are used. The composition of the eluent flow can be monitored and each fraction is analyzed for dissolved compounds, e.g. by analytical chromatography, UV absorption spectra, or fluorescence. Colored compounds (or fluorescent compounds with the aid of a UV lamp) can be seen through the glass wall as moving bands.
Stationary phase
The stationary phase or adsorbent in column chromatography is a solid. The most common stationary phase for column chromatography is silica gel, the next most common being alumina. Cellulose powder has often been used in the past. A wide range of stationary phases are available in order to perform ion exchange chromatography, reversed-phase chromatography (RP), affinity chromatography or expanded bed adsorption (EBA). The stationary phases are usually finely ground powders or gels and/or are microporous for an increased surface, though in EBA a fluidized bed is used. There is an important ratio between the stationary phase weight and the dry weight of the analyte mixture that can be applied onto the column. For silica column chromatography, this ratio lies within 20:1 to 100:1, depending on how close to each other the analyte components are being eluted.
Mobile phase (eluent)
The mobile phase or eluent is a solvent or a mixture of solvents used to move the compounds through the column. It is chosen so that the retention factor value of the compound of interest is roughly around 0.2 - 0.3 in order to minimize the time and the amount of eluent to run the chromatography. The eluent has also been chosen so that the different compounds can be separated effectively. The eluent is optimized in small scale pretests, often using thin layer chromatography (TLC) with the same stationary phase, using solvents of different polarity until a suitable solvent system is found. Common mobile phase solvents, in order of increasing polarity, include hexane, dichloromethane, ethyl acetate, acetone, and methanol. A common solvent system is a mixture of hexane and ethyl acetate, with proportions adjusted until the target compound has a retention factor of 0.2 - 0.3. Contrary to common misconception, methanol alone can be used as an eluent for highly polar compounds, and does not dissolve silica gel.
There is an optimum flow rate for each particular separation. A faster flow rate of the eluent minimizes the time required to run a column and thereby minimizes diffusion, resulting in a better separation. However, the maximum flow rate is limited because a finite time is required for the analyte to equilibrate between the stationary phase and mobile phase, see Van Deemter's equation. A simple laboratory column runs by gravity flow. The flow rate of such a column can be increased by extending the fresh eluent filled column above the top of the stationary phase or decreased by the tap controls. Faster flow rates can be achieved by using a pump or by using compressed gas (e.g. air, nitrogen, or argon) to push the solvent through the column (flash column chromatography).
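To illustrate the trade-off described above, the following minimal sketch evaluates a Van Deemter curve H(u) = A + B/u + C·u and locates the optimum linear velocity. The coefficient values and variable names are purely illustrative assumptions, not values taken from this article.

```python
import numpy as np

# Illustrative Van Deemter coefficients (not measured values):
# A: eddy diffusion, B: longitudinal diffusion, C: resistance to mass transfer.
A, B, C = 0.10, 0.50, 0.05   # plate height in mm, linear velocity in mm/s

def plate_height(u):
    """Van Deemter equation H(u) = A + B/u + C*u."""
    return A + B / u + C * u

u = np.linspace(0.5, 10.0, 200)
h = plate_height(u)
u_opt = np.sqrt(B / C)                 # analytic minimum of H(u)
print(u_opt, plate_height(u_opt))      # optimum velocity and minimum plate height
print(u[np.argmin(h)])                 # numerical check against the grid
```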
The particle size of the stationary phase is generally finer in flash column chromatography than in gravity column chromatography. For example, one of the most widely used silica gel grades in the former technique is mesh 230 – 400 (40 – 63 μm), while the latter technique typically requires mesh 70 – 230 (63 – 200 μm) silica gel.
A spreadsheet that assists in the successful development of flash columns has been developed. The spreadsheet estimates the retention volume and band volume of analytes, the fraction numbers expected to contain each analyte, and the resolution between adjacent peaks. This information allows users to select optimal parameters for preparative-scale separations before the flash column itself is attempted.
Automated systems
Column chromatography is an extremely time-consuming stage in any lab and can quickly become the bottleneck for any process lab. Many manufacturers like Biotage, Buchi, Interchim and Teledyne Isco have developed automated flash chromatography systems (typically referred to as LPLC, low pressure liquid chromatography, around ) that minimize human involvement in the purification process. Automated systems will include components normally found on more expensive high performance liquid chromatography (HPLC) systems such as a gradient pump, sample injection ports, a UV detector and a fraction collector to collect the eluent. Typically these automated systems can separate samples from a few milligrams up to an industrial scale of many kilograms, and offer a much cheaper and quicker solution than doing multiple injections on prep-HPLC systems.
The resolution (or the ability to separate a mixture) on an LPLC system will always be lower than on an HPLC system, as the packing material in an HPLC column can be much smaller, typically only 5 micrometres, thus increasing the stationary phase surface area, increasing surface interactions and giving better separation. However, the use of this small packing medium causes high back pressure, which is why the technique is termed high pressure liquid chromatography. LPLC columns are typically packed with silica of around 50 micrometres, reducing back pressure and resolution, but also removing the need for expensive high pressure pumps. Manufacturers are now starting to move into higher pressure flash chromatography systems and have termed these medium pressure liquid chromatography (MPLC) systems, which operate above .
Column chromatogram resolution calculation
Typically, column chromatography is set up with peristaltic pumps, flowing buffers and the solution sample through the top of the column. The solutions and buffers pass through the column where a fraction collector at the end of the column setup collects the eluted samples. Prior to the fraction collection, the samples that are eluted from the column pass through a detector such as a spectrophotometer or mass spectrometer so that the concentration of the separated samples in the sample solution mixture can be determined.
For example, to separate two proteins with different binding capacities to the column from a solution sample, a good choice of detector would be a spectrophotometer set to a wavelength of 280 nm. The higher the concentration of protein in the solution eluting from the column, the higher the absorbance at that wavelength.
Because the column chromatography has a constant flow of eluted solution passing through the detector at varying concentrations, the detector must plot the concentration of the eluted sample over a course of time. This plot of sample concentration versus time is called a chromatogram.
The ultimate goal of chromatography is to separate different components from a solution mixture. The resolution expresses the extent of separation between the components from the mixture. The higher the resolution of the chromatogram, the better the extent of separation of the samples the column gives. This data is a good way of determining the column's separation properties of that particular sample. The resolution can be calculated from the chromatogram.
The separate curves in the diagram represent different sample elution concentration profiles over time based on their affinity to the column resin. To calculate resolution, the retention time and curve width are required.
Retention time is the time from the start of signal detection by the detector to the peak height of the elution concentration profile of each different sample.
Curve width is the width of the concentration profile curve of the different samples in the chromatogram in units of time.
A simplified method of calculating chromatogram resolution is to use the plate model. The plate model assumes that the column can be divided into a certain number of sections, or plates and the mass balance can be calculated for each individual plate. This approach approximates a typical chromatogram curve as a Gaussian distribution curve. By doing this, the curve width is estimated as 4 times the standard deviation of the curve, 4σ. The retention time is the time from the start of signal detection to the time of the peak height of the Gaussian curve.
From the variables in the figure above, the resolution, plate number, and plate height of the column plate model can be calculated using the equations:
Resolution (Rs):
Rs = 2(tRB – tRA)/(wB + wA),
where:
tRB = retention time of solute B
tRA = retention time of solute A
wB = Gaussian curve width of solute B
wA = Gaussian curve width of solute A
Plate Number (N):
N = (tR)² / (w/4)²
Plate Height (H):
H = L/N
where L is the length of the column.
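As a worked illustration of the plate-model formulas above, the sketch below computes the resolution, plate number, and plate height for two hypothetical peaks. The retention times, peak widths, and column length are made-up example values.

```python
def resolution(t_ra, t_rb, w_a, w_b):
    """Rs = 2(tRB - tRA) / (wB + wA), with all quantities in the same time units."""
    return 2.0 * (t_rb - t_ra) / (w_b + w_a)

def plate_number(t_r, w):
    """N = (tR)^2 / (w/4)^2, assuming a Gaussian peak of base width w = 4*sigma."""
    return (t_r / (w / 4.0)) ** 2

def plate_height(column_length, n_plates):
    """H = L / N."""
    return column_length / n_plates

# Example: two peaks eluting at 8.0 and 9.5 minutes, each with a base width of 0.9 min,
# on a hypothetical 25 cm column.
n = plate_number(8.0, 0.9)
print(resolution(8.0, 9.5, 0.9, 0.9))   # ~1.67, close to baseline separation
print(n)                                 # plate number for the first peak
print(plate_height(25.0, n))             # plate height in cm
```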
Column adsorption equilibrium
For an adsorption column, the column resin (the stationary phase) is composed of microbeads. Even smaller particles such as proteins, carbohydrates, metal ions, or other chemical compounds are conjugated onto the microbeads. Each binding particle that is attached to the microbead can be assumed to bind in a 1:1 ratio with the solute sample sent through the column that needs to be purified or separated.
Binding between the target molecule to be separated and the binding molecule on the column beads can be modeled using a simple equilibrium reaction Keq = [CS]/([C][S]) where Keq is the equilibrium constant, [C] and [S] are the concentrations of the target molecule and the binding molecule on the column resin, respectively. [CS] is the concentration of the complex of the target molecule bound to the column resin.
Using this as a basis, three different isotherms can be used to describe the binding dynamics of a column chromatography: linear, Langmuir, and Freundlich.
The linear isotherm occurs when the solute concentration needed to be purified is very small relative to the binding molecule. Thus, the equilibrium can be defined as:
[CS] = Keq[C].
For industrial scale uses, the total binding molecules on the column resin beads must be factored in because unoccupied sites must be taken into account. The Langmuir isotherm and Freundlich isotherm are useful in describing this equilibrium. The Langmuir isotherm is given by:
[CS] = (KeqStot[C])/(1 + Keq[C]), where Stot is the total binding molecules on the beads.
The Freundlich isotherm is given by:
[CS] = Keq[C]^(1/n)
The Freundlich isotherm is used when the column can bind to many different samples in the solution that needs to be purified. Because the many different samples have different binding constants to the beads, there are many different Keqs. Therefore, the Langmuir isotherm is not a good model for binding in this case.
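To make the three isotherms concrete, here is a small sketch that evaluates each of them over a range of free solute concentrations. The parameter values (Keq, Stot, n) are illustrative assumptions, not values from the text.

```python
import numpy as np

def linear_isotherm(c, k_eq):
    # [CS] = Keq [C]; valid when the solute is dilute relative to the binding sites
    return k_eq * c

def langmuir_isotherm(c, k_eq, s_tot):
    # [CS] = Keq Stot [C] / (1 + Keq [C]); saturates at Stot as [C] grows
    return k_eq * s_tot * c / (1.0 + k_eq * c)

def freundlich_isotherm(c, k_eq, n):
    # [CS] = Keq [C]^(1/n); empirical form used when many species with different
    # binding constants are present
    return k_eq * c ** (1.0 / n)

c = np.linspace(0.0, 5.0, 6)   # free solute concentration [C], arbitrary units
print(linear_isotherm(c, k_eq=2.0))
print(langmuir_isotherm(c, k_eq=2.0, s_tot=1.5))
print(freundlich_isotherm(c, k_eq=2.0, n=2.0))
```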
| Physical sciences | Chromatography | Chemistry |
1514751 | https://en.wikipedia.org/wiki/Triple%20junction | Triple junction | A triple junction is the point where the boundaries of three tectonic plates meet. At the triple junction each of the three boundaries will be one of three types – a ridge (R), trench (T) or transform fault (F) – and triple junctions can be described according to the types of plate margin that meet at them (e.g. fault–fault–trench, ridge–ridge–ridge, or abbreviated F-F-T, R-R-R). Of the ten possible types of triple junctions only a few are stable through time (stable in this context means that the geometrical configuration of the triple junction will not change through geologic time). The meeting of four or more plates is also theoretically possible, but junctions will only exist instantaneously.
History
The first scientific paper detailing the triple-junction concept was published in 1969 by Dan McKenzie and W. Jason Morgan. The term had traditionally been used for the intersection of three divergent boundaries or spreading ridges. These three divergent boundaries ideally meet at near 120° angles.
In plate tectonics theory during the breakup of a continent, three divergent boundaries form, radiating out from a central point (the triple junction). One of these divergent plate boundaries fails (see aulacogen) and the other two continue spreading to form an ocean. The opening of the south Atlantic Ocean started at the south of the South American and African continents, reaching a triple junction in the present Gulf of Guinea, from where it continued to the west. The NE-trending Benue Trough is the failed arm of this junction.
In the years since, the term triple-junction has come to refer to any point where three tectonic plates meet.
Interpretation
The properties of triple junctions are most easily understood from the purely kinematic point of view where the plates are rigid and moving over the surface of the Earth. No knowledge of the Earth's interior or the geological details of the crust are then needed. Another useful simplification is that the kinematics of triple junctions on a flat Earth are essentially the same as those on the surface of a sphere. On a sphere, plate motions are described as relative rotations about Euler poles (see Plate reconstruction), and the relative motion at every point along a plate boundary can be calculated from this rotation. But the area around a triple junction is small enough (relative to the size of the sphere) and (usually) far enough from the pole of rotation, that the relative motion across a boundary can be assumed to be constant along that boundary. Thus, analysis of triple junctions can usually be done on a flat surface with motions defined by vectors.
Stability
Triple junctions may be described and their stability assessed without use of the geological details but simply by defining the properties of the ridges, trenches and transform faults involved, making some simplifying assumptions and applying simple velocity calculations. This assessment can generalise to most actual triple junction settings provided the assumptions and definitions broadly apply to the real Earth.
A stable junction is one at which the geometry of the junction is retained with time as the plates involved move. This places restrictions on relative velocities and plate boundary orientation. An unstable triple junction will change with time: it may become another form of triple junction (RRF junctions easily evolve to FFR junctions), it may change geometry, or it may simply not be feasible (as in the case of FFF junctions). The inherent instability of an FFF junction is believed to have caused the formation of the Pacific plate about 190 million years ago.
By assuming that plates are rigid and that the Earth is spherical, Leonhard Euler's theorem of motion on a sphere can be used to reduce the stability assessment to determining boundaries and relative motions of the interacting plates. The rigid assumption holds very well in the case of oceanic crust, and the radius of the Earth at the equator and poles only varies by roughly one part in 300, so the Earth approximates very well to a sphere.
McKenzie and Morgan first analysed the stability of triple junctions using these assumptions with the additional assumption that the Euler poles describing the motions of the plates were such that they approximated to straight line motion on a flat surface. This simplification applies when the Euler poles are distant from the triple junction concerned. The definitions they used for R, T and F are as follows:
R – structures that produce lithosphere symmetrically and perpendicular to the relative velocity of the plates on either side (this does not always apply, for example in the Gulf of Aden).
T – structures that consume lithosphere from one side only. The relative velocity vector can be oblique to the plate boundary.
F – active faults parallel to the slip vector.
Stability criteria
For a triple junction between the plates A, B and C to exist, the following condition must be satisfied:
AvB + BvC + CvA = 0
where AvB is the relative motion of B with respect to A.
This condition can be represented in velocity space by constructing a velocity triangle ABC where the lengths AB, BC and CA are proportional to the velocities AvB, BvC and CvA respectively.
Further conditions must also be met for the triple junction to exist stably – the plates must move in a way that leaves their individual geometries unchanged. Alternatively the triple junction must move in such a way that it remains on all three of the plate boundaries involved.
McKenzie and Morgan demonstrated that these criteria can be represented on the same velocity space diagrams in the following way. The lines ab, bc and ca join points in velocity space which will leave the geometry of AB, BC and CA unchanged. These lines are the same as those that join points in velocity space at which an observer could move at the given velocity and still remain on the plate boundary. When these are drawn onto the diagram containing the velocity triangle these lines must be able to meet at a single point, for the triple junction to exist stably.
These lines necessarily are parallel to the plate boundaries as to remain on the plate boundaries the observer must either move along the plate boundary or remain stationary on it.
For a ridge the line constructed must be the perpendicular bisector of the relative motion vector as to remain in the middle of the ridge an observer would have to move at half the relative speeds of the plates either side but could also move in a perpendicular direction along the plate boundary.
For a transform fault the line must be parallel to the relative motion vector as all of the motion is parallel to the boundary direction and so the line ab must lie along AB for a transform fault separating the plates A and B.
For an observer to remain on a trench boundary they must walk along the strike of the trench but remaining on the overriding plate. Therefore, the line constructed will lie parallel to the plate boundary but passing through the point in velocity space occupied by the overriding plate.
The point at which these lines meet, J, gives the overall motion of the triple junction with respect to the Earth.
Using these criteria it can easily be shown why the FFF triple junction is not stable: the only case in which three lines lying along the sides of a triangle can meet at a point is the trivial case in which the triangle has sides lengths zero, corresponding to zero relative motion between the plates. As faults are required to be active for the purpose of this assessment, an FFF junction can never be stable.
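To make the velocity-space construction concrete, the following sketch (a hypothetical worked example, not a reproduction of McKenzie and Morgan's calculations) enforces the closure condition AvB + BvC + CvA = 0 for three plates and then finds where the perpendicular bisectors ab, bc and ca meet for an RRR junction, which is the circumcentre of the velocity triangle.

```python
import numpy as np

# Hypothetical relative-velocity vectors (mm/yr) in a flat velocity space.
# AvB + BvC + CvA must sum to zero for a triple junction between A, B and C to exist.
AvB = np.array([30.0, 0.0])
BvC = np.array([-10.0, 25.0])
CvA = -(AvB + BvC)                      # enforce closure of the velocity triangle
assert np.allclose(AvB + BvC + CvA, 0)

# Place the plates in velocity space: A at the origin, B and C from the relative motions.
A = np.array([0.0, 0.0])
B = A + AvB
C = B + BvC

def circumcentre(p, q, r):
    """Intersection of the perpendicular bisectors of triangle pqr.

    For an RRR junction each line (ab, bc, ca) is the perpendicular bisector of a
    side of the velocity triangle, so the three lines always meet at this point."""
    d = 2.0 * (p[0] * (q[1] - r[1]) + q[0] * (r[1] - p[1]) + r[0] * (p[1] - q[1]))
    ux = ((p @ p) * (q[1] - r[1]) + (q @ q) * (r[1] - p[1]) + (r @ r) * (p[1] - q[1])) / d
    uy = ((p @ p) * (r[0] - q[0]) + (q @ q) * (p[0] - r[0]) + (r @ r) * (q[0] - p[0])) / d
    return np.array([ux, uy])

J = circumcentre(A, B, C)               # velocity of the RRR triple junction itself
print("Triple junction velocity relative to plate A:", J)
```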
Types
McKenzie and Morgan determined that there were 16 types of triple junction theoretically possible, though several of these are speculative and have not necessarily been seen on Earth. These junctions were classified firstly by the types of plate boundaries meeting – for example RRR, TTR, RRT, FFT etc. – and secondly by the relative motion directions of the plates involved. Some configurations such as RRR can only have one set of relative motions whereas TTT junctions may be classified into TTT(a) and TTT(b). These differences in motion direction affect the stability criteria.
McKenzie and Morgan claimed that, of these 16 types, 14 were stable, with the FFF and RRF configurations unstable; however, York later showed that the RRF configuration could be stable under certain conditions.
Ridge–ridge–ridge junctions
An RRR junction is always stable using these definitions and therefore very common on Earth, though in a geological sense ridge spreading is usually discontinued in one direction leaving a failed rift zone. There are many examples of these present both now and in the geological past such as the South Atlantic opening with ridges spreading North and South to form the Mid-Atlantic Ridge, and an associated aulacogen, the Benue Trough, in the Niger Delta region of Africa. RRR junctions are also common as rifting along three fractures at 120° is the best way to relieve stresses from uplift at the surface of a sphere; on Earth, stresses similar to these are believed to be caused by the mantle hotspots thought to initiate rifting in continents.
The stability of RRR junctions is demonstrated below – as the perpendicular bisectors of the sides of a triangle always meet at a single point, the lines ab, bc and ca can always be made to meet regardless of relative velocities.
Ridge–trench–fault junctions
RTF junctions are less common. An unstable junction of this type (an RTF(a)) is thought to have existed at roughly 12 Ma at the mouth of the Gulf of California, where the East Pacific Rise currently meets the San Andreas Fault zone. The Guadalupe and Farallon microplates were previously being subducted under the North American plate, and the northern end of this boundary met the San Andreas Fault. Material for this subduction was provided by a ridge equivalent to the modern East Pacific Rise slightly displaced to the west of the trench. As the ridge itself was subducted an RTF triple junction momentarily existed, but subduction of the ridge caused the subducted lithosphere to weaken and 'tear' from the point of the triple junction. The loss of slab pull caused by the detachment of this lithosphere ended the RTF junction, giving the present-day ridge – fault system. An RTF(a) is stable if ab goes through the point in velocity space C, or if ac and bc are collinear.
Trench–trench–trench junctions
A TTT(a) junction can be found in central Japan where the Eurasian plate overrides the Philippine and Pacific plates, with the Philippine plate also overriding the Pacific. Here the Japan Trench effectively branches to form the Ryukyu and Bonin arcs. The stability criteria for this type of junction are that either ab and ac form a straight line or the line bc is parallel to CA.
Examples
The junction of the Red Sea, the Gulf of Aden and the East African Rift centered in the Afar Triangle (the Afar triple junction) is the only R-R-R triple junction above sea level.
The Rodrigues triple junction is a R-R-R triple junction in the southern Indian Ocean, where the African, the Indo-Australian and the Antarctic Plates meet.
The Galapagos triple junction is an R-R-R triple junction where the Nazca, the Cocos, and the Pacific plates meet. The East Pacific Rise extends north and south from this junction and the Cocos–Nazca spreading centre goes to the east. This example is made more complex by the Galapagos Microplate which is a small separate plate on the rise just to the southeast of the triple junction.
The Chiapas coast off Tapachula, where Guatemala, North America and the Pacific join and small earthquakes occur weekly; this junction is pushed eastward by the Cocos plate.
On the west coast of North America is another unstable triple junction offshore of Cape Mendocino. To the south, the San Andreas Fault, a strike-slip fault and transform plate boundary, separates the Pacific plate and the North American plate. To the north lies the Cascadia subduction zone, where a section of the Juan de Fuca plate called the Gorda plate is being subducted under the North American plate, forming a trench (T). Another transform fault, the Mendocino Fault (F), runs along the boundary between the Pacific plate and the Gorda plate. Where the three intersect is the seismically active, F-F-T Mendocino triple junction.
The Amurian plate, the Okhotsk microplate, and the Philippine Sea plate meet in Japan near Mount Fuji. (see Mount Fuji's Geology)
The Azores triple junction is a geologic triple junction where the boundaries of three tectonic plates intersect: the North American plate, the Eurasian plate and the African plate, R-R-R.
The Boso triple junction offshore of Japan is a T-T-T triple junction between the Okhotsk microplate, Pacific plate and Philippine Sea plate.
The North Sea is located at the extinct triple junction of three former continental plates of the Palaeozoic era: Avalonia, Laurentia and Baltica.
The South Greenland triple junction was an R-R-R triple junction where the Eurasian, Greenland and North American plates diverged during the Paleogene.
The Chile triple junction is where the South American plate, the Nazca plate, and the Antarctic plate meet.
| Physical sciences | Tectonics | Earth science |
1515472 | https://en.wikipedia.org/wiki/Stokes%20parameters | Stokes parameters | The Stokes parameters are a set of values that describe the polarization state of electromagnetic radiation. They were defined by George Gabriel Stokes in 1851, as a mathematically convenient alternative to the more common description of incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse. The effect of an optical system on the polarization of light can be determined by constructing the Stokes vector for the input light and applying Mueller calculus, to obtain the Stokes vector of the light leaving the system. They can be determined from directly observable phenomena. The original Stokes paper was rediscovered independently by Francis Perrin in 1942 and by Subrahmanyan Chandrasekhar in 1947, who named them the Stokes parameters.
Definitions
The relationship of the Stokes parameters S0, S1, S2, S3 to the intensity and polarization ellipse parameters is shown in the equations below and the figure on the right:

S0 = I
S1 = I p cos 2ψ cos 2χ
S2 = I p sin 2ψ cos 2χ
S3 = I p sin 2χ

Here Ip, 2ψ and 2χ are the spherical coordinates of the three-dimensional vector with cartesian coordinates (S1, S2, S3). I is the total intensity of the beam, and p is the degree of polarization, constrained by 0 ≤ p ≤ 1. The factor of two before ψ represents the fact that any polarization ellipse is indistinguishable from one rotated by 180°, while the factor of two before χ indicates that an ellipse is indistinguishable from one with the semi-axis lengths swapped accompanied by a 90° rotation. The phase information of the polarized light is not recorded in the Stokes parameters. The four Stokes parameters are sometimes denoted I, Q, U and V, respectively.
Given the Stokes parameters, one can solve for the spherical coordinates with the following equations:

I = S0
p = √(S1² + S2² + S3²) / S0
2ψ = arctan(S2 / S1)
2χ = arctan(S3 / √(S1² + S2²))
Stokes vectors
The Stokes parameters are often combined into a vector, known as the Stokes vector:

S = (S0, S1, S2, S3) = (I, Q, U, V)
The Stokes vector spans the space of unpolarized, partially polarized, and fully polarized light. For comparison, the Jones vector only spans the space of fully polarized light, but is more useful for problems involving coherent light. The four Stokes parameters are not a preferred coordinate system of the space, but rather were chosen because they can be easily measured or calculated.
Note that there is an ambiguous sign for the component depending on the physical convention used. In practice, there are two separate conventions used, either defining the Stokes parameters when looking down the beam towards the source (opposite the direction of light propagation) or looking down the beam away from the source (coincident with the direction of light propagation). These two conventions result in different signs for , and a convention must be chosen and adhered to.
Examples
Below are shown some Stokes vectors for common states of polarization of light.
(1, 1, 0, 0): Linearly polarized (horizontal)
(1, −1, 0, 0): Linearly polarized (vertical)
(1, 0, 1, 0): Linearly polarized (+45°)
(1, 0, −1, 0): Linearly polarized (−45°)
(1, 0, 0, 1): Right-hand circularly polarized
(1, 0, 0, −1): Left-hand circularly polarized
(1, 0, 0, 0): Unpolarized

Each vector is written as (S0, S1, S2, S3) normalized to unit intensity; the signs of the circular entries depend on the sign convention discussed above.
Alternative explanation
A monochromatic plane wave is specified by its propagation vector, , and the complex amplitudes of the electric field, and , in a basis . The pair is called a Jones vector. Alternatively, one may specify the propagation vector, the phase, , and the polarization state, , where is the curve traced out by the electric field as a function of time in a fixed plane. The most familiar polarization states are linear and circular, which are degenerate cases of the most general state, an ellipse.
One way to describe polarization is by giving the semi-major and semi-minor axes of the polarization ellipse, its orientation, and the direction of rotation (see the above figure). The Stokes parameters I, Q, U, and V provide an alternative description of the polarization state which is experimentally convenient because each parameter corresponds to a sum or difference of measurable intensities. The next figure shows examples of the Stokes parameters in degenerate states.
Definitions
The Stokes parameters are defined by

I ≡ ⟨|Ex|²⟩ + ⟨|Ey|²⟩
Q ≡ ⟨|Ex|²⟩ − ⟨|Ey|²⟩
U ≡ ⟨|Ea|²⟩ − ⟨|Eb|²⟩
V ≡ ⟨|El|²⟩ − ⟨|Er|²⟩

where the subscripts refer to three different bases of the space of Jones vectors: the standard Cartesian basis (x, y), a Cartesian basis rotated by 45° (a, b), and a circular basis (l, r). The circular basis vectors are the two circular combinations of the Cartesian basis vectors, (x̂ ± iŷ)/√2; which of them corresponds to left- and which to right-handed polarization depends on the sign convention discussed above.
The symbols ⟨⋅⟩ represent expectation values. The light can be viewed as a random variable taking values in the space C2 of Jones vectors . Any given measurement yields a specific wave (with a specific phase, polarization ellipse, and magnitude), but it keeps flickering and wobbling between different outcomes. The expectation values are various averages of these outcomes. Intense, but unpolarized light will have I > 0 but
Q = U = V = 0, reflecting that no polarization type predominates. A convincing waveform is depicted at the article on coherence.
The opposite would be perfectly polarized light which, in addition, has a fixed, nonvarying amplitude—a pure sine curve. This is represented by a random variable with only a single possible value, say . In this case one may replace the brackets by absolute value bars, obtaining a well-defined quadratic map
from the Jones vectors to the corresponding Stokes vectors; more convenient forms are given below. The map takes its image in the cone defined by |I|² = |Q|² + |U|² + |V|², where the purity of the state satisfies p = 1 (see below).
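As a concrete illustration of this quadratic map, the sketch below converts a Jones vector for fully polarized light into a Stokes vector. It assumes the Cartesian (x, y) basis used above; the sign attached to V depends on the phase and handedness convention, so the choice made here is only illustrative.

```python
import numpy as np

def jones_to_stokes(ex, ey):
    """Map a Jones vector (ex, ey) of fully polarized light to (I, Q, U, V).

    Assumes the Cartesian (x, y) basis; the sign of V is convention-dependent
    and is chosen here purely for illustration.
    """
    i = abs(ex) ** 2 + abs(ey) ** 2
    q = abs(ex) ** 2 - abs(ey) ** 2
    u = 2.0 * np.real(ex * np.conj(ey))
    v = -2.0 * np.imag(ex * np.conj(ey))
    return np.array([i, q, u, v])

# Horizontal, +45° linear, and circular polarization examples.
print(jones_to_stokes(1.0, 0.0))                      # -> [1, 1, 0, 0]
print(jones_to_stokes(1/np.sqrt(2), 1/np.sqrt(2)))    # -> [1, 0, 1, 0]
print(jones_to_stokes(1/np.sqrt(2), 1j/np.sqrt(2)))   # -> [1, 0, 0, 1] with the sign choice above

# For fully polarized light the map lands on the cone I^2 = Q^2 + U^2 + V^2.
s = jones_to_stokes(0.6, 0.8j)
assert np.isclose(s[0] ** 2, s[1] ** 2 + s[2] ** 2 + s[3] ** 2)
```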
The next figure shows how the signs of the Stokes parameters are determined by the helicity and the orientation of the semi-major axis of the polarization ellipse.
Representations in fixed bases
In a fixed () basis, the Stokes parameters when using an increasing phase convention are
while for , they are
and for , they are
Properties
For purely monochromatic coherent radiation, it follows from the above equations that

Q² + U² + V² = I²,

whereas for the whole (non-coherent) beam radiation, the Stokes parameters are defined as averaged quantities, and the previous equation becomes an inequality:

Q² + U² + V² ≤ I².

However, we can define a total polarization intensity Ip, so that

Ip² = Q² + U² + V²,

where p = Ip / I is the total polarization fraction.
Let us define the complex intensity of linear polarization to be

L ≡ Q + iU.

Under a rotation of the polarization ellipse, it can be shown that I and V are invariant, but L acquires a phase factor of twice the rotation angle (that is, Q and U mix into each other).
With these properties, the Stokes parameters may be thought of as constituting three generalized intensities:
where is the total intensity, is the intensity of circular polarization, and is the intensity of linear polarization. The total intensity of polarization is , and the orientation and sense of rotation are given by
Since and , we have
Relation to the polarization ellipse
In terms of the parameters of the polarization ellipse, the Stokes parameters are

S0 = I
S1 = I p cos 2ψ cos 2χ
S2 = I p sin 2ψ cos 2χ
S3 = I p sin 2χ

Inverting the previous equation gives

I = S0
p = √(S1² + S2² + S3²) / S0
2ψ = arctan(S2 / S1)
2χ = arctan(S3 / √(S1² + S2²))
Measurement
The Stokes parameters (and thus the polarization of some electromagnetic radiation) can be directly determined from observation. Using a linear polarizer and a quarter-wave plate, the following system of equations relating the Stokes parameters to measured intensity can be obtained:
where is the irradiance of the radiation at a point when the linear polarizer is rotated at an angle of , and similarly is the irradiance at a point when the quarter-wave plate is rotated at an angle of . A system can be implemented using both plates at once at different angles to measure the parameters. This can give a more accurate measure of the relative magnitudes of the parameters (which is often the main result desired) due to all parameters being affected by the same losses.
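As an illustration of how the Stokes parameters can be recovered from a handful of intensity readings taken through a linear polarizer and a quarter-wave plate, here is a minimal sketch of one common reduction scheme. The function name, the particular set of analyzer angles, and the sign attached to V are illustrative assumptions; a real instrument's equations depend on its optical layout and sign conventions.

```python
import numpy as np

def stokes_from_intensities(i_0, i_90, i_45, i_135, i_45_qwp, i_135_qwp):
    """Recover (I, Q, U, V) from six intensity readings.

    i_xx     : intensity behind a linear polarizer at xx degrees
    i_xx_qwp : intensity behind a quarter-wave plate followed by a polarizer at xx degrees
    The sign of V depends on the wave-plate orientation and the handedness
    convention, so it may need to be flipped for a given instrument.
    """
    i = i_0 + i_90
    q = i_0 - i_90
    u = i_45 - i_135
    v = i_45_qwp - i_135_qwp
    return np.array([i, q, u, v])

# Example: made-up readings consistent with partially polarized light.
print(stokes_from_intensities(0.8, 0.2, 0.5, 0.5, 0.65, 0.35))  # -> [1.0, 0.6, 0.0, 0.3]
```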
Relationship to Hermitian operators and quantum mixed states
From a geometric and algebraic point of view, the Stokes parameters stand in one-to-one correspondence with the closed, convex, 4-real-dimensional cone of nonnegative Hermitian operators on the Hilbert space C2. The parameter I serves as the trace of the operator, whereas the entries of the matrix of the operator are simple linear functions of the four parameters I, Q, U, V, serving as coefficients in a linear combination of the Stokes operators. The eigenvalues and eigenvectors of the operator can be calculated from the polarization ellipse parameters I, p, ψ, χ.
The Stokes parameters with I set equal to 1 (i.e. the trace 1 operators) are in one-to-one correspondence with the closed unit 3-dimensional ball of mixed states (or density operators) of the quantum space C2, whose boundary is the Bloch sphere. The Jones vectors correspond to the underlying space C2, that is, the (unnormalized) pure states of the same system. Note that the overall phase (i.e. the common phase factor between the two component waves on the two perpendicular polarization axes) is lost when passing from a pure state |φ⟩ to the corresponding mixed state |φ⟩⟨φ|, just as it is lost when passing from a Jones vector to the corresponding Stokes vector.
In the basis of horizontal polarization state and vertical polarization state , the +45° linear polarization state is , the -45° linear polarization state is , the left hand circular polarization state is , and the right hand circular polarization state is . It's easy to see that these states are the eigenvectors of Pauli matrices, and that the normalized Stokes parameters (U/I, V/I, Q/I) correspond to the coordinates of the Bloch vector (, , ). Equivalently, we have , , , where is the density matrix of the mixed state.
Generally, a linear polarization at angle θ has a pure quantum state ; therefore, the transmittance of a linear polarizer/analyzer at angle θ for a mixed state light source with density matrix is , with a maximum transmittance of at if , or at if ; the minimum transmittance of is reached at the perpendicular to the maximum transmittance direction. Here, the ratio of maximum transmittance to minimum transmittance is defined as the extinction ratio , where the degree of linear polarization is . Equivalently, the formula for the transmittance can be rewritten as , which is an extended form of Malus's law; here, are both non-negative, and is related to the extinction ratio by . Two of the normalized Stokes parameters can also be calculated by .
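As a small numerical illustration of the extended Malus's law discussed above, the sketch below computes the transmittance of an ideal linear analyzer as a function of its angle for light with given normalized Stokes parameters, and extracts the maximum and minimum transmittance and the extinction ratio. The formula used, T(θ) = (1 + (Q/I) cos 2θ + (U/I) sin 2θ)/2, is the standard result for an ideal polarizer; the numerical values are made up.

```python
import numpy as np

def analyzer_transmittance(theta, q, u):
    """Transmittance of an ideal linear analyzer at angle theta (radians)
    for light with normalized Stokes parameters q = Q/I and u = U/I."""
    return 0.5 * (1.0 + q * np.cos(2 * theta) + u * np.sin(2 * theta))

# Illustrative, made-up values for a partially linearly polarized beam.
q, u = 0.6, 0.2
p_lin = np.hypot(q, u)                 # degree of linear polarization
t_max = 0.5 * (1.0 + p_lin)            # analyzer aligned with the major axis
t_min = 0.5 * (1.0 - p_lin)            # analyzer perpendicular to it
extinction_ratio = t_max / t_min

thetas = np.linspace(0.0, np.pi, 7)
print(analyzer_transmittance(thetas, q, u))
print(p_lin, t_max, t_min, extinction_ratio)
```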
It's also worth noting that a rotation of polarization axis by angle θ corresponds to the Bloch sphere rotation operator . For example, the horizontal polarization state would rotate to . The effect of a quarter-wave plate aligned to the horizontal axis is described by , or equivalently the Phase gate S, and the resulting Bloch vector becomes . With this configuration, if we perform the rotating analyzer method to measure the extinction ratio, we will be able to calculate and also verify . For this method to work, the fast axis and the slow axis of the waveplate must be aligned with the reference directions for the basis states.
The effect of a quarter-wave plate rotated by angle θ can be determined by Rodrigues' rotation formula as , with . The transmittance of the resulting light through a linear polarizer (analyzer plate) along the horizontal axis can be calculated using the same Rodrigues' rotation formula and focusing on its components on and :
The above expression is the theory basis of many polarimeters. For unpolarized light, T=1/2 is a constant. For purely circularly polarized light, T has a sinusoidal dependence on angle θ with a period of 180 degrees, and can reach absolute extinction where T=0. For purely linearly polarized light, T has a sinusoidal dependence on angle θ with a period of 90 degrees, and absolute extinction is only reachable when the original light's polarization is at 90 degrees from the polarizer (i.e. ). In this configuration, and , with a maximum of 1/2 at θ=45°, and an extinction point at θ=0°. This result can be used to precisely determine the fast or slow axis of a quarter-wave plate, for example, by using a polarizing beam splitter to obtain a linearly polarized light aligned to the analyzer plate and rotating the quarter-wave plate in between.
Similarly, the effect of a half-wave plate rotated by angle θ is described by , which transforms the density matrix to:
The above expression demonstrates that if the original light is of pure linear polarization (i.e. ), the resulting light after the half-wave plate is still of pure linear polarization (i.e. without component) with a rotated major axis. Such rotation of the linear polarization has a sinusoidal dependence on angle θ with a period of 90 degrees.
| Physical sciences | Optics | Physics |
1515653 | https://en.wikipedia.org/wiki/Satellite%20navigation | Satellite navigation | A satellite navigation or satnav system is a system that uses satellites to provide autonomous geopositioning. A satellite navigation system with global coverage is termed global navigation satellite system (GNSS). , four global systems are operational: the United States's Global Positioning System (GPS), Russia's Global Navigation Satellite System (GLONASS), China's BeiDou Navigation Satellite System (BDS), and the European Union's Galileo.
Satellite-based augmentation systems (SBAS), designed to enhance the accuracy of GNSS, include Japan's Quasi-Zenith Satellite System (QZSS), India's GAGAN and the European EGNOS, all of them based on GPS.
Previous iterations of the BeiDou navigation system and the present Indian Regional Navigation Satellite System (IRNSS), operationally known as NavIC, are examples of stand-alone operating regional navigation satellite systems (RNSS).
Satellite navigation devices determine their location (longitude, latitude, and altitude/elevation) to high precision (within a few centimeters to meters) using time signals transmitted along a line of sight by radio from satellites. The system can be used for providing position, navigation or for tracking the position of something fitted with a receiver (satellite tracking). The signals also allow the electronic receiver to calculate the current local time to a high precision, which allows time synchronisation. These uses are collectively known as Positioning, Navigation and Timing (PNT). Satnav systems operate independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the positioning information generated.
Global coverage for each system is generally achieved by a satellite constellation of 18–30 medium Earth orbit (MEO) satellites spread between several orbital planes. The actual systems vary, but all use orbital inclinations of >50° and orbital periods of roughly twelve hours (at an altitude of about ).
Classification
GNSS systems that provide enhanced accuracy and integrity monitoring usable for civil navigation are classified as follows:
GNSS-1 is the first generation system and is the combination of existing satellite navigation systems (GPS and GLONASS), with Satellite Based Augmentation Systems (SBAS) or Ground Based Augmentation Systems (GBAS). In the United States, the satellite-based component is the Wide Area Augmentation System (WAAS); in Europe, it is the European Geostationary Navigation Overlay Service (EGNOS); in Japan, it is the Multi-Functional Satellite Augmentation System (MSAS); and in India, it is the GPS-aided GEO augmented navigation (GAGAN). Ground-based augmentation is provided by systems like the Local Area Augmentation System (LAAS).
GNSS-2 is the second generation of systems that independently provide a full civilian satellite navigation system, exemplified by the European Galileo positioning system. These systems will provide the accuracy and integrity monitoring necessary for civil navigation, including aircraft. Initially, this system consisted of only Upper L Band frequency sets (L1 for GPS, E1 for Galileo, and G1 for GLONASS). In recent years, GNSS systems have begun activating Lower L Band frequency sets (L2 and L5 for GPS, E5a and E5b for Galileo, and G3 for GLONASS) for civilian use; they feature higher aggregate accuracy and fewer problems with signal reflection. As of late 2018, a few consumer-grade GNSS devices are being sold that leverage both. They are typically called "Dual band GNSS" or "Dual band GPS" devices.
By their roles in the navigation system, systems can be classified as:
There are four global satellite navigation systems, currently GPS (United States), GLONASS (Russian Federation), Beidou (China) and Galileo (European Union).
Global Satellite-Based Augmentation Systems (SBAS) such as OmniSTAR and StarFire.
Regional SBAS including WAAS (US), EGNOS (EU), MSAS (Japan), GAGAN (India) and SDCM (Russia).
Regional Satellite Navigation Systems such as India's NAVIC, and Japan's QZSS.
Continental scale Ground Based Augmentation Systems (GBAS) for example the Australian GRAS and the joint US Coast Guard, Canadian Coast Guard, US Army Corps of Engineers and US Department of Transportation National Differential GPS (DGPS) service.
Regional scale GBAS such as CORS networks.
Local GBAS typified by a single GPS reference station operating Real Time Kinematic (RTK) corrections.
As many of the global GNSS systems (and augmentation systems) use similar frequencies and signals around L1, many "Multi-GNSS" receivers capable of using multiple systems have been produced. While some systems strive to interoperate with GPS as well as possible by providing the same clock, others do not.
History
Ground-based radio navigation is decades old. The DECCA, LORAN, GEE and Omega systems used terrestrial longwave radio transmitters which broadcast a radio pulse from a known "master" location, followed by a pulse repeated from a number of "slave" stations. The delay between the reception of the master signal and the slave signals allowed the receiver to deduce the distance to each of the slaves, providing a fix.
The first satellite navigation system was Transit, a system deployed by the US military in the 1960s. Transit's operation was based on the Doppler effect: the satellites travelled on well-known paths and broadcast their signals on a well-known radio frequency. The received frequency will differ slightly from the broadcast frequency because of the movement of the satellite with respect to the receiver. By monitoring this frequency shift over a short time interval, the receiver can determine its location to one side or the other of the satellite, and several such measurements combined with a precise knowledge of the satellite's orbit can fix a particular position. Satellite orbital position errors are caused by radio-wave refraction, gravity field changes (as the Earth's gravitational field is not uniform), and other phenomena. A team, led by Harold L Jury of Pan Am Aerospace Division in Florida from 1970 to 1973, found solutions and/or corrections for many error sources. Using real-time data and recursive estimation, the systematic and residual errors were narrowed down to accuracy sufficient for navigation.
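As a rough illustration of the Doppler principle Transit exploited, the Python sketch below shows the first-order relationship between range rate and received frequency; the transmit frequency and range rate are made-up illustrative values, not the historical system's parameters.

```python
# Toy illustration of the Doppler effect used by Transit (illustrative numbers only):
# the received frequency depends on the range rate between satellite and receiver,
# so the time history of the frequency shift constrains the receiver's position
# relative to the satellite's known orbit.
c = 299_792_458.0      # speed of light, m/s
f_tx = 400e6           # assumed transmit frequency, Hz
range_rate = -2_500.0  # assumed range rate, m/s (negative: satellite approaching)

f_rx = f_tx * (1 - range_rate / c)   # first-order Doppler approximation
doppler_shift = f_rx - f_tx          # about +3.3 kHz while approaching; the sign flips after closest approach
print(f"Doppler shift: {doppler_shift:.0f} Hz")
```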
Principles
Part of an orbiting satellite's broadcast includes its precise orbital data. Originally, the US Naval Observatory (USNO) continuously observed the precise orbits of these satellites. As a satellite's orbit deviated, the USNO sent the updated information to the satellite. Subsequent broadcasts from an updated satellite would contain its most recent ephemeris.
Modern systems are more direct. The satellite broadcasts a signal that contains orbital data (from which the position of the satellite can be calculated) and the precise time the signal was transmitted. Orbital data include a rough almanac for all satellites to aid in finding them, and a precise ephemeris for this satellite. The orbital ephemeris is transmitted in a data message that is superimposed on a code that serves as a timing reference. The satellite uses an atomic clock to maintain synchronization of all the satellites in the constellation. The receiver compares the time of broadcast encoded in the transmission of three (at sea level) or four (which allows an altitude calculation also) different satellites, measuring the time-of-flight to each satellite. Several such measurements can be made at the same time to different satellites, allowing a continual fix to be generated in real time using an adapted version of trilateration: see GNSS positioning calculation for details.
Each distance measurement, regardless of the system being used, places the receiver on a spherical shell centred on the broadcaster, at the measured distance from the broadcaster. By taking several such measurements and then looking for a point where the shells meet, a fix is generated. However, in the case of fast-moving receivers, the position of the receiver moves as signals are received from several satellites. In addition, the radio signals slow slightly as they pass through the ionosphere, and this slowing varies with the receiver's angle to the satellite, because that angle corresponds to the distance which the signal travels through the ionosphere. The basic computation thus attempts to find the shortest directed line tangent to four oblate spherical shells centred on four satellites. Satellite navigation receivers reduce errors by using combinations of signals from multiple satellites and multiple correlators, and then using techniques such as Kalman filtering to combine the noisy, partial, and constantly changing data into a single estimate for position, time, and velocity.
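A minimal sketch of the positioning computation described above follows; it assumes ideal, noise-free pseudoranges and a made-up satellite geometry, and solves for position and receiver clock bias by iterative least squares. It is illustrative only; real receivers add the ionospheric, relativistic, weighting, and filtering corrections discussed here.

```python
# Minimal pseudorange positioning sketch (illustrative, not any receiver's firmware):
# solve for receiver position and clock bias from four satellites by Gauss-Newton.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iterations=10):
    """sat_pos: (n, 3) satellite positions [m]; pseudoranges: (n,) measured ranges [m]."""
    x = np.zeros(3)   # initial position guess: Earth's centre
    b = 0.0           # receiver clock bias expressed in metres (c * dt)
    for _ in range(iterations):
        vec = sat_pos - x
        dist = np.linalg.norm(vec, axis=1)
        residual = pseudoranges - (dist + b)
        # Jacobian of the modelled pseudorange w.r.t. (x, b): [-unit vector, 1]
        J = np.hstack([-vec / dist[:, None], np.ones((len(dist), 1))])
        dx, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x, b = x + dx[:3], b + dx[3]
    return x, b / C   # position [m] and clock bias [s]

# Made-up geometry: four satellites roughly at GNSS orbital radius, receiver on the surface.
sats = np.array([[26.6e6, 0, 0], [0, 26.6e6, 0], [0, 0, 26.6e6], [15e6, 15e6, 15e6]], dtype=float)
truth = np.array([6.371e6, 0.0, 0.0])
true_bias = 1e-3   # 1 ms receiver clock error
rho = np.linalg.norm(sats - truth, axis=1) + C * true_bias
pos, bias = solve_position(sats, rho)   # recovers `truth` and `true_bias` closely
```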
When Einstein's theory of general relativity is applied to GPS time correction, the net result is that time on a GPS satellite clock advances faster than a clock on the ground by about 38 microseconds per day.
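A back-of-the-envelope check of that figure (rounded constants and a circular-orbit assumption; not an official GPS specification) combines the gravitational speed-up of the orbiting clock with its special-relativistic slow-down:

```python
# Rough estimate of the relativistic rate offset of a GPS clock (illustrative only).
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0        # speed of light, m/s
R_earth = 6.371e6        # mean Earth radius, m
r_orbit = 26.56e6        # approximate GPS orbital radius (semi-major axis), m

grav = (GM / c**2) * (1 / R_earth - 1 / r_orbit)   # clock higher in the gravity well runs faster
vel = -0.5 * (GM / r_orbit) / c**2                 # moving clock runs slower; v^2 = GM/r for a circular orbit

net_us_per_day = (grav + vel) * 86_400 * 1e6
# grav ~ +45.7 us/day, vel ~ -7.2 us/day, net ~ +38.5 us/day, close to the quoted 38 us/day
```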
Applications
The original motivation for satellite navigation was for military applications. Satellite navigation allows precision in the delivery of weapons to targets, greatly increasing their lethality whilst reducing inadvertent casualties from mis-directed weapons. (See Guided bomb). Satellite navigation also allows forces to be directed and to locate themselves more easily, reducing the fog of war.
Today, a global navigation satellite system such as Galileo is used to determine a user's location and the location of other people or objects at any given moment. The range of application of satellite navigation in the future is enormous, including both the public and private sectors across numerous market segments such as science, transport, agriculture, insurance, energy, etc.
The ability to supply satellite navigation signals is also the ability to deny their availability. The operator of a satellite navigation system potentially has the ability to degrade or eliminate satellite navigation services over any territory it desires.
Global navigation satellite systems
In order of first launch year:
GPS
First launch year: 1978
The United States' Global Positioning System (GPS) consists of up to 32 medium Earth orbit satellites in six different orbital planes. The exact number of satellites varies as older satellites are retired and replaced. Operational since 1978 and globally available since 1994, GPS is the world's most utilized satellite navigation system.
GLONASS
First launch year: 1982
The formerly Soviet, and now Russian, Global'naya Navigatsionnaya Sputnikovaya Sistema (GLObal NAvigation Satellite System or GLONASS) is a space-based satellite navigation system that provides a civilian radionavigation-satellite service and is also used by the Russian Aerospace Defence Forces. GLONASS has had full global coverage since 1995, with 24 active satellites.
BeiDou
First launch year: 2000
BeiDou started as the now-decommissioned Beidou-1, an Asia-Pacific local network on geostationary orbits. The second generation of the system, BeiDou-2, became operational in China in December 2011. The BeiDou-3 system is proposed to consist of 30 MEO satellites and five geostationary satellites (IGSO). A 16-satellite regional version (covering the Asia-Pacific area) was completed by December 2012. Global service was completed by December 2018. On 23 June 2020, deployment of the BDS-3 constellation was fully completed after the last satellite was successfully launched at the Xichang Satellite Launch Center.
Galileo
First launch year: 2011
The European Union and European Space Agency agreed in March 2002 to introduce their own alternative to GPS, called the Galileo positioning system. Galileo became operational on 15 December 2016 (global Early Operational Capability, EOC). At an estimated cost of €10 billion, the system of 30 MEO satellites was originally scheduled to be operational in 2010; this target was later moved to 2014. The first experimental satellite was launched on 28 December 2005. Galileo is expected to be compatible with the modernized GPS system, and receivers will be able to combine the signals from both Galileo and GPS satellites to greatly increase the accuracy. The full Galileo constellation consists of 24 active satellites, the last of which was launched in December 2021. The main modulation used in the Galileo Open Service signal is the Composite Binary Offset Carrier (CBOC) modulation.
Regional navigation satellite systems
NavIC
The NavIC (acronym for Navigation with Indian Constellation) is an autonomous regional satellite navigation system developed by the Indian Space Research Organisation (ISRO). The Indian government approved the project in May 2006. It consists of a constellation of 7 navigational satellites. Three of the satellites are placed in geostationary orbit (GEO) and the remaining 4 in geosynchronous orbit (GSO), giving a larger signal footprint with a lower number of satellites needed to map the region. It is intended to provide an all-weather absolute position accuracy of better than throughout India and within a region extending approximately around it. An Extended Service Area lies between the primary service area and a rectangular area enclosed by the 30th parallel south to the 50th parallel north and the 30th meridian east to the 130th meridian east, 1,500–6,000 km beyond borders. A goal of complete Indian control has been stated, with the space segment, ground segment and user receivers all being built in India.
The constellation was in orbit as of 2018, and the system was available for public use in early 2018. NavIC provides two levels of service: the "standard positioning service", which is open for civilian use, and a "restricted service" (an encrypted one) for authorized users (including the military). There are plans to expand the NavIC system by increasing the constellation size from 7 to 11 satellites.
India plans to make the NavIC global by adding 24 more MEO satellites. The Global NavIC will be free to use for the global public.
Early BeiDou
The first two generations of China's BeiDou navigation system were designed to provide regional coverage.
Augmentation
GNSS augmentation is a method of improving a navigation system's attributes, such as accuracy, reliability, and availability, through the integration of external information into the calculation process, for example, the Wide Area Augmentation System, the European Geostationary Navigation Overlay Service, the Multi-functional Satellite Augmentation System, Differential GPS, GPS-aided GEO augmented navigation (GAGAN) and inertial navigation systems.
QZSS
The Quasi-Zenith Satellite System (QZSS) is a four-satellite regional time transfer system and enhancement for GPS covering Japan and the Asia-Oceania regions. QZSS services were available on a trial basis as of January 12, 2018, and started in November 2018. The first satellite was launched in September 2010. A satellite navigation system independent of GPS, with seven satellites, is planned for 2023.
EGNOS
Comparison of systems
Using multiple GNSS systems for user positioning increases the number of visible satellites, improves precise point positioning (PPP) and shortens the average convergence time.
The signal-in-space ranging error (SISRE) in November 2019 was 1.6 cm for Galileo, 2.3 cm for GPS, 5.2 cm for GLONASS and 5.5 cm for BeiDou when using real-time corrections for satellite orbits and clocks. The average SISREs of the BDS-3 MEO, IGSO, and GEO satellites were 0.52 m, 0.90 m and 1.15 m, respectively. Compared to the four major global satellite navigation systems consisting of MEO satellites, the SISRE of the BDS-3 MEO satellites was slightly inferior to the 0.4 m of Galileo, slightly superior to the 0.59 m of GPS, and markedly superior to the 2.33 m of GLONASS. The SISRE of BDS-3 IGSO was 0.90 m, which was on par with the 0.92 m of QZSS IGSO. However, as the BDS-3 GEO satellites were newly launched and not completely functioning in orbit, their average SISRE was marginally worse than the 0.91 m of the QZSS GEO satellites.
Related techniques
DORIS
Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS) is a French precision navigation system. Unlike other GNSS systems, it is based on static emitting stations around the world, with the receivers on board satellites, in order to precisely determine their orbital position. The system may also be used for mobile receivers on land, with more limited usage and coverage. Used with traditional GNSS systems, it pushes the accuracy of positions to centimetric precision (and to millimetric precision for altimetric applications), and it also allows monitoring of very small seasonal changes in Earth's rotation and deformation, in order to build a much more precise geodetic reference system.
LEO satellites
The two current operational low Earth orbit (LEO) satellite phone networks are able to track transceiver units with an accuracy of a few kilometres using Doppler shift calculations from the satellite. The coordinates are sent back to the transceiver unit, where they can be read using AT commands or a graphical user interface. This can also be used by the gateway to enforce restrictions on geographically bound calling plans.
International regulation
The International Telecommunication Union (ITU) defines a radionavigation-satellite service (RNSS) as "a radiodetermination-satellite service used for the purpose of radionavigation. This service may also include feeder links necessary for its operation".
RNSS is regarded as a safety-of-life service and an essential part of navigation which must be protected from interference.
Aeronautical radionavigation-satellite service (ARNSS) is – according to Article 1.47 of the International Telecommunication Union's (ITU) Radio Regulations (RR) – defined as «A radionavigation-satellite service in which earth stations are located on board aircraft.»
Maritime radionavigation-satellite service (MRNSS) is – according to Article 1.45 of the International Telecommunication Union's (ITU) Radio Regulations (RR) – defined as «A radionavigation-satellite service in which earth stations are located on board ships.»
Classification
ITU Radio Regulations (article 1) classifies radiocommunication services as:
Radiodetermination service (article 1.40)
Radiodetermination-satellite service (article 1.41)
Radionavigation service (article 1.42)
Radionavigation-satellite service (article 1.43)
Maritime radionavigation service (article 1.44)
Maritime radionavigation-satellite service (article 1.45)
Aeronautical radionavigation service (article 1.46)
Aeronautical radionavigation-satellite service (article 1.47)
Examples of RNSS use
Augmentation systems (GNSS augmentation)
Automatic Dependent Surveillance–Broadcast
BeiDou Navigation Satellite System (BDS)
GALILEO, European GNSS
Global Positioning System (GPS), with Differential GPS (DGPS)
GLONASS
NAVIC
Quasi-Zenith Satellite System (QZSS)
Frequency allocation
The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012).
To improve harmonisation in spectrum utilisation, most service allocations are incorporated in national Tables of Frequency Allocations and Utilisations within the responsibility of the appropriate national administration. Allocations are:
primary: indicated by writing in capital letters
secondary: indicated by small letters
exclusive or shared utilization: within the responsibility of administrations.
| Technology | Navigation | null |
1515866 | https://en.wikipedia.org/wiki/Pye-dog | Pye-dog | Pye-dog, or sometimes pariah dog, is a term used to describe an ownerless, half-wild, free-ranging dog that lives in or close to human settlements throughout Asia. The term is derived from the Sanskrit para, which translates to "outsider".
The United Kennel Club uses the term pariah dog to classify various breeds in a sighthound and pariah group.
| Biology and health sciences | Dogs | Animals |
1515898 | https://en.wikipedia.org/wiki/Thermodynamic%20equations | Thermodynamic equations | Thermodynamics is expressed by a mathematical framework of thermodynamic equations which relate various thermodynamic quantities and physical properties measured in a laboratory or production process. Thermodynamics is based on a fundamental set of postulates, that became the laws of thermodynamics.
Introduction
One of the fundamental thermodynamic equations is the description of thermodynamic work in analogy to mechanical work, or weight lifted through an elevation against gravity, as defined in 1824 by French physicist Sadi Carnot. Carnot used the phrase motive power for work. In the footnotes to his famous On the Motive Power of Fire, he states: “We use here the expression motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.” With the inclusion of a unit of time in Carnot's definition, one arrives at the modern definition for power:
P = W/t, that is, the work W performed per unit time t.
During the latter half of the 19th century, physicists such as Rudolf Clausius, Peter Guthrie Tait, and Willard Gibbs worked to develop the concept of a thermodynamic system and the correlative energetic laws which govern its associated processes. The equilibrium state of a thermodynamic system is described by specifying its "state". The state of a thermodynamic system is specified by a number of extensive quantities, the most familiar of which are volume, internal energy, and the amount of each constituent particle (particle numbers). Extensive parameters are properties of the entire system, as contrasted with intensive parameters which can be defined at a single point, such as temperature and pressure. The extensive parameters (except entropy) are generally conserved in some way as long as the system is "insulated" to changes to that parameter from the outside. The truth of this statement for volume is trivial, for particles one might say that the total particle number of each atomic element is conserved. In the case of energy, the statement of the conservation of energy is known as the first law of thermodynamics.
A thermodynamic system is in equilibrium when it is no longer changing in time. This may happen in a very short time, or it may happen with glacial slowness. A thermodynamic system may be composed of many subsystems which may or may not be "insulated" from each other with respect to the various extensive quantities. If we have a thermodynamic system in equilibrium in which we relax some of its constraints, it will move to a new equilibrium state. The thermodynamic parameters may now be thought of as variables and the state may be thought of as a particular point in a space of thermodynamic parameters. The change in the state of the system can be seen as a path in this state space. This change is called a thermodynamic process. Thermodynamic equations are now used to express the relationships between the state parameters at these different equilibrium states.
The concept which governs the path that a thermodynamic system traces in state space as it goes from one equilibrium state to another is that of entropy. The entropy is first viewed as an extensive function of all of the extensive thermodynamic parameters. If we have a thermodynamic system in equilibrium, and we release some of the extensive constraints on the system, there are many equilibrium states that it could move to consistent with the conservation of energy, volume, etc. The second law of thermodynamics specifies that the equilibrium state that it moves to is in fact the one with the greatest entropy. Once we know the entropy as a function of the extensive variables of the system, we will be able to predict the final equilibrium state.
Notation
Some of the most common thermodynamic quantities are:
The conjugate variable pairs are the fundamental state variables used to formulate the thermodynamic functions.
The most important thermodynamic potentials are the following functions:
Thermodynamic systems are typically affected by the following types of system interactions. The types under consideration are used to classify systems as open systems, closed systems, and isolated systems.
Common material properties determined from the thermodynamic functions are the following:
The following are constants that occur in many relationships due to the application of a standard system of units.
Laws of thermodynamics
The behavior of a thermodynamic system is summarized in the laws of thermodynamics, which concisely are:
Zeroth law of thermodynamics
If A, B, C are thermodynamic systems such that A is in thermal equilibrium with B and B is in thermal equilibrium with C, then A is in thermal equilibrium with C.
The zeroth law is of importance in thermometry, because it implies the existence of temperature scales. In practice, C is a thermometer, and the zeroth law says that systems that are in thermodynamic equilibrium with each other have the same temperature. The law was actually the last of the laws to be formulated.
First law of thermodynamics
dU = δQ − δW, where dU is the infinitesimal increase in internal energy of the system, δQ is the infinitesimal heat flow into the system, and δW is the infinitesimal work done by the system.
The first law is the law of conservation of energy. The symbol δ, instead of the plain d, originated in the work of German mathematician Carl Gottfried Neumann and is used to denote an inexact differential and to indicate that Q and W are path-dependent (i.e., they are not state functions). In some fields such as physical chemistry, positive work is conventionally considered work done on the system rather than by the system, and the law is expressed as dU = δQ + δW.
Second law of thermodynamics
The entropy of an isolated system never decreases: dS ≥ 0 for an isolated system.
A concept related to the second law which is important in thermodynamics is that of reversibility. A process within a given isolated system is said to be reversible if throughout the process the entropy never increases (i.e. the entropy remains unchanged).
Third law of thermodynamics
S → 0 when T → 0
The third law of thermodynamics states that at the absolute zero of temperature, the entropy is zero for a perfect crystalline structure.
Onsager reciprocal relations – sometimes called the Fourth law of thermodynamics
The fourth law of thermodynamics is not yet an agreed upon law (many supposed variations exist); historically, however, the Onsager reciprocal relations have been frequently referred to as the fourth law.
The fundamental equation
The first and second law of thermodynamics are the most fundamental equations of thermodynamics. They may be combined into what is known as fundamental thermodynamic relation which describes all of the changes of thermodynamic state functions of a system of uniform temperature and pressure. As a simple example, consider a system composed of a number of k different types of particles and has the volume as its only external variable. The fundamental thermodynamic relation may then be expressed in terms of the internal energy as:
dU = T dS − p dV + Σi μi dNi
Some important aspects of this equation should be noted:
The thermodynamic space has k+2 dimensions
The differential quantities (U, S, V, Ni) are all extensive quantities. The coefficients of the differential quantities are intensive quantities (temperature, pressure, chemical potential). Each pair in the equation is known as a conjugate pair with respect to the internal energy. The intensive variables may be viewed as a generalized "force". An imbalance in the intensive variable will cause a "flow" of the extensive variable in a direction to counter the imbalance.
The equation may be seen as a particular case of the chain rule. In other words:
dU = (∂U/∂S)V,Ni dS + (∂U/∂V)S,Ni dV + Σi (∂U/∂Ni)S,V,Nj≠i dNi
from which the following identifications can be made:
T = (∂U/∂S)V,Ni , −p = (∂U/∂V)S,Ni , μi = (∂U/∂Ni)S,V,Nj≠i
These equations are known as "equations of state" with respect to the internal energy. (Note - the relation between pressure, volume, temperature, and particle number which is commonly called "the equation of state" is just one of many possible equations of state.) If we know all k+2 of the above equations of state, we may reconstitute the fundamental equation and recover all thermodynamic properties of the system.
The fundamental equation can be solved for any other differential and similar expressions can be found. For example, we may solve for dS and find that
dS = (1/T) dU + (p/T) dV − Σi (μi/T) dNi
Thermodynamic potentials
By the principle of minimum energy, the second law can be restated by saying that for a fixed entropy, when the constraints on the system are relaxed, the internal energy assumes a minimum value. This will require that the system be connected to its surroundings, since otherwise the energy would remain constant.
By the principle of minimum energy, there are a number of other state functions which may be defined which have the dimensions of energy and which are minimized according to the second law under certain conditions other than constant entropy. These are called thermodynamic potentials. For each such potential, the relevant fundamental equation results from the same Second-Law principle that gives rise to energy minimization under restricted conditions: that the total entropy of the system and its environment is maximized in equilibrium. The intensive parameters give the derivatives of the environment entropy with respect to the extensive properties of the system.
The four most common thermodynamic potentials are:
Internal energy U (natural variables S, V, Ni)
Helmholtz free energy F = U − TS (natural variables T, V, Ni)
Enthalpy H = U + pV (natural variables S, p, Ni)
Gibbs free energy G = U + pV − TS (natural variables T, p, Ni)
After each potential is shown its "natural variables". These variables are important because if the thermodynamic potential is expressed in terms of its natural variables, then it will contain all of the thermodynamic relationships necessary to derive any other relationship. In other words, it too will be a fundamental equation. For the above four potentials, the fundamental equations are expressed as:
dU = T dS − p dV + Σi μi dNi
dF = −S dT − p dV + Σi μi dNi
dH = T dS + V dp + Σi μi dNi
dG = −S dT + V dp + Σi μi dNi
The thermodynamic square can be used as a tool to recall and derive these potentials.
First order equations
Just as with the internal energy version of the fundamental equation, the chain rule can be used on the above equations to find k+2 equations of state with respect to the particular potential. If Φ is a thermodynamic potential, then the fundamental equation may be expressed as:
dΦ = Σi (∂Φ/∂Xi) dXi
where the Xi are the natural variables of the potential. If γi = (∂Φ/∂Xi) is conjugate to Xi, then we have the equations of state for that potential, one for each set of conjugate variables.
Only one equation of state will not be sufficient to reconstitute the fundamental equation. All equations of state will be needed to fully characterize the thermodynamic system. Note that what is commonly called "the equation of state" is just the "mechanical" equation of state involving the Helmholtz potential and the volume:
p = −(∂F/∂V)T,Ni
For an ideal gas, this becomes the familiar PV=NkBT.
Euler integrals
Because all of the natural variables of the internal energy U are extensive quantities, it follows from Euler's homogeneous function theorem that
U = TS − pV + Σi μi Ni
Substituting into the expressions for the other main potentials, we have the following expressions for the thermodynamic potentials:
F = −pV + Σi μi Ni
H = TS + Σi μi Ni
G = Σi μi Ni
Note that the Euler integrals are sometimes also referred to as fundamental equations.
Gibbs–Duhem relationship
Differentiating the Euler equation for the internal energy and combining with the fundamental equation for internal energy, it follows that:
0 = S dT − V dp + Σi Ni dμi
which is known as the Gibbs-Duhem relationship. The Gibbs-Duhem relation is a relationship among the intensive parameters of the system. It follows that for a simple system with r components, there will be r+1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume. The law is named after Willard Gibbs and Pierre Duhem.
Second order equations
There are many relationships that follow mathematically from the above basic equations. See Exact differential for a list of mathematical relationships. Many equations are expressed as second derivatives of the thermodynamic potentials (see Bridgman equations).
Maxwell relations
Maxwell relations are equalities involving the second derivatives of thermodynamic potentials with respect to their natural variables. They follow directly from the fact that the order of differentiation does not matter when taking the second derivative. The four most common Maxwell relations are:
(∂T/∂V)S = −(∂p/∂S)V
(∂T/∂p)S = (∂V/∂S)p
(∂S/∂V)T = (∂p/∂T)V
(∂S/∂p)T = −(∂V/∂T)p
The thermodynamic square can be used as a tool to recall and derive these relations.
Material properties
Second derivatives of thermodynamic potentials generally describe the response of the system to small changes. The number of second derivatives which are independent of each other is relatively small, which means that most material properties can be described in terms of just a few "standard" properties. For the case of a single component system, there are three properties generally considered "standard" from which all others may be derived:
Compressibility at constant temperature or constant entropy
Specific heat (per-particle) at constant pressure or constant volume
Coefficient of thermal expansion
These properties are seen to be the three possible second derivatives of the Gibbs free energy with respect to temperature and pressure.
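As an illustration of this, the sketch below symbolically differentiates an assumed ideal-gas form of the Gibbs free energy; the particular expression for G, the reference constants T0 and p0, and the monatomic factor 5/2 are illustrative assumptions rather than a general material model.

```python
# Sketch: recovering the three "standard" properties from derivatives of an
# assumed ideal-gas Gibbs free energy G(T, p). Illustrative only.
import sympy as sp

T, p, N, k, T0, p0 = sp.symbols('T p N k T0 p0', positive=True)

G = N*k*T*sp.log(p/p0) - sp.Rational(5, 2)*N*k*T*(sp.log(T/T0) - 1)

V = sp.diff(G, p)                          # volume: (dG/dp)_T  ->  N*k*T/p, the ideal gas law
S = -sp.diff(G, T)                         # entropy: -(dG/dT)_p

alpha   = sp.simplify(sp.diff(V, T) / V)   # coefficient of thermal expansion    -> 1/T
kappa_T = sp.simplify(-sp.diff(V, p) / V)  # isothermal compressibility          -> 1/p
C_p     = sp.simplify(T * sp.diff(S, T))   # heat capacity at constant pressure  -> 5*N*k/2
```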
Thermodynamic property relations
Properties such as pressure, volume, temperature, unit cell volume, bulk modulus and mass are easily measured. Other properties are measured through simple relations, such as density, specific volume, specific weight. Properties such as internal energy, entropy, enthalpy, and heat transfer are not so easily measured or determined through simple relations. Thus, we use more complex relations such as Maxwell relations, the Clapeyron equation, and the Mayer relation.
Maxwell relations in thermodynamics are critical because they provide a means of simply measuring the change in properties of pressure, temperature, and specific volume, to determine a change in entropy. Entropy cannot be measured directly. The change in entropy with respect to pressure at a constant temperature is the same as the negative change in specific volume with respect to temperature at a constant pressure, for a simple compressible system. Maxwell relations in thermodynamics are often used to derive thermodynamic relations.
The Clapeyron equation allows us to use pressure, temperature, and specific volume to determine an enthalpy change that is connected to a phase change. It is significant to any phase change process that happens at a constant pressure and temperature. One of the quantities it yields is the enthalpy of vaporization at a given temperature, obtained by measuring the slope of the saturation curve on a pressure vs. temperature graph. It also allows us to determine the specific volume of a saturated vapor and liquid at that temperature. In the equation below, L represents the specific latent heat, T represents temperature, and Δv represents the change in specific volume.
dp/dT = L / (T Δv)
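A quick numerical illustration of the relation, using rounded textbook-style values for water near 100 °C (the numbers are approximate and only serve as an example):

```python
# Estimating the latent heat of vaporisation of water from the Clapeyron relation
# L = T * dv * (dp/dT), with rounded illustrative values near 100 degrees C.
T = 373.15        # temperature, K
dp_dT = 3.6e3     # approximate slope of the saturation curve, Pa/K
dv = 1.67         # change in specific volume on vaporisation, m^3/kg

L = T * dv * dp_dT   # ~ 2.24e6 J/kg, close to the tabulated latent heat of water
print(f"L ≈ {L:.3g} J/kg")
```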
The Mayer relation states that the specific heat capacity of a gas at constant volume is slightly less than at constant pressure. This relation was built on the reasoning that energy must be supplied to raise the temperature of the gas and for the gas to do work in a volume changing case. According to this relation, the difference between the specific heat capacities is the same as the universal gas constant. This relation is represented by the difference between Cp and Cv:
Cp – Cv = R
| Physical sciences | Thermodynamics | Physics |
7131739 | https://en.wikipedia.org/wiki/Tapejaridae | Tapejaridae | Tapejaridae (from a Tupi word meaning "the lord of the ways") is a family of azhdarchoid pterosaurs from the Cretaceous period. Members are currently known from Brazil, England, Hungary, Morocco, Spain, the United States, and China. The most primitive genera were found in China, indicating that the family has an Asian origin.
Description
Tapejarids were small to medium-sized pterosaurs with several unique, shared characteristics, mainly relating to the skull. Most tapejarids possessed a bony crest arising from the snout (formed mostly by the premaxillary bones of the upper jaw tip). In some species, this bony crest is known to have supported an even larger crest of softer, fibrous tissue that extends back along the skull. Tapejarids are also characterized by their large nasoantorbital fenestra, the main opening in the skull in front of the eyes, which spans at least half the length of the entire skull in this family. Their eye sockets were small and pear-shaped. Studies of tapejarid brain cases show that they had extremely good vision, more so than in other pterosaur groups, and probably relied nearly exclusively on vision when hunting or interacting with other members of their species. Tapejarids had unusually reduced shoulder girdles that would have been slung low on the torso, resulting in wings that protruded from near the belly rather than near the back, a "bottom decker" arrangement reminiscent of some planes.
Biology
Tapejarids appear to have been arboreal, having more curved claws than other azhdarchoid pterosaurs and occurring more commonly in fossil sites with other arboreal flying vertebrates such as early birds. Tapejarids have long been speculated as having been frugivores or omnivores, based on their parrot-like beaks. Direct evidence for plant-eating is known in a specimen of Sinopterus that preserves seeds in the abdominal cavity. The Barremian-Aptian distribution of some tapejarids may even be partially associated with the first radiation phase of the angiosperms, especially of the genus Klitzschophyllites, which represents a more basal angiosperm.
Classification
Tapejaridae was named and defined by Brazilian paleontologist Alexander Kellner in 1989 as the clade containing both Tapejara and Tupuxuara, plus all descendants of their most recent common ancestor. In 2007, Kellner divided the family into two subfamilies: Tapejarinae, consisting of Tapejara and its close relatives, and Thalassodrominae, consisting of Thalassodromeus and Tupuxuara. A 2011 study subsumed the family Chaoyangopteridae into Tapejaridae as the subfamily Chaoyangopterinae, something not followed by later authors. Kellner's concept of a Tapejaridae consisting of Tapejarinae and Thalassodrominae would be the basis for numerous subsequent phylogenetic analyses.
Various opposing studies have arisen challenging Kellner's concept of Tapejaridae. The 2003 model of paleontologist David Unwin found Tupuxuara and Thalassodromeus to be more distantly related to Tapejara and therefore outside of Tapejaridae, instead being related to Azhdarchidae. Later, in 2006, British paleontologists David Martill and Darren Naish followed Unwin's concept, and a revised definition for Tapejaridae was also proposed: the clade containing all species more closely related to Tapejara than to Quetzalcoatlus. A 2008 study by Lü Junchang and colleagues also corroborated this model, and used the term "Tupuxuaridae" to include both genera. In 2009, British paleontologist Mark Witton also agreed with the Unwin model. However, he noted that the term Thalassodrominae was created before Tupuxuaridae, meaning it had naming priority. He elevated Thalassodrominae to family level, thus creating the denomination Thalassodromidae.
Regarding the core tapejarid clade, American paleontologist Brian Andres and colleagues formally defined Tapejaridae as the clade containing Tapejara and Sinopterus in 2014. They also re-defined the subfamily Tapejarinae as all species closer to Tapejara than to Sinopterus, and added a new clade, Tapejarini, to include all descendants of the last common ancestor of Tapejara and Tupandactylus. In 2020, in the description of the genus Wightia, an opposing subfamily was named, Sinopterinae, consisting of tapejarids more closely related to Sinopterus than Tapejara. These studies follow the Unwin model, opposing Kellner's model of Tapejaridae while corroborating a close relationship between thalassodromids and azhdarchids, rather than with tapejarids.
In 2023, paleontologist Rodrigo Pêgas and colleagues argued that despite the disagreements about the position of Thalassodromeus and its relatives, the species in question were consistently related. Therefore, they favored the term Thalassodromidae to have consistency with other studies that used the same name, despite finding them to form a natural grouping with Tapejaridae in their phylogenetic analysis (per the Kellner model). Thus, Thalassodromidae and Tapejaridae would be separate families within Tapejaromorpha. In their 2023 study, Pêgas and colleagues redefined Tapejaridae to be the most recent common ancestor of Sinopterus, Tapejara, and Caupedactylus in order to preserve the scope of the family in light of finding Caupedactylus, traditionally a tapejarine, outside of the Andres definition of Tapejaridae. They divided this redefined Tapejaridae into the groups Eutapejaria, containing the subfamilies Sinopterinae and Tapejarinae, and Caupedactylia, containing the pterosaurs Caupedactylus and Aymberedactylus. In 2024, Pêgas rejected this redefinition of Tapejaridae in light of non-compliance with PhyloCode rules, applying the Tapejara and Sinopterus definition and deeming Eutapejaria a synonym. Instead, he created a larger group to contain Tapejaridae and Caupedactylia, removing Caupedactylus and Aymberedactylus from the family itself.
The cladogram below shows the phylogenetic analysis conducted by paleontologist Gabriela Cerqueira and colleagues in 2021, which uses Kellner's nomenclature of Tapejaridae.
Below are two cladograms representing different concepts of Tapejaridae. The first one shows the phylogenetic analysis conducted by Andres in 2021, in which Tapejaridae consists of the subfamilies Tapejarinae and Sinopterinae. He found the pterosaurs Lacusovagus and Keresdrakon as tapejarines, an arrangement that had never been recovered in previous analyses. Regarding the interrelationships of Tapejaridae, Andres follows Unwin's concept. The second cladogram shows the phylogenetic analysis conducted by Pêgas in 2024. He also found Tapejaridae to consist of both Tapejarinae and Sinopterinae, but differed from Andres in recovering the tapejarid Bakonydraco as a sinopterine instead of tapejarine. He created the new subtribe Caiuajarina within Tapejarini to include Caiuajara and Torukjara. Additionally, his analysis further differs from that of Andres in finding both Tapejaridae and Thalassodromidae within Tapejaromorpha, which corroborates the close relationship between thalassodromids and tapejarids, similar to Kellner.
Topology 1: Andres (2021).
Topology 2: Pêgas (2024).
Subclades
Summary of the phylogenetic definitons of tapejarid subclades as discussed in the classification section.
| Biology and health sciences | Pterosaurs | Animals |
5464288 | https://en.wikipedia.org/wiki/Telluric%20contamination | Telluric contamination | Telluric contamination is contamination of the astronomical spectra by the Earth's atmosphere.
Interference with astronomical observations
Most astronomical observations are conducted by measuring photons (electromagnetic waves) which originate beyond the sky. The molecules in the Earth's atmosphere, however, absorb and emit their own light, especially in the visible and near-IR portion of the spectrum, and any ground-based observation is subject to contamination from these telluric (earth-originating) sources. Water vapor and oxygen are two of the more important molecules in telluric contamination. Contamination by water vapor was particularly pronounced in the Mount Wilson solar Doppler measurements.
Many scientific telescopes have spectrographs, which measure photons as a function of wavelength or frequency, with typical resolution on the order of a nanometer of visible light. Spectroscopic observations can be used in myriad contexts, including measuring the chemical composition and physical properties of astronomical objects as well as measuring object velocities from the Doppler shift of spectral lines. Unless they are corrected for, telluric contamination can produce errors or reduce precision in such data.
Telluric contamination can also be important for photometric measurements.
Telluric correction
It is possible to correct for the effects of telluric contamination in an astronomical spectrum. This is done by preparing a telluric correction function, made by dividing a model spectrum of a star by an observation of an astronomical photometric standard star. This function can then be multiplied by an astronomical observation at each wavelength point.
While this method can restore the original shape of the spectrum, the regions affected can be prone to high levels of noise due to the low number of counts in that area of the spectrum.
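The procedure above can be sketched in a few lines of array arithmetic; the array names and the synthetic spectra below are placeholders, and a real pipeline would also handle wavelength resampling, airmass scaling, and resolution matching.

```python
# Schematic telluric correction (synthetic toy spectra, not real data).
import numpy as np

wavelength = np.linspace(600.0, 900.0, 3000)                            # nm, common grid
telluric = 1.0 - 0.8 * np.exp(-0.5 * ((wavelength - 760.0) / 2.0)**2)   # fake absorption band near 760 nm

model_std    = np.ones_like(wavelength)      # model spectrum of the standard star (no atmosphere)
observed_std = model_std * telluric          # standard star as observed through the atmosphere
science_obs  = (1.0 + 0.1*np.sin(wavelength / 15.0)) * telluric         # science target, also absorbed

correction = model_std / observed_std        # telluric correction function
science_corrected = science_obs * correction # multiply the observation at each wavelength point

# Where the band is nearly black (observed_std -> 0) the correction blows up, which is
# why the corrected spectrum is noisy in those regions, as noted above.
```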
| Physical sciences | Basics | Astronomy |
5465118 | https://en.wikipedia.org/wiki/K%C5%91nig%27s%20theorem%20%28graph%20theory%29 | Kőnig's theorem (graph theory) | In the mathematical area of graph theory, Kőnig's theorem, proved by Dénes Kőnig in 1931, describes an equivalence between the maximum matching problem and the minimum vertex cover problem in bipartite graphs. It was discovered independently, also in 1931, by Jenő Egerváry in the more general case of weighted graphs.
Setting
A vertex cover in a graph is a set of vertices that includes at least one endpoint of every edge, and a vertex cover is minimum if no other vertex cover has fewer vertices. A matching in a graph is a set of edges no two of which share an endpoint, and a matching is maximum if no other matching has more edges.
It is obvious from the definition that any vertex-cover set must be at least as large as any matching set (since for every edge in the matching, at least one vertex is needed in the cover). In particular, the minimum vertex cover set is at least as large as the maximum matching set. Kőnig's theorem states that, in any bipartite graph, the minimum vertex cover set and the maximum matching set have in fact the same size.
Statement of the theorem
In any bipartite graph, the number of edges in a maximum matching equals the number of vertices in a minimum vertex cover.
Example
The bipartite graph shown in the above illustration has 14 vertices; a matching with six edges is shown in blue, and a vertex cover with six vertices is shown in red. There can be no smaller vertex cover, because any vertex cover has to include at least one endpoint of each matched edge (as well as of every other edge), so this is a minimum vertex cover. Similarly, there can be no larger matching, because any matched edge has to include at least one endpoint in the vertex cover, so this is a maximum matching. Kőnig's theorem states that the equality between the sizes of the matching and the cover (in this example, both numbers are six) applies more generally to any bipartite graph.
Proofs
Constructive proof
The following proof provides a way of constructing a minimum vertex cover from a maximum matching. Let G = (V, E) be a bipartite graph and let A and B be the two parts of the vertex set V. Suppose that M is a maximum matching for G.
Construct the flow network G′ derived from G in such a way that there are edges of capacity 1 from the source s to every vertex a in A and from every vertex b in B to the sink t, and an edge of infinite capacity from a to b for every edge (a, b) of G.
The size of the maximum matching in G is the size of a maximum flow in G′, which, in turn, is the size of a minimum cut in the network G′, as follows from the max-flow min-cut theorem.
Let (S, T) be a minimum cut, with s in S and t in T. Let A = As ∪ At and B = Bs ∪ Bt, where As = A ∩ S, At = A ∩ T, Bs = B ∩ S and Bt = B ∩ T. Then the minimum cut is composed only of edges going from s to At or from Bs to t, as any edge from As to Bt would make the size of the cut infinite.
Therefore, the size of the minimum cut is equal to |At| + |Bs|. On the other hand, At ∪ Bs is a vertex cover, as any edge that is not incident to vertices from At and Bs must be incident to a pair of vertices from As and Bt, which would contradict the fact that there are no edges between As and Bt.
Thus, At ∪ Bs is a minimum vertex cover of G.
Constructive proof without flow concepts
No vertex in a vertex cover can cover more than one edge of M (because the edge half-overlap would prevent M from being a matching in the first place), so if a vertex cover with |M| vertices can be constructed, it must be a minimum cover.
To construct such a cover, write the bipartition of the vertex set as L ∪ R (the left and right sides), let U be the set of unmatched vertices in L (possibly empty), and let Z be the set of vertices that are either in U or are connected to U by alternating paths (paths that alternate between edges that are in the matching and edges that are not in the matching). Let K = (L ∖ Z) ∪ (R ∩ Z).
Every edge in the graph either belongs to an alternating path (and has a right endpoint in K), or it has a left endpoint in K. For, if an edge e is matched but not in an alternating path, then its left endpoint cannot be in an alternating path (because two matched edges can not share a vertex) and thus belongs to L ∖ Z ⊆ K. Alternatively, if e is unmatched but not in an alternating path, then its left endpoint cannot be in an alternating path, for such a path could be extended by adding e to it. Thus, K forms a vertex cover.
Additionally, every vertex in K is an endpoint of a matched edge.
For, every vertex in L ∖ Z is matched, because Z is a superset of U, the set of unmatched left vertices.
And every vertex in R ∩ Z must also be matched, for if there existed an alternating path to an unmatched vertex then changing the matching by removing the matched edges from this path and adding the unmatched edges in their place would increase the size of the matching. However, no matched edge can have both of its endpoints in K. Thus, K is a vertex cover of cardinality equal to |M|, and must be a minimum vertex cover.
Proof using linear programming duality
To explain this proof, we first have to extend the notion of a matching to that of a fractional matching - an assignment of a weight in [0,1] to each edge, such that the sum of weights near each vertex is at most 1 (an integral matching is a special case of a fractional matching in which the weights are in {0,1}). Similarly we define a fractional vertex-cover - an assignment of a non-negative weight to each vertex, such that the sum of weights in each edge is at least 1 (an integral vertex-cover is a special case of a fractional vertex-cover in which the weights are in {0,1}).
The maximum fractional matching size in a graph is the solution of the following linear program:
Maximize 1E · x
Subject to: x ≥ 0E
and AG · x ≤ 1V,
where x is a vector of size |E| in which each element represents the weight of an edge in the fractional matching. 1E is a vector of |E| ones, so the first line indicates the size of the matching. 0E is a vector of |E| zeros, so the second line indicates the constraint that the weights are non-negative. 1V is a vector of |V| ones and AG is the incidence matrix of G, so the third line indicates the constraint that the sum of weights near each vertex is at most 1.
Similarly, the minimum fractional vertex-cover size in G is the solution of the following LP:
Minimize 1V · y
Subject to: y ≥ 0V
and AGT · y ≥ 1E,
where y is a vector of size |V| in which each element represents the weight of a vertex in the fractional cover. Here, the first line is the size of the cover, the second line represents the non-negativity of the weights, and the third line represents the requirement that the sum of weights near each edge must be at least 1.
Now, the minimum fractional cover LP is exactly the dual linear program of the maximum fractional matching LP. Therefore, by the LP duality theorem, both programs have the same solution. This fact is true not only in bipartite graphs but in arbitrary graphs:
In any graph, the largest size of a fractional matching equals the smallest size of a fractional vertex cover.
What makes bipartite graphs special is that, in bipartite graphs, both these linear programs have optimal solutions in which all variable values are integers. This follows from the fact that in the fractional matching polytope of a bipartite graph, all extreme points have only integer coordinates, and the same is true for the fractional vertex-cover polytope. Therefore the above theorem implies:
In any bipartite graph, the largest size of a matching equals the smallest size of a vertex cover.
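This LP duality argument can be checked numerically on a small instance; the sketch below uses scipy on an assumed toy bipartite graph (vertices {0, 1} on one side and {2, 3} on the other, with edges (0,2), (0,3) and (1,3)) and is purely illustrative.

```python
# Numerical check of the fractional matching / fractional vertex cover duality
# on a toy bipartite graph (illustrative only).
import numpy as np
from scipy.optimize import linprog

# Vertex-edge incidence matrix A_G: rows = vertices 0..3, columns = edges (0,2), (0,3), (1,3).
A = np.array([[1, 1, 0],
              [0, 0, 1],
              [1, 0, 0],
              [0, 1, 1]])

# Maximum fractional matching: maximize 1.x subject to A x <= 1, x >= 0 (linprog minimizes).
matching = linprog(c=-np.ones(3), A_ub=A, b_ub=np.ones(4), bounds=(0, None))

# Minimum fractional vertex cover (the dual): minimize 1.y subject to A^T y >= 1, y >= 0.
cover = linprog(c=np.ones(4), A_ub=-A.T, b_ub=-np.ones(3), bounds=(0, None))

# Both optima equal 2; because the graph is bipartite, integral optima also exist,
# e.g. the matching {(0,2), (1,3)} and the vertex cover {0, 3}.
assert np.isclose(-matching.fun, cover.fun)
```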
Algorithm
The constructive proof described above provides an algorithm for producing a minimum vertex cover given a maximum matching. Thus, the Hopcroft–Karp algorithm for finding maximum matchings in bipartite graphs may also be used to solve the vertex cover problem efficiently in these graphs.
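A compact sketch of this matching-to-cover conversion is shown below; the function and variable names are made up, and for brevity it finds a maximum matching with a simple augmenting-path search rather than the faster Hopcroft–Karp algorithm.

```python
# Kőnig's construction as runnable, illustrative code (not a library API): compute a
# maximum matching, then derive a minimum vertex cover K = (L \ Z) u (R n Z).
def max_matching(adj, L, R):
    """adj maps each left vertex to its right-side neighbours; returns matching dicts."""
    match_l = {l: None for l in L}
    match_r = {r: None for r in R}

    def try_augment(l, visited):
        for r in adj[l]:
            if r not in visited:
                visited.add(r)
                if match_r[r] is None or try_augment(match_r[r], visited):
                    match_l[l], match_r[r] = r, l
                    return True
        return False

    for l in L:
        try_augment(l, set())
    return match_l, match_r

def min_vertex_cover(adj, L, R):
    match_l, match_r = max_matching(adj, L, R)
    # Z: vertices reachable from unmatched left vertices along alternating paths.
    Z = {l for l in L if match_l[l] is None}
    frontier = list(Z)
    while frontier:
        l = frontier.pop()
        for r in adj[l]:
            if r not in Z:
                Z.add(r)
                m = match_r[r]          # follow the matched edge back to the left side
                if m is not None and m not in Z:
                    Z.add(m)
                    frontier.append(m)
    return (set(L) - Z) | (set(R) & Z), match_l

# Toy usage: the cover size equals the matching size, as Kőnig's theorem guarantees.
adj = {1: ['a', 'b'], 2: ['a'], 3: ['b', 'c']}
cover, matching = min_vertex_cover(adj, L=[1, 2, 3], R=['a', 'b', 'c'])
```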
Despite the equivalence of the two problems from the point of view of exact solutions, they are not equivalent for approximation algorithms. Bipartite maximum matchings can be approximated arbitrarily accurately in constant time by distributed algorithms; in contrast, approximating the minimum vertex cover of a bipartite graph requires at least logarithmic time.
Example
In the graph shown in the introduction take L to be the set of vertices in the bottom layer of the diagram and R to be the set of vertices in the top layer of the diagram. From left to right label the vertices in the bottom layer with the numbers 1, …, 7 and label the vertices in the top layer with the numbers 8, …, 14. The set U of unmatched vertices from L is {1}. The alternating paths starting from U are 1–10–3–13–7, 1–10–3–11–5–13–7, 1–11–5–13–7, 1–11–5–10–3–13–7, and all subpaths of these starting from 1. The set Z is therefore {1,3,5,7,10,11,13}, resulting in K = (L ∖ Z) ∪ (R ∩ Z) = {2,4,6} ∪ {10,11,13}, and the minimum vertex cover K = {2,4,6,10,11,13}.
Non-bipartite graphs
For graphs that are not bipartite, the minimum vertex cover may be larger than the maximum matching. Moreover, the two problems are very different in complexity: maximum matchings can be found in polynomial time for any graph, while minimum vertex cover is NP-complete.
The complement of a vertex cover in any graph is an independent set, so a minimum vertex cover is complementary to a maximum independent set; finding maximum independent sets is another NP-complete problem. The equivalence between matching and covering articulated in Kőnig's theorem allows minimum vertex covers and maximum independent sets to be computed in polynomial time for bipartite graphs, despite the NP-completeness of these problems for more general graph families.
History
Kőnig's theorem is named after the Hungarian mathematician Dénes Kőnig. Kőnig had announced in 1914 and published in 1916 the results that every regular bipartite graph has a perfect matching, and more generally that the chromatic index of any bipartite graph (that is, the minimum number of matchings into which it can be partitioned) equals its maximum degree – the latter statement is known as Kőnig's line coloring theorem. However, later authors attribute Kőnig's theorem itself to a later paper of Kőnig (1931).
According to later accounts, Kőnig attributed the idea of studying matchings in bipartite graphs to his father, mathematician Gyula Kőnig. In Hungarian, Kőnig's name has a double acute accent, but his theorem is sometimes spelled (incorrectly) in German characters, with an umlaut.
Related theorems
Kőnig's theorem is equivalent to many other min-max theorems in graph theory and combinatorics, such as Hall's marriage theorem and Dilworth's theorem. Since bipartite matching is a special case of maximum flow, the theorem also results from the max-flow min-cut theorem.
Connections with perfect graphs
A graph is said to be perfect if, in every induced subgraph, the chromatic number equals the size of the largest clique. Any bipartite graph is perfect, because each of its subgraphs is either bipartite or independent; in a bipartite graph that is not independent the chromatic number and the size of the largest clique are both two while in an independent set the chromatic number and clique number are both one.
A graph is perfect if and only if its complement is perfect, and Kőnig's theorem can be seen as equivalent to the statement that the complement of a bipartite graph is perfect. For, each color class in a coloring of the complement of a bipartite graph is of size at most 2 and the classes of size 2 form a matching, a clique in the complement of a graph G is an independent set in G, and as we have already described an independent set in a bipartite graph G is a complement of a vertex cover in G. Thus, any matching M in a bipartite graph G with n vertices corresponds to a coloring of the complement of G with n-|M| colors, which by the perfection of complements of bipartite graphs corresponds to an independent set in G with n-|M| vertices, which corresponds to a vertex cover of G with |M| vertices. Conversely, Kőnig's theorem proves the perfection of the complements of bipartite graphs, a result that was later proven in a more explicit form.
One can also connect Kőnig's line coloring theorem to a different class of perfect graphs, the line graphs of bipartite graphs. If G is a graph, the line graph L(G) has a vertex for each edge of G, and an edge for each pair of adjacent edges in G. Thus, the chromatic number of L(G) equals the chromatic index of G. If G is bipartite, the cliques in L(G) are exactly the sets of edges in G sharing a common endpoint. Now Kőnig's line coloring theorem, stating that the chromatic index equals the maximum vertex degree in any bipartite graph, can be interpreted as stating that the line graph of a bipartite graph is perfect.
Since line graphs of bipartite graphs are perfect, the complements of line graphs of bipartite graphs are also perfect. A clique in the complement of the line graph of G is just a matching in G. And a coloring in the complement of the line graph of G, when G is bipartite, is a partition of the edges of G into subsets of edges sharing a common endpoint; the endpoints shared by each of these subsets form a vertex cover for G. Therefore, Kőnig's theorem itself can also be interpreted as stating that the complements of line graphs of bipartite graphs are perfect.
Weighted variants
Kőnig's theorem can be extended to weighted graphs.
Egerváry's theorem for edge-weighted graphs
Jenő Egerváry (1931) considered graphs in which each edge e has a non-negative integer weight we. The weight vector is denoted by w. The w-weight of a matching is the sum of weights of the edges participating in the matching. A w-vertex-cover is a multiset of vertices ("multiset" means that each vertex may appear several times), in which each edge e is adjacent to at least we vertices. Egerváry's theorem says:
In any edge-weighted bipartite graph, the maximum w-weight of a matching equals the smallest number of vertices in a w-vertex-cover.
The maximum w-weight of a fractional matching is given by the LP:
Maximize w · x
Subject to: x ≥ 0E
and AG · x ≤ 1V.
And the minimum number of vertices in a fractional w-vertex-cover is given by the dual LP:
Minimize 1V · y
Subject to: y ≥ 0V
and AGT · y ≥ w.
As in the proof of Kőnig's theorem, the LP duality theorem implies that the optimal values are equal (for any graph), and the fact that the graph is bipartite implies that these programs have optimal solutions in which all values are integers.
Theorem for vertex-weighted graphs
One can consider a graph in which each vertex v has a non-negative integer weight bv. The weight vector is denoted by b. The b-weight of a vertex-cover is the sum of bv for all v in the cover. A b-matching is an assignment of a non-negative integral weight to each edge, such that the sum of weights of edges adjacent to any vertex v is at most bv. Egerváry's theorem can be extended, using a similar argument, to graphs that have both edge-weights w and vertex-weights b:
In any edge-weighted vertex-weighted bipartite graph, the maximum w-weight of a b-matching equals the minimum b-weight of vertices in a w-vertex-cover.
| Mathematics | Graph theory | null |
5468113 | https://en.wikipedia.org/wiki/Europasaurus | Europasaurus | Europasaurus (meaning 'Europe lizard') is a basal macronarian sauropod, a form of quadrupedal herbivorous dinosaur. It lived during the Late Jurassic (middle Kimmeridgian, from about 154 to 151 million years ago) of northern Germany, and has been identified as an example of insular dwarfism resulting from the isolation of a sauropod population on an island within the Lower Saxony basin.
Discovery and naming
In 1998, a single sauropod tooth was discovered by private fossil collector Holger Lüdtke in an active quarry at Langenberg Mountain, between the communities of Oker, Harlingerode and Göttingerode in Germany. The Langenberg chalk quarry had been active for more than a century; rocks are quarried using blasting and are mostly processed into fertilisers. The quarry exposes a nearly continuous, thick succession of carbonate rocks belonging to the Süntel Formation, that ranges in age from the early Oxfordian to late Kimmeridgian stages and have been deposited in a shallow sea with a water depth of less than . The layers exposed in the quarry are oriented nearly vertically and slightly overturned, which is a result of the ascent of the adjacent Harz mountains during the Lower Cretaceous. Widely known as a classical exposure among geologists, the quarry had been extensively studied, and visited by students of geology for decades. Although rich in fossils of marine invertebrates, fossils of land-living animals had been rare. The sauropod tooth was the first specimen of a sauropod dinosaur from the Jurassic of northern Germany.
After more fossil material was found, including bones, excavation of the bone-bearing layer commenced in April 1999, conducted by a local association of private fossil collectors. Although the quarry operator was cooperative, excavation was complicated by the near-vertical orientation of the layers that limited access, as well as by the ongoing quarrying. The sauropod material could not be excavated directly from the layer but had to be collected from loose blocks resulting from blasting. The exact origin of the bone material was therefore unclear, but could later be traced to a single bed (bed 83). An excavation conducted between July 20–28 of 2000 rescued ca. of bone-bearing blocks containing vertebrate remains. Fossils were prepared and stored in the Dinosaur Park Münchehagen (DFMMh), a private dinosaur open-air museum located close to Hanover. Due to the very good preservation of the bones, consolidating agents had to be applied only occasionally, and preparation could be conducted comparatively quickly as bone would separate easily from the surrounding rock. Bones of simple shape could sometimes be prepared in less than an hour, while the preparation of a sacrum required a workload of three weeks. By January 2001, 200 single vertebrate bones had already been prepared. At this point, the highest bone density was found in a block measuring 70 x 70 cm, which contained ca. 20 bones. By January 2002, preparation of an even larger block had revealed a partial sauropod skull – the first to be discovered in Europe. Before complete removal of the bones from the block, a silicone cast was made of the block to document the precise three-dimensional position of the individual bones.
Part of the Europasaurus fossil material was damaged or destroyed by an arson fire on the night of 4–5 October 2003. The fire destroyed the laboratory and exhibition hall of the Dinosaur Park Münchehagen, resulting in the loss of 106 bones, which accounted for 15% of the bones prepared at the time. Furthermore, the fire affected most of the still unprepared blocks, with firefighting water hitting the hot stone and causing additional crumbling. Destroyed specimens include DFMMh/FV 100, which included the best preserved vertebral series and the only complete pelvis.
In 2006, the new sauropod taxon was formally described as Europasaurus holgeri. The given etymology for the genus name is "reptile from Europe", and the specific name honours Holger Lüdtke, the discoverer of the first fossils. Given the comparatively small size of the bones, it was initially assumed that they stemmed from juvenile individuals. The 2006 publication, however, established that the majority of specimens were adult, and that Europasaurus was an island dwarf. The number of individual sauropod bones had increased to 650 and included variously articulated individuals; the material was found within an area of squared. From these specimens, the holotype was selected, a disarticulated but associated individual (DFMMh/FV 291). The holotype includes multiple cranial bones (premaxilla, maxilla and quadratojugal), a partial braincase, multiple mandible bones (dentary, surangular and angular), numerous teeth, cervical vertebrae, sacral vertebrae, and ribs from the neck and torso. At least 10 other individuals were referred to the same taxon based on overlapping material.
A large-scale excavation campaign commenced in the summer of 2012, with the goal of excavating Europasaurus bones not only from loose blocks but directly from the rock layer. Access to the bone-bearing layer required the removal of some 600 tons of rock using excavators and wheel loaders, and the constant pumping of water out of the base of the quarry. Excavations continued in spring and summer 2013. The campaign resulted in the discovery of new fish, turtle, and crocodile remains, as well as valuable information about the bone-bearing layer; additional Europasaurus bones, however, could not be located. By 2014, around 1300 vertebrate bones had been prepared from bed 83, the majority of which stem from Europasaurus; an estimated 3000 additional bones await preparation. A minimum number of 20 individuals was identified based on jaw bones.
Description
Europasaurus is a very small sauropod, measuring only long and weighing as an adult. This length was estimated based on a partial femur, scaled to the size of a nearly complete Camarasaurus specimen. Younger individuals are known, ranging in size from down to the youngest juvenile at .
Distinguishing characteristics
Aside from being a very small neosauropod, Europasaurus was thought by its original describers, Sander et al. (2006), to possess multiple unique morphological features distinguishing it from close relatives. The nasal process of the premaxilla was thought to curve anteriorly while projecting upwards (now known to be a preservational artifact), there is a notch on the upper surface of the centra of the cervical vertebrae, the scapula has a prominent process on the posterior surface of its body, and the astragalus (an ankle bone) is twice as wide as it is tall.
When compared to Camarasaurus, Europasaurus differs in the morphology of the postorbital, where the posterior flange is not as short, in having a short contact between the nasal and frontal bones of the skull, in the shape of its parietal (rectangular in Europasaurus), and in the unsplit neural spines of the vertebrae in front of the pelvis. Comparisons with Brachiosaurus (now named Giraffatitan) were also made: Europasaurus has a shorter snout, a contact between the quadratojugal and squamosal, and a humerus (upper forelimb bone) with flattened and aligned proximal and distal surfaces. Finally, brief comparisons were drawn with the potential brachiosaurid Lusotitan, which has a differently shaped ilium and astragalus, and with Cetiosaurus humerocristatus (since named Duriatitan), whose deltopectoral crest is less prominent and extends across less of the humerus.
Skull
Nearly all external skull bones have been preserved among Europasaurus specimens, except the prefrontals. Some additional bones are represented only by very fragmentary and uninformative fossils, such as the lacrimals. Eight premaxillae are known, with a generally rectangular snout shape as found in Camarasaurus. The anterior projection of the premaxilla identified by Sander et al. (2006) was re-identified as a preservational artifact by Marpmann et al. (2014); the actual anatomy is similar to that of Camarasaurus and, to a lesser degree, Euhelopus. The dorsal projection of the premaxilla, the one which contacts the nasal bone, begins as a postero-dorsal projection, before becoming vertical at the level of the subnarial foramen, until it reaches the nasal. This weak "step" is seen in Camarasaurus and Euhelopus, and is present more strongly in Abydosaurus, Giraffatitan and a possible skull of Brachiosaurus. These latter taxa also have a longer snout, with more distance from the first tooth to the nasal process of the premaxilla. Additionally, Europasaurus shares with basal camarasauromorphs (brachiosaurids, Camarasaurus, Euhelopus and Malawisaurus) a similarly sized orbit and nasal fenestra, whereas the nasal opening is significantly reduced in derived titanosaurs (Rapetosaurus, Tapuiasaurus and Nemegtosaurus).
A single maxilla is present in the well-preserved material of Europasaurus, DFMMh/FV 291.17. This maxilla has a long body, with two elongate processes, a nasal and a posterior process. There is only a weak lacrimal process, as in most sauropods except Rapetosaurus. The nasal process is elongate and covers the anterior and ventral rim of the antorbital fenestra. This process extends at about 120º from the horizontal tooth row. The base of the nasal process also forms the body of the lacrimal process, and at their divergence is the antorbital fenestra, similar in shape to those of Camarasaurus, Euhelopus, Abydosaurus and Giraffatitan, but proportionally about 1/2 taller. The pre-antorbital fenestra, a small opening in front of or beneath the antorbital opening, which is well developed in taxa like Diplodocus and Tapuiasaurus, is nearly absent, as in Camarasaurus and Euhelopus. There were about 12–13 total teeth in the maxilla of Europasaurus, fewer than in more basal taxa (16 teeth in Jobaria and 14–25 in Atlasaurus), but falling within the range of variation in Brachiosauridae (15 in Brachiosaurus to 10 in Abydosaurus). All of the preserved unworn teeth display up to four small denticles on their mesial edges. A few of the posterior tooth crowns are slightly twisted (~15º), but much less so than in brachiosaurids (30–45º).
Among the nasal bones of Europasaurus, several are known, but few are complete or undistorted. The nasals are overlapped posteriorly by the frontal bones, and towards the side they articulate bluntly with the prefrontals. Unlike the nasals of Giraffatitan, those of Europasaurus project horizontally forwards, forming a small portion of the skull roof over the antorbital fenestrae. Four frontals are known from Europasaurus, three from the left side and one from the right. Because of their disarticulation, it is likely that the frontals never fused during growth, unlike in Camarasaurus. The frontals form a portion of the skull roof, articulating with other bones such as the nasals, parietals, prefrontals and postorbitals, and they are longer antero-posteriorly than they are wide, a unique character among eusauropods. As in diplodocoids (Amargasaurus, Dicraeosaurus and Diplodocus), as well as Camarasaurus, the frontals are excluded from the frontoparietal fenestra (or parietal fenestra when the frontals are excluded). The frontals are also excluded from the margin of the supratemporal fenestra (a widespread character in sauropods more derived than Shunosaurus), and they have only a small, unornamented participation in the orbit. Several parietal bones are known in Europasaurus, which show a rectangular shape much wider than long. The parietals are also wide when viewed from the back of the skull, being slightly taller than the foramen magnum (spinal cord opening). The parietals contribute to about half the border of the post-temporal fenestra (an opening above the very back of the skull), with the other region enclosed by the squamosal bones and some braincase bones. The parietals also form part of the edge of the supratemporal fenestra, which is wider than long in Europasaurus, as in Giraffatitan, Camarasaurus and Spinophorosaurus. Besides the aforementioned fenestrae, the parietals also bear a "postparietal fenestra", something rarely seen outside of Dicraeosauridae. A triradiate postorbital bone is present in Europasaurus, which evolved as the fusion of the postfrontal and postorbital bones of more basal taxa. Between the anterior and ventrally projecting processes the postorbital forms the margin of the orbit, and between the posterior and ventral processes it borders the infratemporal fenestra.
Multiple jugals are known from Europasaurus, which are more similar in morphology to those of basal sauropodomorphs than of other macronarians. The jugal forms part of the border of the orbit, the infratemporal fenestra and the bottom edge of the skull, but does not reach the antorbital fenestra. The posterior process of the jugal is very fragile and narrow, showing a bone scar from the articulation with the quadratojugal. There are two prominences projecting from the back of the jugal body, which diverge at 75º and form the bottom and front edges of the infratemporal fenestra. As in Riojasaurus and Massospondylus, two non-sauropod sauropodomorphs, the jugal forms a large part of the orbit edge, from the back to the front bottom corner. This feature has been seen in embryos of titanosaurs, but not in adult individuals. The quadratojugal bone is an elongate element that has two projecting arms, one anterior and one dorsal. As in other sauropods, the anterior process is longer than the dorsal one, but in Europasaurus the two arms are more similar in length. The horizontal process is parallel to the tooth row of Europasaurus, similar to Camarasaurus but unlike Giraffatitan and Abydosaurus. There is a prominent ventral flange on the anterior arm of the bone, which is possibly a synapomorphy of Brachiosauridae, although it is also found in some Camarasaurus individuals. The two quadratojugal processes diverge at a nearly right angle (90º), although the dorsal process curves as it follows the shape of the quadrate. Squamosals found from Europasaurus show approximately the same shape in lateral view as those of Camarasaurus, that of a question mark. The squamosals articulate with many skull bones, including those of the skull roof, those of the ventral skull, and those of the braincase. Like the postorbitals, the squamosals are triradiate, with a ventral, anterior and medial process.
There are thirteen preserved elements of the palate of Europasaurus, including the quadrate, pterygoid and ectopterygoid. The quadrates articulate with the palate and braincase bones, as well as the external skull bones. They are similar in shape to those of Giraffatitan and Camarasaurus, and have well-developed articular surfaces. A single shaft is present for the majority of the quadrate's length, with a pterygoid wing along the medial side. The pterygoids are the largest of the sauropod palate bones, and have a triradiate shape, like the postorbitals. An anterior projection contacts the opposite pterygoid, while a lateral wing contacts the ectopterygoid, and a posterior wing supports the quadrate and basipterygoid (a bone that connects the palate to the braincase). The ectopterygoid is a small palate bone, which articulates the central palate bones (pterygoid and palatine) with the maxilla. Ectopterygoids are L-shaped, with an anterior process attaching to the maxilla, and a dorsal process that meets the pterygoid.
Vertebrae
The cervical vertebrae of Europasaurus are the best preserved and most extensively represented part of the vertebral column. However, the entire neck is not known, so the cervical count could lie anywhere between that of Camarasaurus (12 vertebrae) and Rapetosaurus (17 vertebrae). Additionally, the multiple cervical vertebrae come from different-aged individuals, and centrum length and internal structure are known to change throughout development. The adult cervical centra are elongated and opisthocoelous (the anterior end is ball-shaped), with a notch in the top of the rear end of the centrum. This feature was described as characteristic of Europasaurus but is also known in Euhelopus and Giraffatitan. On the side of the centra of Europasaurus there is an excavation which opens into the interior of the vertebra. Unlike in Giraffatitan and brachiosaurids, Europasaurus does not have thin ridges dividing this opening. Europasaurus shares laminae features on the upper vertebrae with basal macronarians and brachiosaurids. Differing from the anterior and middle cervicals, the posterior cervical vertebrae are less elongate and proportionally taller, as in other macronarians, with significant changes in the positions of the articular surfaces.
Front dorsal vertebrae are strongly opisthocoelous like the cervicals, and can be placed in the series based on the absence of the and low placement. The internal structures are open and like Camarasaurus, Giraffatitan and Galvesaurus, but unlike these taxa this pneumaticity does not extend into the middle and posterior dorsal vertebrae. The arrangement and presence of anterior laminae in Europasaurus is similar to other basal macronarians, but unlike more basal taxa (e.g. Mamenchisaurus, Haplocanthosaurus) and more derived taxa (e.g. Giraffatitan). The middle dorsals possess a pneumatic cavity that extends upwards into the , like in Barapasaurus, Cetiosaurus, Tehuelchesaurus, and Camarasaurus. The ventral edge of this opening is rhomboidal and well-defined. In the posterior vertebrae, the lateral pneumatic cavity has shifted higher on the centrum, a change seen in other basal macronarians. These are wide anteriorly, and narrow to become acutely angled posteriorly. The of Europasaurus stands vertically, a basal feature not seen in Brachiosaurus or more derived sauropods.
A complete sacral series is only known from a single specimen, DFMMh/FV 100, which was destroyed in a fire in 2003. All five vertebrae, the number characteristic of more basal neosauropods, are incorporated into the sacrum. The third and fourth sacrals represent the primordial sacrals, present in all dinosaur groups. The second, S2, is the ancestral sauropodomorph sacral that was added in basal sauropodomorphs, which all share three sacrals with the exception of Plateosaurus. The fifth sacral, fused behind the primordial pair, is a caudosacral, which migrated from the tail into the pelvis in taxa around Leonerasaurus. The first sacral, articulated with the ilium but not fused to the other vertebrae, represents the dorsosacral, bringing the count to the five vertebrae found in all neosauropods. The level of fusion of the dorsosacral confirms the evolutionary history of the sauropod sacral count: the primordial pair first incorporated a dorsal (total of three), then a caudal (total of four), then another dorsal to make a total of five vertebrae.
Skin
Among macronarians, fossilized skin impressions are known only from Haestasaurus, Tehuelchesaurus and Saltasaurus. Both Tehuelchesaurus and Haestasaurus may be closely related to Europasaurus, and the characteristics of all sauropod skin impressions are similar. Haestasaurus, the first dinosaur known from skin impressions, preserved integument over a portion of the arm around the elbow joint. Dermal impressions are more widespread in the material of Tehuelchesaurus, where they are known from the forelimb, scapula and torso regions. There are no bony plates or nodules that would indicate armour, but there are several types of scales. The skin types of Tehuelchesaurus are overall more similar to those found in diplodocids and Haestasaurus than to those of the titanosaur embryos of Auca Mahuevo. As the shape and articulation of the preserved tubercles in these basal macronarians are similar to those of other taxa where skin is preserved, including specimens of Brontosaurus excelsus and intermediate diplodocoids, such dermal structures were probably widespread throughout Neosauropoda.
Classification
When it was first named, Europasaurus was considered to be a taxon within Macronaria that fell within neither the family Brachiosauridae nor the clade Titanosauromorpha. This indicated that the dwarfism of the taxon was a result of its own evolution, rather than an inherited characteristic of a larger group. Three matrices were analysed with the inclusion of Europasaurus: those of Wilson (2002), Upchurch (1998), and Upchurch et al. (2004). All analyses resulted in similar phylogenies, in which Europasaurus was placed as more derived than Camarasaurus but outside a clade of Brachiosauridae and Titanosauromorpha (now named Titanosauriformes). The results of the favoured analysis of Sander et al. (2006) are shown below on the left:
During a description of the vertebrae of Europasaurus by Carballido & Sander (2013), another phylogenetic analysis was conducted (right column above). The cladistic matrix was expanded to include more sauropod taxa, such as Bellusaurus, Cedarosaurus and Tapuiasaurus. The taxon Brachiosaurus was also separated into true Brachiosaurus (B. altithorax) and Giraffatitan (B. brancai), based on Taylor (2009). Based on this newer and more expansive analysis, Europasaurus was found to be in a similar placement, as a basal camarasauromorph closer to titanosaurs than Camarasaurus. However, Euhelopus, Tehuelchesaurus, Tastavinsaurus and Galvesaurus were placed between Europasaurus and Brachiosauridae.
Placement as a brachiosaurid
In a 2012 analysis of the phylogeny of Titanosauriformes, D'Emic (2012) considered Europasaurus to belong to Brachiosauridae, instead of being basal to the earliest brachiosaurids. The phylogeny resolved the greatest number of true brachiosaurids to date, although several potential brachiosaurids were instead found to belong to Somphospondyli (Paluxysaurus, Sauroposeidon and Qiaowanlong). However, D'Emic was tentative in considering Europasaurus a confirmed brachiosaurid: while there was strong support in the phylogeny for this placement, Europasaurus, one of the few basal macronarians known from a skull, lacks multiple bones that display characteristic features of the group, such as the caudal vertebrae. The cladogram below on the left illustrates the phylogenetic results of D'Emic (2012), with Euhelopodidae and Titanosauria collapsed.
A later analysis of Titanosauriformes agreed with D'Emic (2012) on the placement of Europasaurus. It formed a polytomy with Brachiosaurus and the "French Bothriospondylus" (since named Vouivria) as the basalmost brachiosaurids. Next most derived in the clade was Lusotitan, with Giraffatitan, Abydosaurus, Cedarosaurus and Venenosaurus forming a more derived clade of brachiosaurids. The "twisted" teeth of Europasaurus were found to be one of the unique features of Brachiosauridae, which could allow confident referral of isolated sauropod teeth to the clade.
A further phylogenetic analysis of Brachiosauridae was performed, based on that of D'Emic (2012). This phylogeny, conducted by D'Emic et al. (2016), resolved a very similar placement of Europasaurus within Brachiosauridae, although Sonorasaurus was placed in a clade with Giraffatitan, and Lusotitan was placed in a polytomy with Abydosaurus and Cedarosaurus. The remaining tree was the same as in D'Emic (2012), although Brachiosaurus was collapsed into a polytomy with more derived brachiosaurids. Another phylogeny, that of Mannion et al. (2017), found results similar to D'Emic (2012) and D'Emic et al. (2016). Europasaurus was the basalmost brachiosaurid, with the "French Bothriospondylus", or Vouivria, as the next most basal brachiosaurid. Brachiosaurus was placed outside a polytomy of all other brachiosaurids: Giraffatitan, Abydosaurus, Sonorasaurus, Cedarosaurus and Venenosaurus. A 2017 phylogeny, that of Royo-Torres et al. (2017), resolved more complex relationships within Brachiosauridae. Besides Europasaurus as the basalmost brachiosaurid, there were two subgroups within the clade, one containing Giraffatitan, Sonorasaurus and Lusotitan, and another including almost all other brachiosaurids, as well as Tastavinsaurus. This second clade would be termed Laurasiformes under that group's definition. Brachiosaurus was in a polytomy with the two subclades of Brachiosauridae. The phylogeny of Royo-Torres et al. (2017) can be seen above, in the right column.
Paleobiology
Growth
Europasaurus was identified as a distinct dwarf species, rather than a juvenile of an existing taxon such as Camarasaurus, through histological analysis of multiple specimens. The youngest specimen (DFMMh/FV 009) was shown by this analysis to lack signs of ageing such as growth marks or laminar bone tissue, and is also the smallest specimen at in length. Such bone tissue is an indicator of rapid growth, so the specimen is probably a young juvenile. A larger specimen (DFMMh/FV 291.9), at , shows large amounts of laminar tissue with no growth marks present, so it is likely a juvenile as well. The next specimen in the size series (DFMMh/FV 001) shows growth marks (specifically annuli), and at a length of is possibly a subadult. Still larger, DFMMh/FV 495 displays mature osteons as well as annuli, and is . The second largest analysed specimen (DFMMh/FV 153) also shows growth marks, but they are more frequent. This specimen is . A single partial femur represents the largest known Europasaurus individual, at a body length of . Unlike all other specimens, this one (DFMMh/FV 415) shows the presence of lines of arrested growth, indicating that it died after reaching full body size. The internal bone is also partially lamellar, which shows it had stopped growing recently.
These combined growth features show that Europasaurus attained its small size through a greatly reduced growth rate, gaining size more slowly than larger taxa such as Camarasaurus. This slowed growth rate is the opposite of the general trend in sauropods and theropods, which reached greater sizes through increased growth rates. Some of the close relatives of Europasaurus are among the largest dinosaurs known, including Brachiosaurus and Sauroposeidon. Marpmann et al. (2014) proposed that the small size and reduced growth rate of Europasaurus were an effect of paedomorphosis, in which adults of a taxon retain juvenile characteristics, such as size.
Examinations of the inner ears of infant Europasaurus suggest that they were precocial, and that they would nonetheless have relied on the protection of adults in a herd to some degree, something not seen in larger sauropods owing to the massive size difference between parent and offspring. The structure and considerable length of the inner ear in this genus also suggest that Europasaurus had a good sense of hearing. Based on these studies, intraspecific communication was apparently important to this sauropod, suggesting clear gregarious behavior.
Dwarfism
It has been suggested that an ancestor of Europasaurus would have decreased in body size quickly after becoming isolated on an island that existed at the time, as the largest of the islands in the region around northern Germany was smaller than squared, which may not have been able to support a population of large sauropods. Alternatively, a macronarian may have shrunk gradually while still on a larger landmass, until reaching the size of Europasaurus. Previous studies on insular (island) dwarfism are largely restricted to the Maastrichtian of Haţeg Island in Romania, home to the dwarf titanosaur Magyarosaurus and the dwarf hadrosaur Telmatosaurus. Telmatosaurus is known from a small adult, and although they are very small, Magyarosaurus specimens of small size are known to represent adult to old individuals. Magyarosaurus dacus adults were of a similar body size to Europasaurus, but the largest of the latter had longer femora than the largest of the former, while Magyarosaurus hungaricus was significantly larger than either taxon. The dwarfism of Europasaurus represents the only significant rapid decrease in body mass in derived Sauropodomorpha, the general trend in other groups being growth in overall size.
Palaeoecology
The Langenberg locality in Germany, spanning the early Oxfordian to the late Kimmeridgian, displays the variety of plant and animal life of an island ecosystem of the Late Jurassic. During the Kimmeridgian the locality would have been marine, lying between the Rhenish, Bohemian, and London-Brabant Massifs. This does not mean that the taxa present were marine, as the animals and plants may have been deposited allochthonously (albeit transported only a short distance) from the surrounding islands. The sediments show that there was an occasional influx of fresh or brackish water, and the preserved fossils reflect this. There are large numbers of marine bivalve fossils, as well as echinoderms and microfossils, present in the limestone of the quarry, although many of the animals and plants were terrestrial.
Many marine taxa are preserved at Langenberg, although they would not often have co-existed with Europasaurus. There are at least three turtle genera, including Plesiochelys, Thalassemys and up to two unnamed taxa. Actinopterygian fish are abundant, being represented by Lepidotes, Macromesodon, Proscinetes, Coelodus, Macrosemius, Notagogus, Histionotus, Ionoscopus, Callopterus, Caturus, Sauropsis, Belonostomus, and Thrissops. Also present are at least five distinct morphologies of hybodont sharks, as well as the neoselachians Palaeoscyllium, Synechodus and Asterodermus. Two marine crocodyliforms are known from Langenberg, Machimosaurus and Steneosaurus, which likely fed on turtles and fish, and the amphibious crocodyliform Goniopholis has also been found.
Among the terrestrial fossils of the site, conifer cones and twigs can be identified as the araucarian Brachyphyllum. Many taxa are deposited in the locality, including a large accumulation of Europasaurus bones and individuals. At least 450 bones from Europasaurus were recovered from the Langenberg Quarry, with about one-third bearing tooth marks. The sizes and shapes of these tooth marks match well with the teeth of fish, crocodyliforms or other scavengers, but none can be confirmed as theropod marks. The high number of individuals present suggests that a herd of Europasaurus drowned while crossing a tidal zone. While the dominant large-bodied animal present is Europasaurus, there is also material from a diplodocid sauropod, a stegosaurian, and multiple theropods. Three cervicals of the diplodocid are preserved, and from their size it is possible that this taxon was also a dwarf. The stegosaurian and the various theropods are represented only by teeth, with the exception of a few bones possibly from a taxon in Ceratosauridae. Isolated teeth show that there were at least four different types of theropods present at the locality, including the megalosaurid Torvosaurus sp. as well as an additional megalosaurid and indeterminate members of the Allosauridae and Ceratosauria; the oldest teeth known from Velociraptorinae are also present.
Besides the dinosaurs, many small-bodied terrestrial vertebrates are also preserved in the Langenberg quarry. These include a well-preserved three-dimensional pterosaur skeleton from Dsungaripteridae, isolated remains from Ornithocheiroidea and Ctenochasmatidae, a paramacellodid lizard, and partial skeletons and skulls from a relative of Theriosuchus now named as the genus Knoetschkesuchus. Teeth from dryolestid mammals are also preserved, as well as a docodont, a taxon in Eobaataridae, and a multituberculate with similarities to Proalbionbaatar (now named Teutonodon).
Extinction
Dinosaur footprints preserved at the Langenberg Quarry suggest a possible reason for the extinction of Europasaurus, and potentially of other insular dwarfs present on the islands of the region. The footprints are located above the deposit of Europasaurus individuals, which shows that at least 35,000 years after that deposit there was a drop in sea level that allowed for a faunal turnover. The theropods inhabiting the island, which coexisted with Europasaurus, would have been about , but the theropods that arrived over the land bridge left footprints up to , which indicates a body size between if reconstructed as an allosaurian. The describers of these tracks (Jens Lallensack and colleagues) suggested that these theropod taxa likely drove the specialized dwarf fauna to extinction, and that the bed from which the footprints originate (Langenberg bed 92) is probably the youngest in which Europasaurus is present.
| Biology and health sciences | Sauropods | Animals |
15939419 | https://en.wikipedia.org/wiki/Black%20Iberian%20pig | Black Iberian pig | The Iberian pig, also known in Portugal as the Alentejo Pig, is a traditional breed of the domestic pig (Sus scrofa domesticus) that is native to the Iberian Peninsula. The Iberian pig, whose origins can probably be traced back to the Neolithic, when animal domestication started, is currently found in herds clustered in Spain and the central and southern part of Portugal.
The most commonly accepted theory is that the pigs were first brought to the Iberian Peninsula by the Phoenicians from the Eastern Mediterranean coast (present-day Lebanon), where they interbred with wild boars. This cross gave rise to the ancestors of today's Iberian pigs. The production of the Iberian pig is deeply rooted in the Mediterranean ecosystem. It is a rare example in world swine production of the pig contributing so decisively to the preservation of its ecosystem. The Iberian breed is currently one of the few examples of a domesticated breed which has adapted to a pastoral setting where the land is particularly rich in natural resources, in this case acorns from the holm oak, gall oak and cork oak.
The numbers of the Iberian breed have been drastically reduced since 1960 due to several factors such as the outbreak of African swine fever and the lowered value of animal fats. In the past few years, however, the production of pigs of the Iberian type has increased to satisfy a renewed demand for top-quality meat and cured products. At the same time, breed specialisation has led to the disappearance of some ancestral varieties.
This traditional breed exhibits a good appetite and propensity to obesity, including a great capacity to accumulate intramuscular and epidermal fat. The high intramuscular fat is what produces the typical marbling; this, together with traditional feeding based on acorns, is what makes its ham taste so special. Iberian pigs are interesting from a human biomedical perspective because they present high feed intake and propensity to obesity, compatible with high values of serum leptin.
The Iberian pig can be either red or dark in colour, if dark ranging from black to grey, with little or no hair and a lean body, thus giving rise to the familiar name pata negra, or "black hoof". In traditional management, animals range freely in sparse oak forest (dehesa in Spain, montado in Portugal); they are constantly moving around and therefore burn more calories than confined pigs. This, in turn, produces the fine bones typical of this kind of jamón ibérico.
At least a hectare of healthy dehesa is needed to raise a single pig, and since the trees may be several hundred years old, the prospects for reforesting lost dehesa are slim at best. True dehesa is a richly diverse habitat with four different types of oak that are crucial in the production of prime-quality ham. The bulk of the acorn harvest comes from the holm oak (Quercus rotundifolia) from November to February, but the season would be too short without the earlier harvests of Pyrenean oak (Quercus pyrenaica) and Portuguese or gall oak (Quercus lusitanica), and the late cork oak (Quercus suber) season, which between them extend the acorn-production period from September almost to April.
| Biology and health sciences | Pigs | Animals |
13261606 | https://en.wikipedia.org/wiki/Red%20banana | Red banana | Red bananas are a group of varieties of bananas with reddish-purple skin. Some are smaller and plumper than the common Cavendish banana, others much larger. Ripe, raw red bananas have a flesh that is creamy to light pink. They are also softer and sweeter than the yellow Cavendish varieties, some with a slight tangy raspberry flavor and others with an earthy one. Many red bananas are exported by producers in East Africa, Asia, South America, and the United Arab Emirates. They are a favorite in Central America, but are sold throughout the world.
Description
Red bananas should have a deep red or maroon rind when ripe and are best eaten when unbruised and slightly soft. This variety contains more beta-carotene and vitamin C than yellow bananas. It also contains potassium and iron. The redder the fruit, the more carotene and the higher the vitamin C level. As with yellow bananas, red bananas will ripen in a few days at room temperature and are best stored outside of refrigeration.
Compared with the most common banana, the Cavendish, red bananas tend to be smaller and have a slightly thicker skin and a sweeter taste, but also a longer shelf life than yellow bananas.
Nomenclature
It is known in English as Red dacca (Australia), Red banana, 'Red' banana (US), Claret banana, Cavendish banana "Cuban Red", Jamaican red banana, and Red Cavendish banana.
Taxonomy
The red banana is a triploid cultivar of the wild banana Musa acuminata, belonging to the AAA group.
Its official designation is Musa acuminata (AAA Group) 'Red Dacca'.
Synonyms include:
Musa acuminata Colla (AAA Group) cv. 'Red'
Musa sapientum L. f. rubra Bail.
Musa sapientum L. var. rubra (Firm.) Baker
Musa rubra Wall. ex Kurz.
Musa × paradisiaca L. ssp. sapientum (L.) Kuntze var. rubra
Musa acuminata Colla (AAA Group) cv. 'Cuban Red'
Musa acuminata Colla (Cavendish Group) cv. 'Cuban Red'
Musa acuminata Colla (AAA Group) cv. 'Red Jamaican'
Musa acuminata Colla (AAA Group) cv. 'Jamaican Red'
Musa acuminata Colla (AAA Group) cv. 'Spanish Red'.
History
The first bananas to appear on the market in Toronto (in the 1870s and 1880s) were red bananas. Red bananas are available year-round at specialty markets and larger supermarkets in the United States.
Uses
Culinary
Red bananas are eaten in the same way as yellow bananas, by peeling the fruit before eating. They are frequently eaten raw, whole, or chopped, and added to desserts and fruit salads, but can also be baked, fried, and toasted. Red bananas are also commonly sold dried in stores.
The red banana has more beta-carotene and vitamin C than the yellow banana varieties. All bananas contain natural sources of three sugars: sucrose, fructose, and glucose.
Cultivation
Pests and diseases
Panama disease
| Biology and health sciences | Tropical and tropical-like fruit | Plants |
8632926 | https://en.wikipedia.org/wiki/Security%20bug | Security bug | A security bug or security defect is a software bug that can be exploited to gain unauthorized access or privileges on a computer system. Security bugs introduce security vulnerabilities by compromising one or more of:
Authentication of users and other entities
Authorization of access rights and privileges
Data confidentiality
Data integrity
Security bugs do not need to be identified or exploited to qualify as such, and they are assumed to be much more common than known vulnerabilities in almost any system.
Causes
Security bugs, like all other software bugs, stem from root causes that can generally be traced to either absent or inadequate:
Software developer training
Use case analysis
Software engineering methodology
Quality assurance testing
and other best practices
Taxonomy
Security bugs generally fall into a fairly small number of broad categories that include:
Memory safety (e.g. buffer overflow and dangling pointer bugs)
Race condition
Secure input and output handling
Faulty use of an API
Improper use case handling
Improper exception handling
Resource leaks, often but not always due to improper exception handling
Preprocessing input strings before they are checked for being acceptable
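As a hedged illustration of the last category above, the following Python sketch shows how sanitising a string in a single pass before opening a file, instead of checking the fully resolved path, leaves a bypassable gap; the directory name, function names and file layout are purely illustrative, not taken from any particular system.

```python
import os

BASE = "/srv/app/uploads"  # hypothetical upload directory

def insecure_read(name: str) -> str:
    # Naive, single-pass preprocessing: "....//....//....//etc/passwd"
    # collapses to "../../../etc/passwd" after the replace and escapes BASE.
    cleaned = name.replace("../", "")
    path = os.path.join(BASE, cleaned)
    with open(path) as fh:
        return fh.read()

def safer_read(name: str) -> str:
    # Resolve the path first, then check it against the allowed directory.
    path = os.path.realpath(os.path.join(BASE, name))
    if not path.startswith(BASE + os.sep):
        raise ValueError("path escapes the upload directory")
    with open(path) as fh:
        return fh.read()
```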
Mitigation
See software security assurance.
| Technology | Computer security | null |
8634017 | https://en.wikipedia.org/wiki/Ceroxylon%20quindiuense | Ceroxylon quindiuense | Ceroxylon quindiuense, often called Quindío wax palm, is a palm native to the humid montane forests of the Andes in Colombia and Peru.
Description
This palm species can grow to a height of —or rarely, even as high as . It is the tallest recorded monocot in the world. The trunk is cylindrical, smooth, light colored, covered with wax; leaf scars forming dark rings around the trunk. The leaves are dark green and grayish, long, with a petiole up to . Fruits are globose and orange-red when ripe, in diameter.
Taxonomy
Ceroxylon quindiuense was described by Gustav Karl Wilhelm Hermann Karsten and published in Bonplandia (Hannover) 8: 70. (1860).
Etymology:
Ceroxylon: generic name composed of the Greek words kerós = "wax" and xylon = "wood", in reference to the thick white wax found on the trunks.
quindiuense: geographical epithet alluding to its location in Quindío.
Synonymy:
Klopstockia quindiuensis H.Karst
Ceroxylon floccosum Burret
Ecology
It grows in large and dense populations along the central and eastern Andes of Colombia (rarely in the western Colombian Andes), with a disjunct distribution in the Andes of northern Peru. The elevational range of this species is between above sea level. It achieves a minimum reproductive age at 80 years. Wax palms provide habitats for many unique life forms, including endangered species such as the yellow-eared parrot (Ognorhynchus icterotis).
Vernacular names
Palma de cera, palma de ramo (both names in Colombia).
Conservation
Populations of Ceroxylon quindiuense are threatened by habitat disturbance, overharvesting and disease. The fruit was used as feed for cattle and pigs. The leaves were extensively used in the Catholic celebrations of Palm Sunday; such leaves were cut from young individuals, which were often fatally damaged in the process. That activity has been reduced severely in recent years owing to law enforcement and widespread campaigns. Felling of Ceroxylon quindiuense palms to obtain wax from the trunk is also an activity that still goes on in Colombia and Peru. The palm is recognized as the national tree of Colombia, and since the implementation of Law 61 of 1985, it has been a legally protected species in that country.
Cultivation and uses
The wax of the trunk was used to make candles, especially in the 19th century. The outer part of the stem of the palm has been used locally for building houses, and was used to build water supply systems for impoverished farmers. It is cultivated as an ornamental plant in Colombia and California.
Gallery
| Biology and health sciences | Arecales (inc. Palms) | Plants |
10780895 | https://en.wikipedia.org/wiki/Celestial%20cartography | Celestial cartography | Celestial cartography, uranography,
astrography or star cartography is the aspect of astronomy and branch of cartography concerned with mapping stars, galaxies, and other astronomical objects on the celestial sphere. Measuring the position and light of charted objects requires a variety of instruments and techniques. These techniques have developed from angle measurements with quadrants and the unaided eye, through sextants combined with lenses for light magnification, up to current methods which include computer-automated space telescopes. Uranographers have historically produced planetary position tables, star tables, and star maps for use by both amateur and professional astronomers. More recently, computerized star maps have been compiled, and automated positioning of telescopes uses databases of stars and of other astronomical objects.
Etymology
The word "uranography" derived from the Greek "ουρανογραφια" (Koine Greek ουρανος "sky, heaven" + γραφειν "to write") through the Latin "uranographia". In Renaissance times, Uranographia was used as the book title of various celestial atlases. During the 19th century, "uranography" was defined as the "description of the heavens". Elijah H. Burritt re-defined it as the "geography of the heavens". The German word for uranography is "Uranographie", the French is "uranographie" and the Italian is "uranografia".
Astrometry
Astrometry, the science of spherical astronomy, is concerned with precise measurement of the locations of celestial bodies on the celestial sphere and of their kinematics relative to a reference frame defined on it. In principle, astrometry can apply such measurements to any celestial body, from planets and stars to black holes and galaxies.
Throughout human history, astrometry has played a significant role in shaping our understanding of the structure of the visible sky and the locations of bodies within it, making it a fundamental tool of celestial cartography.
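As a small, hedged illustration of such positional work, the sketch below uses the astropy library (assumed to be available) to represent two positions on the celestial sphere and measure their angular separation; the coordinate values are illustrative and not taken from any real catalogue.

```python
from astropy.coordinates import SkyCoord
import astropy.units as u

# Two illustrative positions in the ICRS frame (right ascension, declination).
star_a = SkyCoord(ra=10.684 * u.deg, dec=41.269 * u.deg, frame="icrs")
star_b = SkyCoord(ra=11.0 * u.deg, dec=41.0 * u.deg, frame="icrs")

print(star_a.to_string("hmsdms"))   # sexagesimal form, as printed in star tables
print(star_a.separation(star_b))    # on-sky angular distance between the two
```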
Star catalogues
The determining factual source for drawing star charts is naturally a star table. This becomes apparent when comparing the imaginative "star maps" of the Poeticon Astronomicon – illustrations accompanying a narrative text from antiquity – with the star maps of Johann Bayer, which were based on precise star-position measurements from the Rudolphine Tables of Tycho Brahe.
Important historical star tables
c. AD 150, Almagest – contains the last known star table from antiquity, prepared by Ptolemy, 1,028 stars.
c.964, Book of the Fixed Stars, Arabic version of the Almagest by al-Sufi.
1627, Rudolphine Tables – contains the first Western Enlightenment-era star table, based on measurements of Tycho Brahe, 1,005 stars.
1690, Prodromus Astronomiae – by Johannes Hevelius for his Firmamentum Sobiescanum, 1,564 stars.
1729, Britannic Catalogue – by John Flamsteed for his Atlas Coelestis, position of more than 3,000 stars by accuracy of 10".
1903, Bonner Durchmusterung – by Friedrich Wilhelm Argelander and collaborators, circa 460,000 stars.
Star atlases
Naked-eye
15th century BC – The ceiling of the tomb TT71 for the Egyptian architect and minister Senenmut, who served Queen Hatshepsut, is adorned with a large and extensive star chart.
1 CE (?) – Poeticon astronomicon, allegedly by Gaius Julius Hyginus
1092 – Xin Yi Xiang Fa Yao (新儀 象法要), by Su Song, a horological treatise which had the earliest existent star maps in printed form. Su Song's star maps also featured the corrected position of the pole star which had been deciphered due to the efforts of astronomical observations by Su's peer, the polymath scientist Shen Kuo.
1515 – First European printed star charts published in Nuremberg, Germany, engraved by Albrecht Dürer.
1603 – Uranometria, by Johann Bayer, the first western modern star map based on Tycho Brahe's and Johannes Kepler's Tabulae Rudolphinae
1627 – Julius Schiller published the star atlas Coelum Stellatum Christianum, which replaced pagan constellations with biblical and early Christian figures.
1660 – Jan Janssonius' 11th volume of Atlas Major (not to be confused with the similarly named and scoped Atlas Maior) featured the Harmonia Macrocosmica by Andreas Cellarius
1693 – Firmamentum Sobiescanum sive Uranometria, by Johannes Hevelius, a star map updated with many new star positions based on Hevelius's Prodromus Astronomiae (1690) – 1564 stars.
Telescopic
1729 Atlas Coelestis by John Flamsteed
1801 Uranographia by Johann Elert Bode
1843 Uranometria Nova by Friedrich Wilhelm Argelander
Photographic
1914 Franklin-Adams Charts, by John Franklin-Adams, a very early photographic atlas.
The Falkau Atlas (Hans Vehrenberg). Stars to magnitude 13.
Atlas Stellarum (Hans Vehrenberg). Stars to magnitude 14.
True Visual Magnitude Photographic Star Atlas (Christos Papadopoulos). Stars to magnitude 13.5.
The Cambridge Photographic Star Atlas, Axel Mellinger and Ronald Stoyan, 2011. Stars to magnitude 14, natural color, 1°/cm.
Modern
Bright Star Atlas – Wil Tirion (stars to magnitude 6.5)
Cambridge Star Atlas – Wil Tirion (Stars to magnitude 6.5)
Norton's Star Atlas and Reference Handbook – Ed. Ian Ridpath (stars to magnitude 6.5)
Stars & Planets Guide – Ian Ridpath and Wil Tirion (stars to magnitude 6.0)
Cambridge Double Star Atlas – James Mullaney and Wil Tirion (stars to magnitude 7.5)
Cambridge Atlas of Herschel Objects – James Mullaney and Wil Tirion (stars to magnitude 7.5)
Pocket Sky Atlas – Roger Sinnott (stars to magnitude 7.5)
Deep Sky Reiseatlas – Michael Feiler, Philip Noack (Telrad Finder Charts – stars to magnitude 7.5)
Atlas Coeli Skalnate Pleso (Atlas of the Heavens) 1950.0 – Antonín Bečvář (stars to magnitude 7.75 and about 12,000 clusters, galaxies and nebulae)
SkyAtlas 2000.0, second edition – Wil Tirion & Roger Sinnott (stars to magnitude 8.5)
1987, Uranometria 2000.0 Deep Sky Atlas – Wil Tirion, Barry Rappaport, Will Remaklus (stars to magnitude 9.7; 11.5 in selected close-ups)
Herald-Bobroff AstroAtlas – David Herald & Peter Bobroff (stars to magnitude 9 in main charts, 14 in selected sections)
Millennium Star Atlas – Roger Sinnott, Michael Perryman (stars to magnitude 11)
Field Guide to the Stars and Planets – Jay M. Pasachoff, Wil Tirion charts (stars to magnitude 7.5)
SkyGX (still in preparation) – Christopher Watson (stars to magnitude 12)
The Great Atlas of the Sky – Piotr Brych (2,400,000 stars to magnitude 12, galaxies to magnitude 18).
Interstellarum Deep Sky Atlas (2014) – Ronald Stoyan and Stephan Schurig (stars to magnitude 9.5)
Computerized
100,000 Stars
Cartes du Ciel
Celestia
Stars and Planets for Android
Stars and Planets for iOS
CyberSky
GoSkyWatch Planetarium
Google Sky
KStars
Stellarium
SKY-MAP.ORG
SkyMap Online
WorldWide Telescope
XEphem, for Unix-like systems
Stellarmap.com – online map of the stars
Star Walk and Kepler Explorer OpenLab: 2 celestial cartography apps for smartphones
SpaceEngine
Free and printable from files
The TriAtlas Project
Toshimi Taki Star Atlases
DeepSky Hunter Star Atlas
Andrew Johnson mag 7
| Physical sciences | Celestial sphere: General | Astronomy |
10780959 | https://en.wikipedia.org/wiki/Wattieza | Wattieza | Wattieza was a genus of prehistoric trees that existed in the mid-Devonian that belong to the cladoxylopsids, close relatives of the modern ferns and horsetails. The 2005 discovery (publicly revealed in 2007) in Schoharie County, New York, of fossils from the Middle Devonian about 385 million years ago united the crown of Wattieza to a root and trunk known since 1870. The fossilized grove of "Gilboa stumps" discovered at Gilboa, New York, were described as Eospermatopteris, though the complete plant remained unknown. These fossils have been described as the earliest known trees, standing 8 m (26 ft) or more tall, resembling the unrelated modern tree fern.
Wattieza had fronds rather than leaves, and they reproduced with spores.
Belgian paleobotanist François Stockmans described the species Wattieza givetiana in 1968 from fossil fronds collected from Middle Devonian strata in the London-Brabant Massif in Belgium.
English geologist and palaeobotanist Chris Berry described Wattieza casasii in Review of Palaeobotany and Palynology No. 112 in 2000, based on fossil branches (13 slabs) and numerous other fragments (Berry, 2000) collected from middle-Givetian strata of the lower member of the Campo Chico Formation (Casas et al., 2022). The lithology of the lower member consists of dark grey to green mudstones and shales, interbedded with medium- to coarse-grained sandstones close to the base of the Campo Chico Formation, in outcrops along the road to the Río Socuy (Casas et al., 2022, p. 24), close to the Caño Colorado river, Perijá Range, Zulia, Venezuela (Casas et al., 2022). The fossil material of Wattieza casasii is held at the National Museum Cardiff, Cardiff, Wales, and in the palaeontological section of the Museo de Biología at the University of Zulia, Maracaibo, Venezuela (Berry, 2000, p. 127). The name Wattieza casasii was assigned in honour of Jhonny Casas, one of the discoverers of the original material (Berry, 2000, p. 144).
| Biology and health sciences | Pteridophytes | Plants |
10791959 | https://en.wikipedia.org/wiki/E%E2%80%93Z%20notation | E–Z notation | E–Z configuration, or the E–Z convention, is the IUPAC preferred method of describing the absolute stereochemistry of double bonds in organic chemistry. It is an extension of cis–trans isomer notation (which only describes relative stereochemistry) that can be used to describe double bonds having two, three or four substituents. E and Z notation are used only when neither carbon of the double bond carries two identical substituents.
Following the Cahn–Ingold–Prelog priority rules (CIP rules), each substituent on a double bond is assigned a priority, and the positions of the higher-priority substituent on each carbon are then compared to each other. If the two groups of higher priority are on opposite sides of the double bond (trans to each other), the bond is assigned the configuration E (from entgegen, the German word for "opposite"). If the two groups of higher priority are on the same side of the double bond (cis to each other), the bond is assigned the configuration Z (from zusammen, the German word for "together").
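A hedged sketch of how this assignment can be automated is shown below, using the RDKit cheminformatics toolkit (assumed to be available). The SMILES strings encode trans- and cis-1,2-dichloroethene; note that RDKit's STEREOE/STEREOZ flags are defined relative to the reported stereo atoms, which for this simple molecule coincide with the CIP E and Z descriptors.

```python
from rdkit import Chem

# trans-1,2-dichloroethene (E) and cis-1,2-dichloroethene (Z)
for smiles in ["Cl/C=C/Cl", "Cl/C=C\\Cl"]:
    mol = Chem.MolFromSmiles(smiles)
    Chem.AssignStereochemistry(mol, cleanIt=True, force=True)
    for bond in mol.GetBonds():
        if bond.GetStereo() != Chem.BondStereo.STEREONONE:
            # Prints STEREOE for the first SMILES and STEREOZ for the second.
            print(smiles, bond.GetStereo())
```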
The letters E and Z are conventionally printed in italic type, within parentheses, and separated from the rest of the name with a hyphen. They are always printed as full capitals (not in lowercase or small capitals), but do not constitute the first letter of the name for English capitalization rules (as in the example above).
Another example: The CIP rules assign a higher priority to bromine than to chlorine, and a higher priority to chlorine than to hydrogen, hence the following (possibly counterintuitive) nomenclature.
For organic molecules with multiple double bonds, it is sometimes necessary to indicate the alkene location for each E or Z symbol. For example, the chemical name of alitretinoin is (2E,4E,6Z,8E)-3,7-dimethyl-9-(2,6,6-trimethyl-1-cyclohexenyl)nona-2,4,6,8-tetraenoic acid, indicating that the alkenes starting at positions 2, 4, and 8 are E while the one starting at position 6 is Z.
Undefined ene stereochemistry
The prefix 'E/Z-' can be used to indicate uncertainty in the E or Z isomers for an ene bond. For graphical representations, wavy single bonds are the standard way to represent unknown or unspecified stereochemistry or a mixture of isomers (as with tetrahedral stereocenters). A crossed double-bond has been used sometimes; it is no longer considered an acceptable style for general use by IUPAC but may still be required by computer software.
| Physical sciences | Stereochemistry | Chemistry |
2160676 | https://en.wikipedia.org/wiki/Coastal%20geography | Coastal geography | Coastal geography is the study of the constantly changing region between the ocean and the land, incorporating both the physical geography (i.e. coastal geomorphology, climatology and oceanography) and the human geography (sociology and history) of the coast. It includes understanding coastal weathering processes, particularly wave action, sediment movement and weather, and the ways in which humans interact with the coast.
Wave action and longshore drift
The waves of different strengths that constantly hit against the shoreline are the primary movers and shapers of the coastline. Despite the simplicity of this process, the differences between waves and the rocks they hit result in hugely varying shapes.
The effect that waves have depends on their strength. Strong waves, also called destructive waves, occur on high-energy beaches and are typical of winter. They reduce the quantity of sediment present on the beach by carrying it out to bars under the sea. Constructive, weak waves are typical of low-energy beaches and occur most during summer. They do the opposite to destructive waves and increase the size of the beach by piling sediment up onto the berm.
One of the most important transport mechanisms results from wave refraction. Since waves rarely break onto a shore at right angles, the upward movement of water onto the beach (swash) occurs at an oblique angle. However, the return of water (backwash) is at right angles to the beach, resulting in a net lateral movement of beach material. This movement is known as beach drift. The endless cycle of swash and backwash and the resulting beach drift can be observed on all beaches, though its extent differs between coasts.
Probably the most important effect is longshore drift (LSD), also known as littoral drift, the process by which sediment is continuously moved along beaches by wave action. LSD occurs because waves hit the shore at an angle, pick up sediment (sand) on the shore and carry it obliquely up the beach (the swash). Due to gravity, the water then falls back perpendicular to the beach, dropping its sediment as it loses energy (the backwash). The sediment is then picked up by the next wave and pushed slightly further along the beach, resulting in a continual movement of sediment in one direction. This is the reason why long stretches of coast are covered in sediment, not just the areas around river mouths, which are the main sources of beach sediment. LSD relies on a constant supply of sediment from rivers, and if the sediment supply is cut off, or if sediment falls into a submarine canyon at any point along a beach, this can lead to bare beaches further along the shore.
LSD helps create many landforms, including barrier islands, bay beaches and spits. In general, LSD serves to straighten the coast: the creation of barriers cuts bays off from the sea, sediment tends to build up in bays because the waves there are weaker (due to wave refraction), and sediment is carried away from the exposed headlands. The lack of sediment on headlands removes their protection from the waves and makes them more vulnerable to weathering, while the accumulation of sediment in bays (where longshore drift is unable to remove it) protects the bays from further erosion and makes them pleasant recreational beaches.
Atmospheric processes
Onshore winds blowing "up" the beach, pick up sand and move it up the beach to form sand dunes.
Rain hits the shore and erodes rocks, and carries weathered material to the shoreline to form beaches.
Warm weather can encourage biological processes to occur more rapidly. In tropical areas some plants and animals protect stones from weathering, while other plants and animals actually eat away at the rocks.
Temperatures that vary from below to above freezing point result in frost weathering, whereas weather more than a few degrees below freezing point creates sea ice.
Biological processes
In tropical regions in particular, plants and animals not only affect the weathering of rocks but are a source of sediment themselves. The shells and skeletons of many organisms are of calcium carbonate and when this is broken down it forms sediment, limestone and clay.
Physical processes
The main physical weathering process on beaches is salt-crystal growth. Wind carries salt spray onto rocks, where it is absorbed into small pores and cracks within the rock. There the water evaporates and the salt crystallises, creating pressure that often breaks down the rock. On some beaches calcium carbonate is able to bind other sediments together to form beachrock, and in warmer areas dunerock. Wind erosion also occurs: dust and sand carried in the air slowly abrade rock, and a similar process takes place at the shore, where salt and sand are washed up onto the rocks.
Sea level changes (eustatic change)
The sea level on Earth regularly rises and falls due to climatic changes. During cold periods more of the Earth's water is stored as ice in glaciers, while during warm periods it is released and sea levels rise to cover more land. Sea levels are currently quite high, whereas just 18,000 years ago, during the Pleistocene ice age, they were quite low. Global warming may result in further rises in the future, which presents a risk to coastal cities, as most would be flooded by even small rises. As sea levels rise, fjords and rias form. Fjords are flooded glacial valleys and rias are flooded river valleys. Fjords typically have steep rocky sides, while rias have the dendritic drainage patterns typical of river valleys. As tectonic plates move about the Earth they can rise and fall due to changing pressures and the presence of glaciers. If a stretch of coast is moving upwards relative to sea level, this is known as isostatic change, and raised beaches can be formed.
Land level changes (isostatic change)
This is seen in the U.K.: above a line running from the Wash to the Severn estuary, the land was covered in ice sheets during the last ice age. The weight of the ice caused northeast Scotland to sink, displacing the southeast and forcing it to rise. As the ice sheets receded the reverse process occurred, as the land was released from the weight. At current estimates the southeast is sinking at a rate of about 2 mm per year, while northeast Scotland rises by the same amount.
Coastal landforms
Spits
If the coast suddenly changes direction, especially around an estuary, spits are likely to form. Longshore drift pushes sediment along the beach, but when the coastline turns, the drift does not always turn with it, especially near an estuary where the outward flow from a river may push sediment away from the coast. The area may also be shielded from wave action, preventing much longshore drift. On the side of the headland receiving weaker waves, shingle and other large sediments build up under the water where waves are not strong enough to move them along. This provides a good place for smaller sediments to build up to sea level. The sediment, after passing the headland, accumulates on the sheltered side rather than continuing down the beach, protected both by the headland and the shingle.
Slowly, over time, sediment builds up in this area, extending the spit outwards and forming a barrier of sand. Occasionally the wind will change and blow from the opposite direction; during such periods the sediment is pushed the other way, so the spit starts to grow backwards and forms a 'hook'. Afterwards the spit resumes growing in the original direction. Eventually the spit can grow no further, either because it is no longer sufficiently sheltered from erosion by waves or because the estuary current prevents sediment from settling. A salt marsh usually forms in the salty but calm waters behind the spit. Spits also often form around the breakwaters of artificial harbours, which then require dredging.
Occasionally, if there is no estuary then it is possible for the spit to grow across to the other side of the bay and form what is called a bar, or barrier. Barriers come in several varieties, but all form in a manner similar to spits. They usually enclose a bay to form a lagoon. They can join two headlands or join a headland to the mainland. When an island is joined to the mainland with a bar or barrier it is known as a tombolo. This usually occurs due to wave refraction, but can also be caused by isostatic change, a change in the level of the land (e.g. Chesil Beach).
| Physical sciences | Oceanic and coastal landforms | Earth science |
2161429 | https://en.wikipedia.org/wiki/Eigenvalues%20and%20eigenvectors | Eigenvalues and eigenvectors | In linear algebra, an eigenvector or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector v of a linear transformation T is scaled by a constant factor λ when the linear transformation is applied to it: T(v) = λv. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor λ (possibly negative).
Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. Its eigenvectors are those vectors that are only stretched, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or squished. If the eigenvalue is negative, the eigenvector's direction is reversed.
The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all the areas where linear algebra is applied, from geology to quantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is the steady state of the system.
Definition
Consider an n × n matrix A and a nonzero vector v of length n. If multiplying A with v (denoted by Av) simply scales v by a factor of λ, where λ is a scalar, then v is called an eigenvector of A, and λ is the corresponding eigenvalue. This relationship can be expressed as: Av = λv.
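As a quick numerical illustration of this relationship, the following sketch checks that Av = λv holds for a particular vector; the matrix, vector, and eigenvalue are assumed values chosen only for demonstration, not taken from the text.

```python
# Minimal sketch: checking the defining relation A v = lambda v with NumPy.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])    # assumed example matrix
v = np.array([1.0, 1.0])      # candidate eigenvector
lam = 3.0                     # candidate eigenvalue

# If v is an eigenvector of A with eigenvalue lam, then A @ v equals lam * v.
print(np.allclose(A @ v, lam * v))   # True
```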
There is a direct correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space into itself, given any basis of the vector space. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices, or the language of linear transformations.
The following section gives a more general viewpoint that also covers infinite-dimensional vector spaces.
Overview
Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'. Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization.
In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation
T(v) = λv,
referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex.
The example here, based on the Mona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either.
Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d/dx, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as
d/dx (e^(λx)) = λ e^(λx).
Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplication
Av = λv,
where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix, for example by diagonalizing it.
Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them:
The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.
The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace, or the characteristic space of T associated with that eigenvalue.
If a set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis.
History
Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.
In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes. Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix.
In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root), for what is now called eigenvalue; his term survives in characteristic equation. (Augustin Cauchy (1839) "Mémoire sur l'intégration des équations linéaires" (Memoir on the integration of linear equations), Comptes rendus, 8: 827–830, 845–865, 889–907, 931–937. From p. 827: "On sait d'ailleurs qu'en suivant la méthode de Lagrange, on obtient pour valeur générale de la variable principale une fonction dans laquelle entrent avec la variable principale les racines d'une certaine équation que j'appellerai l'équation caractéristique, le degré de cette équation étant précisément l'ordre de l'équation différentielle qu'il s'agit d'intégrer." (One knows, moreover, that by following Lagrange's method, one obtains for the general value of the principal variable a function in which there appear, together with the principal variable, the roots of a certain equation that I will call the "characteristic equation", the degree of this equation being precisely the order of the differential equation that must be integrated.))
Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his 1822 treatise The Analytic Theory of Heat (Théorie analytique de la chaleur). Charles-François Sturm elaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues. This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices.
Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle, and Alfred Clebsch found the corresponding result for skew-symmetric matrices. Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability.
In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.
At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today.
The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis and Vera Kublanovskaya in 1961.
Eigenvalues and eigenvectors of matrices
Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.University of Michigan Mathematics (2016) Math Course Catalogue . Accessed on 2016-03-27.
Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices, which is especially common in numerical and computational applications.
Consider n-dimensional vectors that are formed as a list of n scalars, such as two three-dimensional vectors x and y. These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that
x = λy.
In this case, λ is the common ratio of the corresponding components.
Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A,
Av = w,
or, written out component by component,
wi = Ai1v1 + Ai2v2 + ⋯ + Ainvn for each row i.
If it occurs that v and w are scalar multiples, that is if
Av = w = λv,
then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation () is the eigenvalue equation for the matrix A.
Equation () can be stated equivalently as
(A − λI)v = 0,
where I is the n by n identity matrix and 0 is the zero vector.
Eigenvalues and the characteristic polynomial
Equation () has a nonzero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are the values of λ that satisfy the equation
det(A − λI) = 0.
Using the Leibniz formula for determinants, the left-hand side of equation () is a polynomial function of the variable λ and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)^n λ^n. This polynomial is called the characteristic polynomial of A. Equation () is called the characteristic equation or the secular equation of A.
The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms,
det(A − λI) = (λ1 − λ)(λ2 − λ)⋯(λn − λ),
where each λi may be real but in general is a complex number. The numbers λ1, λ2, ..., λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A.
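As a hedged sketch of this connection, the code below computes the characteristic polynomial coefficients of a small assumed matrix with numpy.poly and confirms that its roots match the eigenvalues returned by numpy.linalg.eigvals.

```python
# Sketch: the eigenvalues of A are the roots of its characteristic polynomial.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # illustrative matrix (an assumption)

coeffs = np.poly(A)               # coefficients of det(lambda*I - A)
roots = np.roots(coeffs)          # roots of the characteristic polynomial
eigvals = np.linalg.eigvals(A)    # eigenvalues computed directly

print(np.sort(roots))             # [1. 3.]
print(np.sort(eigvals))           # [1. 3.]
```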
As a brief example, which is described in more detail in the examples section later, consider the matrix
Taking the determinant of , the characteristic polynomial of A is
Setting the characteristic polynomial equal to zero, it has roots at and , which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation In this example, the eigenvectors are any nonzero scalar multiples of
If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers.
The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs.
Spectrum of a matrix
The spectrum of a matrix is the list of eigenvalues, repeated according to multiplicity; in an alternative notation the set of eigenvalues with their multiplicities.
An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as the spectral radius of the matrix.
Algebraic multiplicity
Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λi)^k divides that polynomial evenly.
Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas equation () factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,
det(A − λI) = (λ1 − λ)^μA(λ1) (λ2 − λ)^μA(λ2) ⋯ (λd − λ)^μA(λd).
If d = n then the right-hand side is the product of n linear terms and this is the same as equation (). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as
1 ≤ μA(λi) ≤ n, with μA(λ1) + μA(λ2) + ⋯ + μA(λd) = n.
If μA(λi) = 1, then λi is said to be a simple eigenvalue. If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue.
Eigenspaces, geometric multiplicity, and the eigenbasis for matrices
Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy equation (),
On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ. In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of .
Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written , then or equivalently . This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if and α is a complex number, or equivalently . This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ.
The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity γA(λ). Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as
γA(λ) = n − rank(A − λI).
Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n.
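A small sketch of the distinction, using an assumed defective 2 × 2 matrix: its eigenvalue 2 has algebraic multiplicity 2 but geometric multiplicity 1, computed here as the nullity of (A − 2I).

```python
# Sketch: algebraic vs. geometric multiplicity for an assumed defective matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

eigvals = np.linalg.eigvals(A)                     # [2., 2.] -> algebraic multiplicity 2
rank = np.linalg.matrix_rank(A - 2.0 * np.eye(2))  # rank of (A - 2I)
geometric_multiplicity = 2 - rank                  # nullity = n - rank
print(eigvals, geometric_multiplicity)             # [2. 2.] 1
```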
To prove the inequality , consider how the definition of geometric multiplicity implies the existence of orthonormal eigenvectors , such that . We can therefore find a (unitary) matrix whose first columns are these eigenvectors, and whose remaining columns can be any orthonormal set of vectors orthogonal to these eigenvectors of . Then has full rank and is therefore invertible. Evaluating , we get a matrix whose top left block is the diagonal matrix . This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding on both sides, we get since commutes with . In other words, is similar to , and . But from the definition of , we know that contains a factor , which means that the algebraic multiplicity of must satisfy .
Suppose has distinct eigenvalues , where the geometric multiplicity of is . The total geometric multiplicity of ,
is the dimension of the sum of all the eigenspaces of 's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of . If , then
The direct sum of the eigenspaces of all of 's eigenvalues is the entire vector space .
A basis of can be formed from linearly independent eigenvectors of ; such a basis is called an eigenbasis Any vector in can be written as a linear combination of eigenvectors of .
Additional properties
Let be an arbitrary matrix of complex numbers with eigenvalues . Each eigenvalue appears times in this list, where is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues:
The trace of A, defined as the sum of its diagonal elements, is also the sum of all eigenvalues, tr(A) = λ1 + λ2 + ⋯ + λn (verified numerically in the sketch after this list).
The determinant of A is the product of all its eigenvalues, det(A) = λ1 λ2 ⋯ λn.
The eigenvalues of the kth power of A, i.e. the eigenvalues of A^k, for any positive integer k, are λ1^k, λ2^k, ..., λn^k.
The matrix A is invertible if and only if every eigenvalue is nonzero.
If A is invertible, then the eigenvalues of A^−1 are 1/λ1, 1/λ2, ..., 1/λn, and each eigenvalue's geometric multiplicity coincides. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity.
If A is equal to its conjugate transpose A*, or equivalently if A is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix.
If A is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.
If A is unitary, every eigenvalue has absolute value |λi| = 1.
If A is an n × n matrix and λ1, ..., λn are its eigenvalues, then the eigenvalues of the matrix I + A (where I is the identity matrix) are λ1 + 1, ..., λn + 1. Moreover, if α is a scalar, the eigenvalues of αI + A are λ1 + α, ..., λn + α. More generally, for a polynomial P the eigenvalues of the matrix P(A) are P(λ1), ..., P(λn).
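The sketch below verifies a few of the listed properties numerically for an assumed 2 × 2 matrix; it is only an illustration of the statements above, not a proof.

```python
# Sketch: trace = sum of eigenvalues, determinant = product of eigenvalues,
# and the eigenvalues of A^2 are the squares of the eigenvalues of A.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # assumed example matrix
lam = np.linalg.eigvals(A)

print(np.isclose(np.trace(A), lam.sum()))                  # True
print(np.isclose(np.linalg.det(A), lam.prod()))            # True
print(np.allclose(np.sort(np.linalg.eigvals(A @ A)),
                  np.sort(lam ** 2)))                      # True
```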
Left and right eigenvectors
Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the n × n matrix A in the defining equation, equation (),
Av = λv.
The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply the matrix A. In this formulation, the defining equation is
uA = κu,
where κ is a scalar and u is a 1 × n matrix. Any row vector u satisfying this equation is called a left eigenvector of A and κ is its associated eigenvalue. Taking the transpose of this equation,
A^T u^T = κ u^T.
Comparing this equation to equation (), it follows immediately that a left eigenvector of A is the same as the transpose of a right eigenvector of A^T, with the same eigenvalue. Furthermore, since the characteristic polynomial of A^T is the same as the characteristic polynomial of A, the left and right eigenvectors of A are associated with the same eigenvalues.
Diagonalization and the eigendecomposition
Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A,
Q = [v1 v2 ... vn].
Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue,
AQ = [λ1v1 λ2v2 ... λnvn].
With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then
AQ = QΛ.
Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q^−1,
A = QΛQ^−1,
or by instead left multiplying both sides by Q^−1,
Q^−1AQ = Λ.
A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ.
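A brief sketch of this decomposition with NumPy, using an assumed diagonalizable matrix: the eigenvector matrix Q and the diagonal Λ reconstruct A as QΛQ^−1.

```python
# Sketch of the eigendecomposition A = Q Lambda Q^{-1}.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])              # assumed diagonalizable matrix

eigvals, Q = np.linalg.eig(A)           # columns of Q are right eigenvectors
Lam = np.diag(eigvals)                  # diagonal matrix of eigenvalues

A_reconstructed = Q @ Lam @ np.linalg.inv(Q)
print(np.allclose(A, A_reconstructed))  # True
```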
Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P^−1AP is some diagonal matrix D. Left multiplying both sides by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable.
A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces.
Variational characterization
In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of a Hermitian matrix H is the maximum value of the quadratic form x*Hx / x*x. A value of x that realizes that maximum is an eigenvector.
Matrix examples
Two-dimensional matrix example
Consider the matrix
A = [ 2 1 ]
    [ 1 2 ].
The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy equation (), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues.
Taking the determinant to find the characteristic polynomial of A,
det(A − λI) = (2 − λ)^2 − 1 = λ^2 − 4λ + 3.
Setting the characteristic polynomial equal to zero, it has roots at λ = 1 and λ = 3, which are the two eigenvalues of A.
For λ = 1, equation () becomes
(A − I)v = 0.
Any nonzero vector with v1 = −v2 solves this equation. Therefore,
vλ=1 = (1, −1)^T
is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector.
For λ = 3, equation () becomes
(A − 3I)v = 0.
Any nonzero vector with v1 = v2 solves this equation. Therefore,
vλ=3 = (1, 1)^T
is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector.
Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues and , respectively.
Three-dimensional matrix example
Consider the matrix
The characteristic polynomial of A is
The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors and or any nonzero multiple thereof.
Three-dimensional matrix example with complex eigenvalues
Consider the cyclic permutation matrix
This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ^3, whose roots are
λ1 = 1, λ2 = −1/2 + i√3/2, λ3 = −1/2 − i√3/2,
where i is the imaginary unit with i^2 = −1.
For the real eigenvalue λ1 = 1, any vector with three equal nonzero entries is an eigenvector. For example,
For the complex conjugate pair of imaginary eigenvalues,
Then
and
Therefore, the other two eigenvectors of A are complex and are and with eigenvalues λ2 and λ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair,
Diagonal matrix example
Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix
The characteristic polynomial of A is
which has the roots , , and . These roots are the diagonal elements as well as the eigenvalues of A.
Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors,
respectively, as well as scalar multiples of these vectors.
Triangular matrix example
A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal.
Consider the lower triangular matrix,
The characteristic polynomial of A is
which has the roots , , and . These roots are the diagonal elements as well as the eigenvalues of A.
These eigenvalues correspond to the eigenvectors,
respectively, as well as scalar multiples of these vectors.
Matrix with repeated eigenvalues example
As in the previous example, the lower triangular matrix
has a characteristic polynomial that is the product of its diagonal elements,
The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the order of the characteristic polynomial and the dimension of A.
On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector . The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section.
Eigenvector-eigenvalue identity
For a Hermitian matrix, the norm squared of the jth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix,
where is the submatrix formed by removing the jth row and column from the original matrix. This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature.
Eigenvalues and eigenfunctions of differential operators
The definitions of eigenvalue and eigenvector of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation
The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions.
Derivative operator example
Consider the derivative operator d/dt with eigenvalue equation
d/dt f(t) = λ f(t).
This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function
f(t) = f(0) e^(λt),
is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant.
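As a symbolic check of this example (a sketch using SymPy; the symbol names are assumptions), one can verify that f(t) = e^(λt) satisfies the eigenvalue equation of the derivative operator.

```python
# Sketch: d/dt of exp(lambda*t) equals lambda * exp(lambda*t).
import sympy as sp

t, lam = sp.symbols('t lambda')
f = sp.exp(lam * t)

print(sp.simplify(sp.diff(f, t) - lam * f))   # 0, i.e. d/dt f = lambda * f
```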
The main eigenfunction article gives other examples.
General definition
The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V,
We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that
T(v) = λv.
This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v.
Eigenspaces, geometric multiplicity, and the eigenbasis
Given an eigenvalue λ, consider the set
E = { v ∈ V : T(v) = λv },
which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ.
By definition of a linear transformation,
for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then
So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V.
If that subspace has dimension 1, it is sometimes called an eigenline.
The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue. By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector.
The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.
Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable.
Spectral theory
If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)−1 does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue.
For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.
Associative algebras and representation theory
One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory.
The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively.
Hecke eigensheaf is a tensor-multiple of itself and is considered in Langlands correspondence.
Dynamic equations
The simplest difference equations have the form
x_t = a_1 x_(t−1) + a_2 x_(t−2) + ⋯ + a_k x_(t−k).
The solution of this equation for x in terms of t is found by using its characteristic equation
λ^k − a_1 λ^(k−1) − a_2 λ^(k−2) − ⋯ − a_(k−1) λ − a_k = 0,
which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k − 1 trivial equations for the lagged values, giving a k-dimensional system of the first order in the stacked variable vector (x_t, x_(t−1), ..., x_(t−k+1)) in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots λ_1, ..., λ_k for use in the solution equation
x_t = c_1 λ_1^t + ⋯ + c_k λ_k^t.
A similar procedure is used for solving a differential equation of the form
Calculation
The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice.
Classical method
The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetics such as floating-point.
Eigenvalues
The eigenvalues of a matrix can be determined by finding the roots of the characteristic polynomial. This is easy for matrices, but the difficulty increases rapidly with the size of the matrix.
In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy. However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial). Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an n × n matrix is a sum of n! different products.
Explicit algebraic formulas for the roots of a polynomial exist only if the degree is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degree n is the characteristic polynomial of some companion matrix of order n.) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical.
Eigenvectors
Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, which becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix
we can find its eigenvectors by solving the equation (A − 6I)v = 0, that is
This matrix equation is equivalent to two linear equations
that is
Both equations reduce to the single linear equation . Therefore, any vector of the form , for any nonzero real number , is an eigenvector of with eigenvalue .
The matrix above has another eigenvalue . A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of , that is, any vector of the form , for any nonzero real number .
Simple iterative methods
The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by (A − μI)^−1; this causes it to converge to an eigenvector of the eigenvalue closest to μ.
If v is (a good approximation of) an eigenvector of A, then the corresponding eigenvalue can be computed as
λ = (v* A v) / (v* v),
where v* denotes the conjugate transpose of v.
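A minimal sketch of the iteration just described, assuming a small symmetric test matrix: the vector is repeatedly multiplied by A and normalized, and the Rayleigh quotient then recovers the dominant eigenvalue.

```python
# Sketch of the power method with a Rayleigh-quotient eigenvalue estimate.
import numpy as np

def power_method(A, num_iterations=1000):
    v = np.random.default_rng(0).random(A.shape[0])  # arbitrary starting vector
    for _ in range(num_iterations):
        v = A @ v
        v = v / np.linalg.norm(v)                    # keep the entries a reasonable size
    eigenvalue = (v.conj() @ A @ v) / (v.conj() @ v) # Rayleigh quotient
    return eigenvalue, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                           # assumed test matrix
lam, v = power_method(A)
print(lam)                                           # approximately 3, the dominant eigenvalue
```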
Modern methods
Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961. Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm. For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.
Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed.
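In practice one usually calls a library routine rather than implementing these methods by hand. The sketch below shows two common entry points (the matrices are assumptions chosen for illustration): NumPy's dense solver and SciPy's sparse Hermitian solver, which is based on a Lanczos-type method.

```python
# Sketch: dense and sparse eigenvalue routines.
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as spla

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
dense_vals, dense_vecs = np.linalg.eig(A)                  # dense, general matrices

S = sparse.diags([1.0, 2.0, 3.0, 4.0])                     # assumed sparse Hermitian example
sparse_vals, sparse_vecs = spla.eigsh(S, k=2, which='LM')  # two largest-magnitude eigenvalues
print(dense_vals, sparse_vals)
```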
Applications
Geometric transformations
Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes.
The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors.
The characteristic equation for a rotation by an angle θ is a quadratic equation with discriminant −4 sin^2(θ), which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, cos θ ± i sin θ; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.
A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues.
Principal component analysis
The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.
Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.
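A compact sketch of PCA via the eigendecomposition of a sample covariance matrix, on randomly generated data (the data and dimensions are assumptions for illustration only):

```python
# Sketch: principal component analysis from the eigenvectors of the covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # 100 observations of 3 variables (made up)
X = X - X.mean(axis=0)                     # center each variable

cov = np.cov(X, rowvar=False)              # 3x3 sample covariance matrix (PSD)
eigvals, eigvecs = np.linalg.eigh(cov)     # symmetric/Hermitian eigensolver

order = np.argsort(eigvals)[::-1]          # sort components by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = X @ eigvecs                       # data expressed in the principal-component basis
print(eigvals)                             # variance explained by each component
```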
Graphs
In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either D − A (sometimes called the combinatorial Laplacian) or I − D^(−1/2) A D^(−1/2) (sometimes called the normalized Laplacian), where D is a diagonal matrix with Dii equal to the degree of vertex vi, and in D^(−1/2), the ith diagonal entry is 1/√(deg vi). The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest or kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.
The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.
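The following is a hedged, PageRank-style sketch on a tiny made-up graph: the adjacency matrix is row-normalized and damped so that a stationary distribution exists, and the principal (left) eigenvector gives the vertex scores. The graph and damping factor are assumptions, not data from the text.

```python
# Sketch: principal eigenvector of a modified, row-normalized adjacency matrix.
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 1, 0]], dtype=float)        # adjacency matrix of an assumed 3-vertex graph
P = A / A.sum(axis=1, keepdims=True)          # row-normalized transition matrix

d = 0.85                                      # damping factor (assumed)
n = A.shape[0]
G = d * P + (1 - d) * np.ones((n, n)) / n     # modification guaranteeing a stationary distribution

vals, vecs = np.linalg.eig(G.T)               # left eigenvectors of G = right eigenvectors of G^T
principal = np.real(vecs[:, np.argmax(np.real(vals))])
ranks = principal / principal.sum()           # normalize to a probability distribution
print(ranks)
```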
Markov chains
A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron–Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state.
Vibration analysis
Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by
m ẍ + k x = 0
or
ẍ = −(k/m) x.
That is, acceleration is proportional to position (i.e., we expect x to be sinusoidal in time).
In n dimensions, m becomes a mass matrix and k a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem
k x = ω^2 m x,
where ω^2 is the eigenvalue and ω is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of k alone. Furthermore, damped vibration, governed by
m ẍ + c ẋ + k x = 0,
leads to a so-called quadratic eigenvalue problem,
(ω^2 m + ω c + k) x = 0.
This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system.
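A short sketch of the undamped generalized eigenvalue problem for a two-degree-of-freedom system; the mass and stiffness matrices (named M and K here) are assumed values, and scipy.linalg.eigh solves K x = ω^2 M x for symmetric K and positive-definite M.

```python
# Sketch: natural frequencies and mode shapes from a generalized eigenvalue problem.
import numpy as np
from scipy.linalg import eigh

M = np.array([[2.0, 0.0],
              [0.0, 1.0]])                 # assumed mass matrix
K = np.array([[ 6.0, -2.0],
              [-2.0,  4.0]])               # assumed stiffness matrix

w, modes = eigh(K, M)                      # solves K x = w M x; w = omega^2
natural_frequencies = np.sqrt(w)
print(natural_frequencies)                 # eigenfrequencies
print(modes)                               # columns are the mode shapes
```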
The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution to scalar-valued vibration problems.
Tensor of moment of inertia
In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.
Stress tensor
In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.
Schrödinger equation
An example of an eigenvalue equation where the transformation is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:
where , the Hamiltonian, is a second-order differential operator and , the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue , interpreted as its energy.
However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which and can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form.
The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by . In this notation, the Schrödinger equation is:
where is an eigenstate of and represents the eigenvalue. is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above is understood to be the vector obtained by application of the transformation to .
Wave transport
Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrix . The eigenvectors of the transmission operator form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. The eigenvalues, , of correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution with and . Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels.
Molecular orbitals
In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called Roothaan equations.
Geology and glaciology
In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast's fabric can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as a stereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram. A stereographic projection projects 3-dimensional space onto a two-dimensional plane. One type of stereographic projection is the Wulff net, which is commonly used in crystallography to create stereograms.
The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors v1, v2, and v3 are ordered by their eigenvalues E1 ≥ E2 ≥ E3;
then v1 is the primary orientation/dip of the clast, v2 is the secondary and v3 is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E1, E2, and E3 are dictated by the nature of the sediment's fabric. If E1 = E2 = E3, the fabric is said to be isotropic. If E1 = E2 > E3, the fabric is said to be planar. If E1 > E2 > E3, the fabric is said to be linear.
Basic reproduction number
The basic reproduction number (R0) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then R0 is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, tG, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time tG has passed. The value of R0 is then the largest eigenvalue of the next generation matrix.
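A minimal numerical sketch: given an assumed 2 × 2 next-generation matrix for two host groups, R0 is its spectral radius (the largest eigenvalue in absolute value). The matrix entries are invented for illustration.

```python
# Sketch: R0 as the largest eigenvalue of a next-generation matrix.
import numpy as np

next_generation = np.array([[1.2, 0.4],
                            [0.3, 0.8]])   # assumed expected secondary infections between groups

eigvals = np.linalg.eigvals(next_generation)
R0 = max(abs(eigvals))                     # spectral radius
print(R0)                                  # R0 > 1 means the infection can spread
```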
Eigenfaces
In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research has also been done on eigen vision systems for determining hand gestures.
Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.
| Mathematics | Algebra | null |
2165266 | https://en.wikipedia.org/wiki/Ceprano%20Man | Ceprano Man | Ceprano Man, Argil, and Ceprano Calvarium, is a Middle Pleistocene archaic human fossil, a single skull cap (calvarium), accidentally unearthed in a highway construction project in 1994 near Ceprano in the Province of Frosinone, Italy. It was initially considered Homo cepranensis, Homo erectus, or possibly Homo antecessor; but in recent studies, most regard it either as a form of Homo heidelbergensis sharing affinities with African forms, or an early morph of Neanderthal.
History
During excavation in preparation for a highway on March 13, 1994, in the Campo Grande area near Ceprano, Italy, a partial hominin calvaria was discovered. Although damaged by a bulldozer, it was recognized, documented, and described by archeologist Italo Biddittu, who happened to be present when the fossil was unearthed. Mallegni et al. (2003) proposed a new human species, dubbed Homo cepranensis, based on the fossil. Because the specimen was thought to be around 700 ka old, they argued that it and Homo antecessor pointed to a wave of dispersals into Europe 0.9–0.8 Ma through Iberia and the Middle East. More recent assessments associate it with Homo heidelbergensis or H. rhodesiensis, or regard it as ancestral to Neanderthals.
Dating
The fossil was first estimated to be between 690,000 and 900,000 years old, on the basis of regional correlations and a series of absolute dates. Taking the circumstances of the recovery of the fossil into account, Ascenzi (2001) noted that "an age between 800 and 900 ka is at present our best chronological estimate", based on "the absence in the sediments containing the cranium of any leucitic remnants of the more recent volcanic activity known in the region . . . and the presence above the cranium itself of a clear stratigraphic unconformity that marks . . .". After clarification of its geostratigraphic, biostratigraphic and archaeological relation to the well-known and nearby Acheulean site of Fontana Ranuccio, dated to , Muttoni et al. (2009) suggested that Ceprano is most likely about 450,000 years old. Manzi et al. (2010) agree with this, citing an age between 430 and 385 ka.
Segre and Mallegni (2012) strongly maintain that the skull is 900–800 ka old and is not the same age as the clay in which it was found. Di Vincenzo et al. (2017) explain that this position rests on the belief that the skull was secondarily deposited into younger strata, though they conclude otherwise based on renewed analysis and the context of the find. They note that the lack of gnawing, weathering, or abrasion induced by transport supports the theory that the skull was buried only once, by rising and falling water levels, as evidenced by the pedofeatures of the clay in which it was found. This would have dispersed the remaining skeleton and rapidly filled the cranium for fossilization.
Description
The reconstruction of the skull made in 2000 by Clarke, and tweaked by M.A. de Lumley and Mallegni, involved repositioning the parietal, removing dental plaster, establishing the midsagittal plane, adding the two previously missing zygomatic frontal processes, adding an occipital fragment, and removing unnecessary plaster and glue reinforcements. Di Vincenzo et al. (2017) provided a virtual reconstruction in which all plaster and glue were removed and the remains were repositioned to fit their life position as closely as possible. They noticed misplacement and misalignment in the temporo-parietal region, left mastoid process, and occipital squama, and worked to correct some taphonomic distortion through retrodeformation and other methods. Most of this work is reflected in the vault rather than the face, and most of the peculiar aspects of the skull are now gone. For example, the single autapomorphy used to distinguish it as a new, valid species is a foreshortened vault, which, when compared to the new reconstruction, appears typical of H. heidelbergensis.
Paleopathology
The specimen preserves several injuries. The first is a deep, wide recess infiltrating the left greater wing of the sphenotemporal suture on the sphenoidal sinus, which was found but not reported early in Ceprano's literature history. The second is a healed depression on the right brow. This was probably caused by an altercation with a large animal, in which the skull was butted and fractured; this is more plausible than another, more popular explanation that the blow was inflicted by another human wielding a stick (thus being, hypothetically, murder). Researchers hypothesized that the individual was a young adult man (gender stated without evidence) whose activities consisted of hunting for themself or the group, and who was "bold and aggressive" based on the accumulation of injuries. The fracture healed, suggesting that it did not cause death, and the congenital malformation on the skull was not restrictive or painful enough to limit the subject's physical abilities.
Classification
Ascenzi et al. (1996) argued that the specimen's similarity to Chinese H. erectus, and the fact that assignment to Homo heidelbergensis would rest only on provenance (as the Mauer mandible cannot be directly compared with Ceprano), could not justify attribution to any other species. Ascenzi and Segre (1997) compared an early cranial reconstruction with the Gran Dolina fossils and concluded that it was "late Homo erectus", being one of the latest occurrences of the species and the earliest Italian hominin. They supported this allocation with vault profile data and metrics. However, Ascenzi and Segre also considered specimens such as Montmaurin, Arago, Petralona and Vertesszolos to be H. erectus or a similar taxon. They suggested that the Tighenif No. 3 mandible is a good fit for the skull, and hinted that a connection between it and North Africa may be evident.
Clarke (2000) suggested that inconsistencies with minimal frontal breadth may reflect individual or geographic variation, and are not taxonomically informative. Ascenzi et al. (2000) followed the cranial reconstruction by Clarke (2000) and the modifications by M.A. de Lumley to reinforce assignment to H. erectus based on the tori, cranial capacity, bone thickness, and occipital profile angle.
Manzi et al. (2001) raised the possibility that it may be an adult Homo antecessor, but did not make the referral on the grounds that no elements from Gran Dolina match Ceprano in age or completeness for direct comparison. They also stated that a less parsimonious explanation would be the accommodation of two contemporary species, as they found the specimen is not referable to Homo erectus, H. ergaster, H. heidelbergensis, or H. rhodesiensis. In fact, they recommended the creation of a new name to represent a transition from late African to early European fossils. They also suggested that Early Pleistocene dispersals carried a new morphology that was later lost, possibly displaced by other Acheulean-using hominins.
Mallegni et al. (2003) noted that the lack of typical Homo heidelbergensis frontal morphology was shared with the Daka specimen, and as such the two were recovered as sister individuals in their cladistic analysis. They proposed that the Bouri population was the source for later European populations, and that the resulting species did not contribute to the genomes of later Middle Pleistocene (MP) hominins. They also suggested that similarity with Homo rhodesiensis fossils may reflect an ancestor–descendant phylogenetic relationship; and, since the fossil appeared so distinct, they named Homo cepranensis with the calvarium as the holotype and only specimen.
Bruner et al. (2007) recognized that the specimen exhibits a mix of early African and later European features, enough for it to be potentially distinct or, alternatively, considered ancestral to Homo heidelbergensis. However, they cautioned other workers that no direct comparisons can yet be made owing to the incompleteness of the fossil record. Mounier et al. (2011) identified the fossil as "an appropriate ancestral stock of [H. heidelbergensis] . . . preceding the appearance of regional autapomorphic features." They suggested that the specimen could be "an appropriate 'counterpart'" to the current, inadequate holotype due to its preservation and morphology. They also suggested it was ancestral to Neanderthals.
Segre and Mallegni (2012) retain the use of Homo cepranensis and dispute the redating of the site. Freidline et al. (2012) follow the opinion of Guipert (2005), who digitally reconstructed several hominin fossils exhibiting extreme degrees of distortion, including the cranial remains from Arago. In their results, both teams drew the similar conclusion that the Ceprano calvarium and the Arago hominins are closest in morphology.
Manzi (2016) suggested that the species Homo heidelbergensis is the best descriptor for the calvarium, and further proposed two modes of classification. The first uses a single species under that name, with Ceprano retaining ancestral characters, though subspecific distinctions may be made. The second incorporates this, using the following subspecies: H. h. heidelbergensis (Ceprano, Mauer, Arago, ?Hexian, Melka Kunture 2–3), H. h. daliensis (Dali, Denisova, Jinniushan, Narmada), H. h. rhodesiensis (Broken Hill, Irhoud, Florisbad, Eliye Springs, Ngaloba, Omo Kibish II), and H. h. steinheimensis (Steinheim, Petralona, Reilingen, Swanscombe, Sima). Di Vincenzo et al. (2017) found with their new reconstruction that it is typical of H. heidelbergensis, specifically resembling Broken Hill and Petralona. They suggest that it is ancestral to the neanderthalensis–sapiens–Denisovan clade.
Manzi (2021) elaborates that the specimen represents a lost morphology that lived in a refugium in Italy (much like the Neanderthal from Altamura) and retained plesiomorphic traits for an extended duration. This suite of old traits gave rise to the MP hominin diversity observed, but was eventually absorbed. Manzi again recommends H. h. heidelbergensis for the specimen. The description of the Harbin skull suggests that Ceprano is associated specifically with H. rhodesiensis. Roksandic et al. (2022) considered it for inclusion in their Homo bodoensis, but this name has been argued to be without value and not to comply with ICZN naming conventions. They suggest that it may have contributed to Arago and Petralona, among other specimens.
Technology
Lithics at Ceprano tend to be located higher up in the sequence and in volcanic sediment. Choppers are more common than other lithics at the Castro dei Volschi facies, and overlie the choppers found at Arce Fontana Liri. They are 458–385 ka in age (possibly as young as 200 ka), which was, at the time, much younger than the age assigned to the cranium. It is suggested that these populations dispersed during the late Early Pleistocene with Mode I technologies, and that their morphology was later lost, possibly displaced by other Acheulean-using hominins.
Paleoecology
The Ceprano calvarium was discovered in the Camp Grande area by what is now a highway. It was associated with bone and lithic Acheulean artifacts and faunal remains, such as the straight-tusked elephant (Palaeoloxodon antiquus), the narrow-nosed rhinoceros (Stephanorhinus hemitoechus), Hippopotamus sp., the giant deer Praemegaceros verticornis, the fallow deer Dama clactoniana, beavers, aurochs (Bos primigenius), and the European pond turtle (Emys orbicularis). Pleistocene fossils occur in strata around 50 meters in thickness. The Ceprano basin splits into two sections: one of 22 meters in fluvial-colluvial facies, with gravels and sands intercalated with clays, and one of 24 meters with distinct limno-marsh facies of clays and silts. These deposits are further divided into six groups; the uppermost contains Acheulean artifacts and abundant aurochs and Palaeoloxodon remains. The skull was discovered in gray-green clays above a travertine, with scattered nodular calcium carbonate concretions, mixed with yellow sands and diffused with ferromanganese. The area was probably forested, as the clay corresponds to a fluvial period related to terminal tectonics. The individual lived during MIS 11, a warm stage, at Lirino Lake, which was a refugium for archaic morphologies. The skull was buried once in a perilacustrine environment by rising and falling water, which scattered the rest of the skeleton and filled the cranium.
| Biology and health sciences | Homo | Biology |
2165269 | https://en.wikipedia.org/wiki/Paranthropus%20robustus | Paranthropus robustus | Paranthropus robustus is a species of robust australopithecine from the Early and possibly Middle Pleistocene of the Cradle of Humankind, South Africa, about 2.27 to 0.87 (or, more conservatively, 2 to 1) million years ago. It has been identified in Kromdraai, Swartkrans, Sterkfontein, Gondolin, Cooper's, and Drimolen Caves. Discovered in 1938, it was among the first early hominins described, and became the type species for the genus Paranthropus. However, it has been argued by some that Paranthropus is an invalid grouping and synonymous with Australopithecus, so the species is also often classified as Australopithecus robustus.
Robust australopithecines—as opposed to gracile australopithecines—are characterised by heavily built skulls capable of producing high stresses and bite forces, as well as inflated cheek teeth (molars and premolars). Males had more heavily built skulls than females. P. robustus may have had a genetic susceptibility for pitting enamel hypoplasia on the teeth, and seems to have had a dental cavity rate similar to non-agricultural modern humans. The species is thought to have exhibited marked sexual dimorphism, with males substantially larger and more robust than females. Based on 3 specimens, males may have been tall and females . Based on 4 specimens, males averaged in weight and females . The brain volume of the specimen SK 1585 is estimated to have been 476 cc, and of DNH 155 about 450 cc (for comparison, the brain volume of contemporary Homo varied from 500 to 900 cc). P. robustus limb anatomy is similar to that of other australopithecines, which may indicate a less efficient walking ability than modern humans, and perhaps some degree of arboreality (movement in the trees).
P. robustus seems to have consumed a high proportion of C4 savanna plants. In addition, it may have also eaten fruits, underground storage organs (such as roots and tubers), and perhaps honey and termites. P. robustus may have used bones as tools to extract and process food. It is unclear if P. robustus lived in a harem society like gorillas or a multi-male society like baboons. P. robustus society may have been patrilocal, with adult females more likely to leave the group than males, but males may have been more likely to be evicted as indicated by higher male mortality rates and assumed increased risk of predation to solitary individuals. P. robustus contended with sabertooth cats, leopards, and hyenas on the mixed, open-to-closed landscape, and P. robustus bones probably accumulated in caves due to big cat predation. It is typically found in what were mixed open and wooded environments, and may have gone extinct in the Mid-Pleistocene Transition characterised by the continual prolonging of dry cycles and subsequent retreat of such habitat.
Taxonomy
Research history
Discovery
The first remains, a partial skull including a part of the jawbone (TM 1517), were discovered in June 1938 at the Kromdraai cave site, South Africa, by local schoolboy Gert Terblanche. He gave the remains to South African conservationist Charles Sydney Barlow, who then relayed them to South African palaeontologist Robert Broom. Broom began investigating the site, and, a few weeks later, recovered a right distal humerus (the lower part of the upper arm bone), a proximal right ulna (upper part of a lower arm bone) and a distal phalanx bone of the big toe, all of which he assigned to TM 1517. He also identified a distal toe phalanx which he believed belonged to a baboon, but has since been associated with TM 1517. Broom noted the Kromdraai remains were especially robust compared to other hominins. In August 1938, Broom classified the robust Kromdraai remains into a new genus, as Paranthropus robustus. "Paranthropus" derives from the Ancient Greek , beside or alongside; and , man.
At this point in time, Australian anthropologist Raymond Dart had made the very first claim (quite controversially at the time) of an early ape-like human ancestor in 1924 from South Africa, Australopithecus africanus, based on the Taung child. In 1936, Broom had described "Plesianthropus transvaalensis" (now synonymised with A. africanus) from the Sterkfontein Caves only west from Kromdraai. All these species dated to the Pleistocene and were found in the same general vicinity (now called the "Cradle of Humankind"). Broom considered them evidence of a greater diversity of hominins in the Pliocene from which they and modern humans descended, and consistent with several hominin taxa existing alongside human ancestors.
The Kromdraai taxon, classified as Paranthropus robustus, was later discovered at the nearby Swartkrans Cave in 1948. P. robustus was definitively identified only at Kromdraai and Swartkrans until around the turn of the century, when the species was reported elsewhere in the Cradle of Humankind at Sterkfontein, Gondolin, Cooper's, and Drimolen Caves. The species has not been found outside this small area.
"P. crassidens"
In 1948, at the nearby Swartkrans Cave, Broom described "P. crassidens" (distinct from P. robustus) based on a subadult jaw, SK 6, because Swartkrans and Kromdraai clearly dated to different time intervals based on the diverging animal assemblages in these caves. At this point in time, humans and allies were classified into the family Hominidae, and non-human great apes into "Pongidae"; in 1950, Broom suggested separating early hominins into the subfamilies Australopithecinae (Au. africanus and "Pl. transvaalensis"), "Paranthropinae" (Pa. robustus and "Pa. crassidens"), and "Archanthropinae" ("Au. prometheus"). This scheme was widely criticised for being too liberal in demarcating species. Further, the remains were not firmly dated, and it was debated if there were indeed multiple hominin lineages or if there was only a single one leading to humans. Most prominently, Broom and South African palaeontologist John Talbot Robinson continued arguing for the validity of Paranthropus.
Anthropologists Sherwood Washburn and Bruce D. Patterson were the first to recommend synonymising Paranthropus with Australopithecus in 1951, wanting to limit hominin genera to only that and Homo, and it has since been debated whether or not Paranthropus is a junior synonym of Australopithecus. In the spirit of tightening splitting criteria for hominin taxa, in 1954, Robinson suggested demoting "P. crassidens" to subspecies level as "P. r. crassidens", and also moved the Indonesian Meganthropus into the genus as "P. palaeojavanicus". Meganthropus has since been variously reclassified as a synonym of the Asian Homo erectus, "Pithecanthropus dubius", Pongo (orangutans), and so on, and in 2019 it was again argued to be a valid genus.
In 1949, also in Swartkrans Cave, Broom and Robinson found a mandible which they preliminarily described as "intermediate between one of the ape-men and true man," classifying it as a new genus and species "Telanthropus capensis". Most immediate reactions favoured synonymising "T. capensis" with "P. crassidens", whose remains were already abundantly found in the cave. In 1957, though, Italian biologist Alberto Simonetta moved it to the genus "Pithecanthropus", and Robinson (without giving a specific reason) decided to synonymise it with H. erectus (African H. erectus are sometimes called H. ergaster today). In 1965, South African palaeoanthropologist Phillip V. Tobias questioned whether this classification was completely sound.
By the 21st century, "P. crassidens" had more or less fallen out of use in favour of P. robustus. American palaeoanthropologist Frederick E. Grine is the primary opponent of synonymisation of the two species.
Gigantopithecus
In 1939, Broom hypothesised that P. robustus was closely related to the similarly large-toothed ape Gigantopithecus from Asia (extinct apes were primarily known from Asia at the time), believing Gigantopithecus to have been a hominin. Primarily influenced by the mid-century opinions of Jewish German anthropologist Franz Weidenreich and German-Dutch palaeontologist Ralph von Koenigswald that Gigantopithecus was, respectively, the direct ancestor of the Asian H. erectus or closely related to it, much debate followed over whether Gigantopithecus was a hominin or a non-human ape.
In 1972, Robinson suggested including Gigantopithecus in "Paranthropinae", with the Miocene Pakistani "G. bilaspurensis" (now Indopithecus) as the ancestor of Paranthropus and the Chinese G. blacki. He also believed that they both had a massive build. In contrast, he reported a very small build for A. africanus (which he referred to as "Homo" africanus) and speculated it had some cultural and hunting abilities, being a member of the human lineage, which "paranthropines" lacked. With the popularisation of cladistics by the late 1970s to 1980s, and better resolution on how Miocene apes relate to later apes, Gigantopithecus was entirely removed from Homininae, and is now placed in the subfamily Ponginae with orangutans.
P. boisei
In 1959, another and much more robust australopithecine was discovered in East Africa, P. boisei, and in 1975, the P. boisei skull KNM-ER 406 was demonstrated to have been contemporaneous with the H. ergaster/H. erectus skull KNM-ER 3733 (which is considered a human ancestor). This is generally taken to show that Paranthropus was a sister taxon to Homo, both developing from some Australopithecus species, which at the time only included A. africanus.
In 1979, a year after describing A. afarensis from East Africa, anthropologists Donald Johanson and Tim D. White suggested that A. afarensis was instead the last common ancestor between Homo and Paranthropus, and A. africanus was the earliest member of the Paranthropus lineage or at least was ancestral to P. robustus, because A. africanus inhabited South Africa before P. robustus, and A. afarensis was at the time the oldest known hominin species at roughly 3.5 million years old. Now, the earliest-known South African australopithecine ("Little Foot") dates to 3.67 million years ago, contemporaneous with A. afarensis. The matter is still debated.
It was long assumed that if Paranthropus is a valid genus then P. robustus was the ancestor of P. boisei, but in 1985, anthropologists Alan Walker and Richard Leakey found that the 2.5-million-year-old East African skull KNM-WT 17000—which they assigned to a new species, A. aethiopicus—was ancestral to A. boisei (they considered Paranthropus synonymous with Australopithecus), thus establishing the boisei lineage as beginning long before robustus had existed.
Classification
The genus Paranthropus (otherwise known as "robust australopithecines", in contrast to the "gracile australopithecines") now also includes the East African P. boisei and P. aethiopicus. It is still debated if this is a valid natural grouping (monophyletic) or an invalid grouping of similar-looking hominins (paraphyletic). Because skeletal elements are so limited in these species, their affinities with each other and with other australopithecines are difficult to gauge with accuracy. The jaws are the main argument for monophyly, but jaw anatomy is strongly influenced by diet and environment, and could have evolved independently in P. robustus and P. boisei. Proponents of monophyly consider P. aethiopicus to be ancestral to the other two species, or closely related to the ancestor. Proponents of paraphyly allocate these three species to the genus Australopithecus as A. boisei, A. aethiopicus, and A. robustus. In 2020, palaeoanthropologist Jesse M. Martin and colleagues' phylogenetic analyses reported the monophyly of Paranthropus, but also that P. robustus had branched off before P. aethiopicus (that P. aethiopicus was ancestral to only P. boisei). The exact classification of Australopithecus species with each other is quite contentious.
In 2023, fragmentary genetic material belonging to this species was reported from 2 million year-old teeth, being the oldest genetic evidence to be retrieved from a human.
Anatomy
Head
Skull
Typical of Paranthropus, P. robustus exhibits post-canine megadontia with enormous cheek teeth but human-sized incisors and canines. The premolars are shaped like molars. The enamel thickness on the cheek teeth is relatively on par with that of modern humans, though australopithecine cheek tooth enamel thickens especially at the tips of the cusps, whereas in humans it thickens at the base of the cusps.
P. robustus has a tall face with slight prognathism (the jaw jutted out somewhat). The skulls of males have a well-defined sagittal crest on the midline of the skullcap and inflated cheek bones, which likely supported massive temporal muscles important in biting. The cheeks project so far from the face that, when in top-view, the nose appears to sit at the bottom of a concavity (a dished face). This displaced the eye sockets forward somewhat, causing a weak brow ridge and receding forehead. The inflated cheeks also would have pushed the masseter muscle (important in biting down) forward and pushed the tooth rows back, which would have created a higher bite force on the premolars. The ramus of the jawbone, which connects the lower jaw to the upper jaw, is tall, which would have increased lever arm (and thereby, torque) of the masseter and medial pterygoid muscles (both important in biting down), further increasing bite force.
The well-defined sagittal crest and inflated cheeks are absent in the presumed-female skull DNH 7, so Keyser suggested that male P. robustus may have been more heavily built than females (P. robustus was sexually dimorphic). The Drimolen material, being more basal, is comparatively more gracile and consequently probably had a smaller bite force than the younger Swartkrans and Kromdraai P. robustus. The brows of the former are also rounded off rather than squared, and the sagittal crest of the presumed-male DNH 155 is more posteriorly (towards the back of the head) positioned.
The posterior semicircular canals in the inner ear of SK 46 and SK 47 are unlike those of the apelike Australopithecus or Homo, suggesting different locomotory and head movement patterns, since inner ear anatomy affects the vestibular system (sense of balance). The posterior semicircular canals of modern humans are thought to aid in stabilisation while running, which could mean P. robustus was not an endurance runner.
Brain
Upon describing the species, Broom estimated the fragmentary braincase of TM 1517 as 600 cc, and he, along with South African anthropologist Gerrit Willem Hendrik Schepers, revised this to 575–680 cc in 1946. For comparison, the brain volume of contemporary Homo varied from 500 to 900 cc. A year later, British primatologist Wilfrid Le Gros Clark commented that, since only a part of the temporal bone on one side is known, brain volume cannot be accurately measured for this specimen. In 2001, Polish anthropologist Katarzyna Kaszycka said that Broom quite often artificially inflated brain size in early hominins, and the true value was probably much lower.
In 1972, American physical anthropologist Ralph Holloway measured the skullcap SK 1585, which is missing part of the frontal bone, and reported a volume of about 530 cc. He also noted that, compared to other australopithecines, Paranthropus seems to have had an expanded cerebellum like Homo, echoing what Tobias said while studying P. boisei skulls in 1967. In 2000, American neuroanthropologist Dean Falk and colleagues filled in frontal bone anatomy of SK 1585 using the P. boisei specimens KNM-ER 407, OH 5, and KNM-ER 732, and recalculated the brain volume to about 476 cc. They stated overall brain anatomy of P. robustus was more like that of non-human apes.
In 2020, the nearly complete skull DNH 155 was discovered and was measured to have had a brain volume of 450 cc.
Blood vessels
In 1983, while studying SK 1585 (P. robustus) and KNM-ER 407 (P. boisei, which he referred to as robustus), French anthropologist Roger Saban stated that the parietal branch of the middle meningeal artery originated from the posterior branch in P. robustus and P. boisei instead of the anterior branch as in earlier hominins, and considered this a derived characteristic due to increased brain capacity. It has since been demonstrated that, at least for P. boisei, the parietal branch could originate from either the anterior or posterior branches, sometimes both in a single specimen on opposite sides of the skull.
Regarding the dural venous sinuses, in 1983, Falk and anthropologist Glenn Conroy suggested that, unlike A. africanus or modern humans, all Paranthropus (and A. afarensis) had expanded occipital and marginal (around the foramen magnum) sinuses, completely supplanting the transverse and sigmoid sinuses. They suggested the setup would have increased blood flow to the internal vertebral venous plexuses or internal jugular vein, and was thus related to the reorganisation of the blood vessels supplying the head as an immediate response to bipedalism, which relaxed as bipedalism became more developed. In 1988, Falk and Tobias demonstrated that early hominins (at least A. africanus and P. boisei) could have both occipital/marginal and transverse/sigmoid systems concurrently, or on opposite halves of the skull.
Torso
Few vertebrae are assigned to P. robustus. The only thoracolumbar series (thoracic and lumbar series) preserved belongs to the juvenile SKW 14002, and either represents the 1st to the 4th lumbar vertebrae, or the 2nd to the 5th. SK 3981 preserves a 12th thoracic vertebra (the last in the series), and a lower lumbar vertebra. The 12th thoracic vertebra is relatively elongated, and the articular surface (where it joins with another vertebra) is kidney-shaped. The T12 is more compressed in height than that of other australopithecines and modern apes. Modern humans who suffer from spinal disc herniation often have vertebrae that are more similar to those of chimpanzees than healthy humans. Early hominin vertebrae are similar to those of a pathological human, including the only other 12th thoracic vertebra known for P. robustus, the juvenile SK 853. Conversely, SK 3981 is more similar to those of healthy humans, which could be explained as: SK 3981 is abnormal, the vertebrae took on a more humanlike condition with maturity, or one of these specimens is assigned to the wrong species. The shape of the lumbar vertebrae is much more similar to that of Turkana Boy (H. ergaster/H. erectus) and humans than other australopithecines. The pedicles (which jut out diagonally from the vertebra) of the lower lumbar vertebra are much more robust than in other australopithecines and are within the range of humans, and the transverse processes (which jut out to the sides of the vertebra) indicate powerful iliolumbar ligaments. These could have bearing on the amount of time spent upright compared to other australopithecines.
The pelvis is similar to the pelvises of A. africanus and A. afarensis, but it has a wider iliac blade and smaller acetabulum and hip joint. Like modern humans, the ilium of P. robustus features development of the surface and thickening of the posterior superior iliac spine, which are important in stabilising the sacrum, and indicates lumbar lordosis (curvature of the lumbar vertebrae) and thus bipedalism. The anatomy of the sacrum and the first lumbar vertebra (at least the vertebral arch), preserved in DNH 43, are similar to those of other australopithecines. The pelvis seems to indicate a more-or-less humanlike hip joint consistent with bipedalism, though differences in overall pelvic anatomy may indicate P. robustus used different muscles to generate force and perhaps had a different mechanism to direct force up the spine. This is similar to the condition seen in A. africanus. This could potentially indicate the lower limbs had a wider range of motion than those of modern humans.
Limbs
The distal (lower) humerus of P. robustus falls within the variation of both modern humans and chimps, as the distal humerus is quite similar between humans and chimpanzees. The radius of P. robustus is comparable in form to Australopithecus species. The wrist joint had the same maneuverability as that of modern humans rather than the greater flexion achieved by non-human apes, but the head of radius (the elbow) seems to have been quite capable of maintaining stability when the forearm was flexed like non-human apes. It is possible this reflects some arboreal activity (movement in the trees) as is controversially postulated in other australopithecines. SKX 3602 exhibits robust radial styloid processes near the hand which indicate strong brachioradialis muscles and extensor retinaculae. Like humans, the finger bones are uncurved and have weaker muscle attachment than non-human apes, though the proximal phalanges are smaller than in humans. The intermediate phalanges are stout and straight like humans, but have stouter bases and better developed flexor impressions. The distal phalanges seem to be essentially humanlike. These could indicate a decreased climbing capacity compared to non-human apes and P. boisei. The P. robustus hand is consistent with a humanlike precision grip which would have made possible the production or usage of tools requiring greater motor functions than non-human primate tools.
The femur, as in P. boisei and H. habilis, is flattened anteroposteriorly (on the front and back side). This may indicate a walking gait more similar to early hominins than to modern humans (less efficient gait). Four femora assigned to P. robustus—SK 19, SK 82, SK 97, and SK 3121—exhibit an apparently high anisotropic trabecular bone (at the hip joint) structure, which could indicate reduced mobility of the hip joint compared to non-human apes, and the ability to produce forces consistent with humanlike bipedalism. The femoral head StW 311, which either belongs to P. robustus or early Homo, seems to have habitually been placed in highly flexed positions based on the wearing patterns, which would be consistent with frequent climbing activity. It is unclear if frequent squatting could be a valid alternative interpretation. The textural complexity of the kneecap SKX 1084, which reflects cartilage thickness and thus usage of the knee joint and bipedality, is midway between modern humans and chimps. The big toe bone of P. robustus is not dextrous, which indicates a humanlike foot posture and range of motion, but the more distal ankle joint would have inhibited the modern human toe-off gait cycle. P. robustus and H. habilis may have achieved about the same grade of bipedality.
Size
Broom had noted that the ankle bone and humerus of the holotype TM 1517 were about the same dimensions as those of a modern San woman, and so assumed humanlike proportions in P. robustus. In 1972, Robinson estimated Paranthropus as having been massive. He calculated the humerus-to-femur ratio of P. robustus by using the presumed female humerus of STS 7 and comparing it with the presumed male femur of STS 14. He also had to estimate the length of the humerus using the femur, assuming a similar degree of sexual dimorphism between P. robustus and humans. Comparing the ratio to humans, he concluded that P. robustus was a heavily built species with a height of and a weight of . Consequently, Robinson described its locomotory habits as "a compromise between erectness and facility for quadrupedal climbing." In contrast, he estimated A. africanus (which he called "H." africanus) to have been tall and in weight, and to have also been completely bipedal.
Robinson's estimation of P. robustus size was soon challenged in 1974 by American palaeontologist Stephen Jay Gould and English palaeoanthropologist David Pilbeam, who guessed from the available skeletal elements a weight of about . Similarly, in 1988, American anthropologist Henry McHenry reported much lighter weights as well as notable sexual dimorphism for Paranthropus. McHenry plotted body size vs. the cross-sectional area of the femoral head for a sample of just humans and a sample with all great apes including humans, and calculated linear regressions for each one. Based on the average of these two regressions, he reported an average weight of for P. robustus using the specimens SK 82 and SK 97. In 1991, McHenry expanded his sample size, and also estimated the living size of Swartkrans specimens by scaling down the dimensions of an average modern human to meet a preserved leg or foot element (he considered the arm measurements too variable among hominins to give accurate estimates). At Members 1 and 2, about 35% of the P. robustus leg or foot specimens were the same size as those in a human, 22% in a human, and the remaining 43% bigger than the former but less than a human except for KNM-ER 1464 (an ankle bone). At Member 3, all individuals were consistent with a human. Smaller adults thus seem to have been more common. McHenry also estimated the living height of three P. robustus specimens (male SK 82, male SK 97, and female or subadult SK 3155), by scaling down an average human to meet the estimated size of the preserved femur, as , , and , respectively. Based on just these three, he reported an average height of for P. robustus males and for females.
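The passage above describes a regression-averaging approach: fit one body-mass regression on a human-only reference sample and another on an all-great-apes sample, predict the fossil's mass from its femoral head size with each, and average the two predictions. The sketch below illustrates only that arithmetic; the coefficients, the estimate_body_mass function, and the example femoral head area are hypothetical placeholders, not McHenry's published values.

```python
# A minimal, illustrative sketch of averaging two regression-based body-mass
# estimates, as described in the text. All coefficients are hypothetical
# placeholders, not McHenry's published values.

def estimate_body_mass(femoral_head_area_mm2: float) -> float:
    # Hypothetical human-only regression: mass (kg) = a_h * area + b_h
    a_h, b_h = 0.055, 5.0
    # Hypothetical all-great-apes regression: mass (kg) = a_a * area + b_a
    a_a, b_a = 0.048, 8.0
    mass_from_humans = a_h * femoral_head_area_mm2 + b_h
    mass_from_apes = a_a * femoral_head_area_mm2 + b_a
    # Average the two regression-based estimates
    return (mass_from_humans + mass_from_apes) / 2.0


if __name__ == "__main__":
    # Hypothetical femoral head cross-sectional area (mm^2) for a fossil femur
    print(f"Estimated body mass: {estimate_body_mass(600.0):.1f} kg")
```

Averaging estimates from the two reference samples hedges against the fossil lying closer to either the human or the non-human-ape scaling relationship.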
In 2001, palaeoanthropologist Randall L. Susman and colleagues, using two recently discovered proximal femoral fragments from Swartkrans, estimated an average of for males and for females. If these four proximal femur specimens—SK 82, SK 97, SKW 19, and SK 3121—are representative of the entire species, they said that this degree of sexual dimorphism is greater than what is exhibited in humans and chimpanzees, but less than orangutans and gorillas. Female P. robustus were about the same estimated weight as female H. ergaster/H. erectus in Swartkrans, but they estimated male H. ergaster/H. erectus as much bigger at . In 2012, American anthropologist Trenton Holliday, using the same equation as McHenry on three specimens, reported an average of with a range of . In 2015, biological anthropologist Mark Grabowski and colleagues, using nine specimens, estimated an average of for males and for females.
Palaeobiology
Diet
In 1954, Robinson suggested that the heavily built skull of P. robustus and the resultant exorbitant bite force were indicative of a specialist diet adapted for frequently cracking hard foods such as nuts. Because of this, the predominant model of Paranthropus extinction for the latter half of the 20th century was that they were unable to adapt to the volatile climate of the Pleistocene, unlike the much more adaptable Homo. Subsequent researchers reinforced this model by studying the musculature of the face, dental wearing patterns, and primate ecology. In 1981, English anthropologist Alan Walker, while studying the P. boisei skulls KNM-ER 406 and 729, pointed out that bite force is a measure of not only the total pressure exerted but also the surface area of the tooth over which the pressure is being exerted, and Paranthropus teeth are 4–5 times the size of modern human teeth. Because the chewing muscles are arranged the same way, Walker postulated that the heavy build was instead an adaptation to chew a large quantity of food at the same time. He also found that microwear on 20 P. boisei molar specimens was indistinguishable from patterning recorded in mandrills, chimps, and orangutans. Despite subsequent arguments that Paranthropus were not specialist feeders, the predominant consensus in favour of Robinson's initial model did not change for the remainder of the 20th century.
In 2004, in their review of Paranthropus dietary literature, anthropologists Bernard Wood and David Strait concluded that Paranthropus were most definitely generalist feeders, and that P. robustus was an omnivore. They found that the microwear patterns in P. robustus suggest hard food was infrequently consumed, and therefore the heavy build of the skull was only relevant when eating less desirable fallback foods. Such a strategy is similar to that used by modern gorillas, which can sustain themselves entirely on lower quality fallback foods year-round, as opposed to lighter built chimpanzees (and presumably gracile australopithecines) which require steady access to high quality foods. In 1980, anthropologists Tom Hatley and John Kappelman suggested that early hominins (convergently with bears and pigs) adapted to eating abrasive and calorie-rich underground storage organs (USOs), such as roots and tubers. Since then, hominin exploitation of USOs has gained more support. In 2005, biological anthropologists Greg Laden and Richard Wrangham proposed that Paranthropus relied on USOs as a fallback or possibly primary food source, and noted that there may be a correlation between high USO abundance and hominin occupation.
A 2006 carbon isotope analysis suggested that P. robustus subsisted on mainly C4 savanna plants or C3 forest plants depending on the season, which could indicate either seasonal shifts in diet or seasonal migration from forest to savanna. H. ergaster/H. erectus appears to have consumed about the same proportion of C3 to C4 based foods as P. robustus. P. robustus likely also commonly cracked hard foods such as seeds or nuts, as it had a moderate tooth-chipping rate (about 12% in a sample of 239 individuals, as opposed to little to none for P. boisei). A high cavity rate could indicate honey consumption. Juvenile P. robustus may have relied more on tubers than adults, given the elevated levels of strontium compared to adults in teeth from Swartkrans Cave, which, in the area, was most likely sourced from tubers. Dentin exposure on juvenile teeth could indicate early weaning, or a more abrasive diet than adults which wore away the cementum and enamel coatings, or both. It is also possible juveniles were instead less capable of removing grit from dug-up food rather than purposefully seeking out more abrasive foods.
Social structure
Given the marked anatomical and physical differences with modern great apes, there may be no modern analogue for australopithecine societies, so comparisons drawn with modern primates are highly speculative.
In 2007, anthropologist Charles Lockwood and colleagues pointed out that P. robustus appears to have had pronounced sexual dimorphism, with males notably larger than females. This is commonly correlated with a male-dominated polygamous society, such as the harem society of modern forest-dwelling silverback gorillas where one male has exclusive breeding rights to a group of females. Estimated male-female size disparity in P. robustus is comparable to gorillas (based on facial dimensions), and younger males were less robust than older males (delayed maturity is also exhibited in gorillas). Because the majority of sexed P. robustus specimens are male (or at least presumed male), males seem to have had a higher mortality rate than females. In a harem society, males are more likely to be evicted from the group given higher male–male competition over females, and lone males may have been put at a higher risk of predation. By this hypothesis, a female moving out of her birth group may have spent little time alone and transferred immediately to another established group.
However, in 2011, palaeoanthropologist Sandi Copeland and colleagues studied the strontium isotope ratio of P. robustus teeth from the dolomite Sterkfontein Valley, and found that like other hominins, but unlike other great apes, P. robustus females were more likely to leave their place of birth (patrilocal). This discounts the plausibility of a harem society, which would have resulted in a matrilocal society due to heightened male–male competition. Males did not seem to have ventured very far from the valley, which could either indicate small home ranges, or that they preferred dolomitic landscapes due to perhaps cave abundance or factors related to vegetation growth. Similarly, in 2016, Polish anthropologist Katarzyna Kaszycka rebutted that, among primates, delayed maturity is also exhibited in the rhesus monkey which has a multi-male society, and may not be an accurate indicator of social structure. If P. robustus preferred a savanna habitat, a multi-male society would have been more conducive in defending the troop from predators in the more exposed environment, much like baboons which live in the savanna. Even in a multi-male society, it is still possible that males were more likely to be evicted, explaining male-skewed mortality with the same mechanism.
In 2017, anthropologist Katharine Balolia and colleagues postulated that, because male non-human great apes have a larger sagittal crest than females (particularly gorillas and orangutans), the crest may be influenced by sexual selection in addition to supporting chewing muscles. Further, the size of the sagittal crest (and the gluteus muscles) in male western lowland gorillas has been correlated with reproductive success. Balolia et al. extended their interpretation of the crest to the males of Paranthropus species, with the crest and resultantly larger head (at least in P. boisei) being used for some kind of display. This contrasts with other primates which flash the typically enlarged canines in agonistic display (Paranthropus likely did not do this as the canines are comparatively small), though it is also possible that the crest is only so prominent in male gorillas and orangutans because they require larger temporalis muscles to achieve a wider gape to better display the canines.
Technology
Cave sites in the Cradle of Humankind often have stone and bone tools, with the former attributed to early Homo and the latter generally to P. robustus, as bone tools are most abundant when P. robustus remains far outnumber Homo remains. Australopithecine bone technology was first proposed by Dart in the 1950s with what he termed the "osteodontokeratic culture", which he attributed to A. africanus at Makapansgat dating to 3–2.6 million years ago. These bones are no longer considered to have been tools, and the existence of this culture is not supported. The first probable bone tool was reported by Robinson in 1959 at Sterkfontein Member 5. Excavations led by South African palaeontologist Charles Kimberlin Brain at Swartkrans in the late 1980s and early 1990s recovered 84 similar bone tools, and excavations led by Keyser at Drimolen recovered 23. These tools were all found alongside Acheulean stone tools, except for those from Swartkrans Member 1 which bore Oldowan stone tools. Thus, there are 108 bone tool specimens from the region in total, and possibly an additional two from Kromdraai B. The two stone tools (either "Developed Oldowan" or "Early Acheulean") from Kromdraai B could possibly be attributed to P. robustus, as Homo has not been confidently identified in this layer, though it is possible that the stone tools were reworked (moved into the layer after the inhabitants had died). Bone tools may have been used to cut or process vegetation, process fruits (namely marula fruit), strip tree bark, or dig up tubers or termites. The form of P. robustus incisors appears to be intermediate between H. erectus and modern humans, which could possibly mean it did not have to regularly bite off mouthfuls of a large food item due to preparation with simple tools. The bone tools were typically sourced from the shaft of long bones from medium- to large-sized mammals, but tools sourced from mandibles, ribs, and horn cores have also been found. They were not manufactured or purposefully shaped for a task, but since they display no weathering, and there is a preference displayed for certain bones, raw materials were likely specifically hand picked. This contrasts with East African bone tools which appear to have been modified and directly cut into specific shapes before using.
In 1988, Brain and South African archaeologist A. Sillen analysed the 59,488 bone fragments from Swartkrans Member 3, and found that 270 had been burnt, mainly belonging to medium-sized antelope, but also zebra, warthog, baboon, and P. robustus. They were found across the entire depth of Member 3, so fire was a regular event throughout its deposition. Based on colour and structural changes, they found that 46 were heated to below , 52 to , 45 to , and 127 above this. They concluded that these bones were "the earliest direct evidence of fire use in the fossil record," and compared the temperatures with those achieved by experimental campfires burning white stinkwood, which commonly grows near the cave. Though some bones had cut marks consistent with butchery, they said it was also possible hominins were making fire to scare away predators or for warmth instead of cooking. Because both P. robustus and H. ergaster/H. erectus were found in the cave, they were unsure which species to attribute the fire to. As an alternative to hominin activity, because the bones were not burnt inside the cave, it is possible that they were naturally burnt in cyclically occurring wildfires (dry savanna grass as well as possible guano or plant accumulation in the cave may have left it susceptible to such a scenario), and then washed into what would become Member 3. The now-earliest claim of fire usage is 1.7 million years ago at Wonderwerk Cave, South Africa, made by South African archaeologist Peter Beaumont in 2011, which he attributed to H. ergaster/H. erectus.
Development
Australopithecines are generally considered to have had a faster, apelike growth rate than modern humans largely due to dental development trends. Broadly speaking, the emergence of the first permanent molar in early hominins has been variously estimated anywhere from 2.5 to 4.5 years, which all contrast markedly with the modern human average of 5.8 years. The 1st permanent molar of SK 63, which may have died at 3.4–3.7 years of age, possibly erupted at 2.9–3.2 years. In modern apes (including humans), dental development trajectory is strongly correlated with life history and overall growth rate, but it is possible that early hominins simply had a faster dental trajectory but a slower life history due to environmental factors, such as early weaning age as is exemplified in modern indriid lemurs. In TM 1517, fusion of the elements of the distal humerus (at the elbow joint) occurred before the fusion of the elements in the distal big toe phalanx, much like in chimps and bonobos, but unlike humans, which could also indicate an apelike growth trajectory.
While growing, the front part of the jaw in P. robustus is depository (so it grows) whereas the sides are resorptive (so they recede). For comparison, chimp jaws are generally depository reflecting prognathism, and modern humans resorptive reflecting a flat face. In Paranthropus, this may have functioned to thicken the palate. Unlike other apes and gracile australopithecines, but like humans, the premaxillary suture between the premaxilla and the maxilla (on the palate) formed early in development. At early stages, the P. robustus jawbone was somewhat similar to that of modern humans, but the breadth grew in P. robustus, as to be expected from its incredible robustness in adulthood. By the time the first permanent molar erupts, the body of the mandible and the front jaw broadened, and the ramus of the mandible elongated, diverging from the modern human trajectory. Because the ramus was so tall, it is suggested that P. robustus experienced more anterior face rotation than modern humans and apes. Growth was most marked between the eruptions of the first and second permanent molars, most notably in terms of the distance from the back of the mouth to the front of the mouth, probably to make room for the massive postcanine teeth. Like humans, jaw robustness decreased with age, though it decreased slower in P. robustus. Regardless if P. robustus followed a human or non-human ape dental development timeframe, the premolars and molars would have had an accelerated growth rate to achieve their massive size. In contrast, the presence of perikymata on the incisors and canines (growth lines which typically are worn away after eruption) could indicate these teeth had a reduced growth rate. The tooth roots of P. robustus molars may have grown at a faster rate than gracile australopithecines; the root length of SK 62's 1st molar, which was reaching emergence from the dental alveolus, is about . In contrast, those of other hominins reach after the tooth has emerged not only from the gums (a later stage of dental development). SK 62's growth trajectory is more similar to that of gorillas, whose roots typically measure when emerging from the gums.
Females may have reached skeletal maturity by the time the third molar erupted, but males appear to have continued growing after reaching dental maturity, during which time they become markedly more robust than females (sexual bimaturism). Similarly, male gorillas complete dental development about the same time as females, but continue growing for up to 5 or 6 years; and male mandrills complete dental development before females, but continue growing for several years more. It is debated whether or not P. robustus had a defined growth spurt in terms of overall height during adolescence, an event unique to humans among modern apes.
Life history
In 1968, American anthropologist Alan Mann, using dental maturity, stratified P. robustus specimens from Swartkrans into different ages, and found an average age at death of 17.2 years (they did not necessarily die from old age); the oldest specimen was 30–35 years old. He also reported an average of 22.2 years for A. africanus. Using these, he argued these hominins had a humanlike prolonged childhood. In response, in 1971, biologist Kelton McKinley repeated Mann's process with more specimens, and (including P. boisei) reported an average of 18 years. McKinley agreed with Mann that P. robustus may have had a prolonged childhood. McKinley also speculated that sexual maturity was reached at approximately 11 years because it is about halfway between the averages for chimps (9 years) and humans (13). Based on this, and using a statistical test to maximise the number of children born, he concluded babies were birthed at intervals of 3 to 4 years.
In 1972, after estimating a foetal size of based on an adult female weight of , anthropologist Walter Leutenegger estimated foetal head size at about , similar to a chimp. In 1973, using this and an equation between foetal head size and gestation (assuming foetal growth rate of 0.6 for all mammals), biologist John Frazer estimated a gestation of 300 days for P. robustus. In response, Leutenegger pointed out that apes have highly variable foetal growth rates, and "estimates on gestation periods based on this rate and birth weight are useless."
In 1985, British biologists Paul H. Harvey and Tim Clutton-Brock came up with equations relating body size to life history events for primates, which McHenry applied to australopithecines in 1994. For P. robustus, he reported newborn brain size of 175 cc and weight of , gestation 7.6 months, weaning after 30.1 months of age, maturation age 9.7 years, breeding age 11.4 years, birth interval 45 months, and lifespan 43.3 years. These roughly aligned with other australopithecines and chimps. However, for chimps, he got strongly inaccurate results when compared to actual data for newborn brain size, weaning age, and birth interval, and for humans all metrics except birth interval.
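Since the Harvey and Clutton-Brock equations themselves are not reproduced here, the following is only a minimal sketch of how allometric life-history predictions of this kind generally work: a trait is predicted from adult body mass via a power law fitted on a log-log regression. The predict_life_history_trait function, all coefficients, and the example body mass are hypothetical placeholders, not the published values McHenry applied.

```python
# A minimal, illustrative sketch of allometric life-history prediction:
# trait = a * mass**b. Coefficients below are hypothetical placeholders,
# not Harvey and Clutton-Brock's published values.

def predict_life_history_trait(body_mass_kg: float, a: float, b: float) -> float:
    """Predict a life-history trait from adult body mass using trait = a * mass**b."""
    return a * body_mass_kg ** b


if __name__ == "__main__":
    mass = 40.0  # hypothetical adult female body mass in kg
    gestation_days = predict_life_history_trait(mass, a=90.0, b=0.25)   # hypothetical coefficients
    weaning_months = predict_life_history_trait(mass, a=9.0, b=0.33)    # hypothetical coefficients
    print(f"Predicted gestation: {gestation_days:.0f} days")
    print(f"Predicted weaning age: {weaning_months:.1f} months")
```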
Pathology
Based on a sample of 402 teeth, P. robustus seems to have had a low incidence rate of about 12–16% for tertiary dentin, which forms to repair tooth damage caused by excessive wearing or dental cavities. This is similar to what was found for A. africanus and H. naledi (all three inhabited the Cradle of Humankind at different points in time). In contrast, chimpanzees have an incidence rate of 47%, and gorillas as much as 90%, probably due to a diet with a much higher content of tough plants.
P. robustus seems to have had notably high rates of pitting enamel hypoplasia (PEH), where tooth enamel formation is spotty instead of mostly uniform. In P. robustus, about 47% of baby teeth and 14% of adult teeth were affected, in comparison to about 6.7% and 4.3%, respectively, for the combined teeth of A. africanus, A. sediba, early Homo, and H. naledi. The condition of these holes covering the entire tooth is consistent with the modern human ailment amelogenesis imperfecta. Since circular holes in enamel coverage are uniform in size, only present on the molar teeth, and have the same severity across individuals, the PEH may have been a genetic condition. It is possible that the coding region concerned with thickening enamel also increased the risk of developing PEH.
As many as four P. robustus individuals have been identified as having had dental cavities, indicating a rate similar to non-agricultural modern humans (1–5%). This is odd as P. robustus is thought to have had a diet high in gritty foods, and gritty foods should decrease cavity incidence rate, so P. robustus may have often consumed high-sugar cavity-causing foods. PEH may have also increased susceptibility to cavities. A molar from Drimolen showed a cavity on the tooth root, a rare occurrence in fossil great apes. In order for cavity-creating bacteria to reach this area, the individual would have also presented either alveolar resorption, which is commonly associated with gum disease; or super-eruption of the tooth, which occurs when the tooth becomes worn down and has to erupt a bit more in order to maintain a proper bite, exposing the root in the process. The latter is most likely, and the exposed root seems to have caused hypercementosis to anchor the tooth in place. The cavity seems to have been healing, possibly due to a change in diet or mouth microbiome, or the loss of the adjacent molar.
In a sample of 15 P. robustus specimens, all of them exhibited mild to moderate alveolar bone loss resulting from periodontal disease (the wearing away of the bone which supports the teeth due to gum disease). In contrast, in a sample of 10 A. africanus specimens, three exhibited no pathologies of the alveolar bone. Measuring the distance between the alveolar bone and the cementoenamel junction, P. robustus possibly suffered from a higher rate of tooth-attachment loss, unless P. robustus had a higher cervical height (the slightly narrowed area where the crown meets the root) in which case these two species had the same rate of tooth-attachment loss. If the former is correct, then the difference may be due to different dietary habits, chewing strategies, more pathogenic mouth microflora in P. robustus, or some immunological difference which made P. robustus somewhat more susceptible to gum disease.
While removing the matrix encapsulating TM 1517, Schepers noted a large rock, which would have weighed , that had penetrated the braincase through the parietal bone. He considered this evidence that another individual had killed TM 1517 by launching the rock as a projectile in either defense or attack, but the most parsimonious explanation is that the rock was deposited during the fossilisation process after TM 1517 had died. In 1961, science writer Robert Ardrey noted two small holes about 2.5 cm (an inch) apart on the child skullcap SK 54, and believed this individual had been killed by being struck twice on the head in an assault; in 1970, Brain reinterpreted this as evidence of a leopard attack.
Palaeoecology
The Pleistocene Cradle of Humankind was mainly dominated by the springbok Antidorcas recki, but other antelope, giraffes, and elephants were also seemingly abundant megafauna. The carnivore assemblage comprises the sabertoothed cats Dinofelis spp. and Megantereon spp., and the hyena Lycyaenops silberbergi. Overall, the animal assemblage of the region broadly indicates a mixed, open-to-closed landscape featuring perhaps montane grasslands and shrublands. Australopithecines and early Homo likely preferred cooler conditions than later Homo, as there are no australopithecine sites that were below in elevation at the time of deposition. This would mean that, like chimps, they often inhabited areas with an average diurnal temperature of , dropping to at night.
P. robustus also cohabited the Cradle of Humankind with H. ergaster/H. erectus. In addition, these two species resided alongside Australopithecus sediba, which is known from about 2 million years ago at Malapa. The most recent A. africanus specimen, Sts 5, dates to about 2.07 million years ago, around the arrival of P. robustus and H. erectus. It has been debated whether or not P. robustus would have had symbiotic, neutral, or antagonistic relations with contemporary Australopithecus and Homo. It is possible that South Africa was a refugium for Australopithecus until about 2 million years ago, with the beginning of major climatic variability and volatility, and potentially competition with Homo and Paranthropus.
Fossil-bearing deposits
Swartkrans
At Swartkrans, P. robustus has been identified from Members 1–3. Homo is also found in these deposits, but species identification in Members 1 and 2 is debated between H. ergaster/H. erectus, H. habilis, H. rudolfensis, or multiple species. In total, over 300 P. robustus specimens representing over 130 individuals, predominantly isolated teeth, have been recovered from Swartkrans.
Member 1 and Member 3 have several mammal species in common, making dating by animal remains (biostratigraphy) yield overlapping time intervals. Like the East African Olduvai Bed I (2.03–1.75 million years ago) and Lower Bed II (1.75–1.70 million years ago), Member 1 preserved the antelope Parmularius angusticornis, the wildebeest, and the Cape buffalo. The presence of the Hamadryas baboon and Dinopithecus could mean Members 1–3 were deposited 1.9–1.65 million years ago, though the presence of warthogs suggests some sections of the deposits could date to after 1.5 million years ago. Uranium–lead dating reports intervals of 3.21–0.45 million years ago for Member 1 (a very large error range), 1.65–1.07 million years ago for Member 2, and 1.04–0.62 million years ago for Member 3, though more likely the younger side of the estimate; this could mean P. robustus outlived P. boisei.
Cosmogenic nuclide geochronology has reported much more constrained dates of 2.2–1.8 million years ago for Member 1, and 0.96 million years ago for Member 3. No suitable section of Member 2 could be identified to date.
Sterkfontein
At Sterkfontein, only the specimens StW 566 and StW 569 are firmly assigned to P. robustus, coming from the "Oldowan infill" dating to 2–1.7 million years ago in a section of Member 5. Earlier members yielded A. africanus. In 1988, palaeoanthropologist Ronald J. Clarke suggested StW 505 from the earlier Member 4 was an ancestor to P. robustus. The specimen is still generally assigned to A. africanus, though the Sterkfontein hominins are known to have an exceedingly wide range of variation, and it is debated whether or not the materials represent multiple species instead of just A. africanus.
The appearance of the baboon Theropithecus oswaldi, zebras, lions, ostriches, springhares, and several grazing antelope in Member 5 indicates the predominance of open grasslands, but sediment analysis indicates the cave opening was moist during deposition, which could point to a well-watered wooded grassland.
Kromdraai
At Kromdraai, P. robustus has been unearthed at Kromdraai B, and almost all P. robustus fossils discovered in the cave have been recovered from Member 3 (out of 5 members). A total of 31 specimens representing at least 17 individuals have been recovered. The only potential Homo specimen from Member 3 is KB 5223, but its classification is debated. The ear bones of the juvenile KB 6067 from Member 3 are consistent with those of P. robustus, but the dimensions of the cochlea and oval window better align with the more ancient StW 53 from Sterkfontein Member 4 with undetermined species designation. KB 6067, therefore, may possibly be basal to (more ancient than) other P. robustus specimens, at least those for which ear morphology is known.
Palaeomagnetism suggests Member 3 may date to 1.78–1.6 million years ago, Member 2 to before 1.78 million years ago, and Member 1 to 2.11–1.95 million years ago.
The animal remains of Kromdraai A suggest deposition occurred anywhere between 1.89 and 1.63 million years ago, and the presence of Oldowan or Acheulean tools indicates early Homo activity. The biostratigraphic dating of Kromdraai B is less clear as there are no animal species which are known to have existed in a narrow time interval, and many non-hominin specimens have not been assigned to a species (left at genus level). About 75% of mammalian remains other than P. robustus are monkeys, including leaf-eating colobine monkeys, possibly the earliest record of the Hamadryas baboon, Gorgopithecus, and Papio angusticeps in South Africa. The absence of the baboons T. oswaldi and Dinopithecus could potentially mean Member 3 is older than Sterkfontein Member 5 and Swartkrans Member 1, which, if correct, would invalidate the results from palaeomagnetism, and make these specimens among the oldest representatives of the species.
Gondolin Cave
Gondolin Cave has yielded 3 hominin specimens: a right third premolar assigned to early Homo (G14018), a partial left gracile australopithecine first or second molar (GDA-1), and a robust australopithecine second molar (GDA-2). The first hominin specimen (G14018) was found by German palaeontologist Elisabeth Vrba in 1979, and the other two specimens were recovered in 1997 by, respectively, South African palaeoanthropologist Andre Keyser and excavator L. Dihasu. GDA-2—measuring , an area of —is exceptionally large for P. robustus, which has a recorded maximum of . This falls within the range of P. boisei , so the discoverers assigned it to an indeterminate species of Paranthropus rather than P. robustus.
GDA-2 was found alongside the pig Metridiochoerus andrewsi, which means the tooth must be 1.9–1.5 million years old. Using this and palaeomagnetism, it may date to roughly 1.8 million years ago.
Cooper's Cave
Cooper's Cave was first reported to yield P. robustus remains in 2000 by South African palaeoanthropologists Christine Steininger and Lee Rogers Berger. Specimens include a crushed partial right face (COB 101), three isolated teeth, a juvenile jawbone, and several skull fragments.
The animal remains in the hominin-bearing deposit are similar to those of Swartkrans and Kromdraai A, so the Cooper's Cave deposits may date to 1.87–1.56 million years ago.
Drimolen Cave
Keyser first discovered hominin remains at Drimolen Cave in 1992 and, over the following eight years, oversaw the recovery of 79 P. robustus specimens. Among these are the most complete P. robustus skulls: the presumed female DNH 7 (which also preserves an articulated jawbone with almost all the teeth) and the presumed male DNH 155. The site has also yielded the H. ergaster/H. erectus skull DNH 134. The Drimolen material preserves several basal characteristics relative to the Swartkrans and Kromdraai remains (meaning it may be older).
The site is thought to be roughly 2–1.5 million years old based on animal remains which have also been recovered from Swartkrans Member 1. The animal assemblage is broadly similar to that of Cooper's Cave, meaning they probably are about the same age. In 2020, DNH 152 was palaeomagnetically dated to 2.04–1.95 million years ago, making it the oldest identified P. robustus specimen.
Predation
Australopithecine bones may have accumulated in caves due to large carnivores dragging in carcasses, which was first explored in detail by Brain in his 1981 book The Hunters or the Hunted?: An Introduction to African Cave Taphonomy. The juvenile P. robustus skullcap SK 54 has two puncture marks consistent with the lower canines of the leopard specimen SK 349 from the same deposits. Brain hypothesised that Dinofelis and perhaps also hunting hyenas specialised on killing australopithecines, but carbon isotope analysis indicates these species predominantly ate large grazers, while the leopard, the sabertoothed Megantereon, and the spotted hyena were more likely to have regularly consumed P. robustus. Brain was unsure if these predators actively sought them out and brought them back to the cave den to eat, or inhabited deeper recesses of caves and ambushed them when they entered. Modern-day baboons in this region often shelter in sinkholes especially on cold winter nights, though Brain proposed that australopithecines seasonally migrated out of the Highveld and into the warmer Bushveld, only taking up cave shelters in spring and autumn.
As an antipredator behaviour, baboons often associate themselves with medium-to-large herbivores, most notably impalas, and it is possible that P. robustus as well as other early hominins which lived in open environments did so also, given they are typically associated with an abundance of medium-to-large bovid and horse remains.
Extinction
Though P. robustus was a rather hardy species with a tolerance for environmental variability, it seems to have preferred wooded environments, and, accordingly, most P. robustus remains date to a wet period in South Africa 2–1.75 million years ago conducive to such biomes. The extinction of P. robustus coincided with the Mid-Pleistocene Transition, and the doubling of glacial cycle duration. During glacial events, with more ice locked up at the poles, the tropical rain belt contracted towards the equator, subsequently causing the retreat of wetland and woodland environments. Before the transition, P. robustus populations possibly contracted to certain wooded refuge zones over 21,000-year cycles, becoming regionally extinct in certain areas until the wet phase of the cycle returned, whereupon they would repopulate those zones. The continual prolonging of dry cycles may have caused its extinction, with the last occurrence in the fossil record 1–0.6 million years ago (though more likely 0.9 million years ago). Homo was possibly able to survive by inhabiting a much larger geographical range, making it more likely to find a suitable refuge area during unfavourable climate swings.
However, the geographical range of P. robustus in the fossil record is roughly , whereas the critically endangered eastern gorilla (with the smallest range of any African ape) inhabits , the critically endangered western gorilla , and the endangered chimpanzee . Therefore, the fossil distribution is very unlikely to represent the true range of the species; consequently, P. robustus possibly went extinct much more recently somewhere other than the Cradle of Humankind (Signor–Lipps effect).
| Biology and health sciences | Australopithecines | Biology |
2165275 | https://en.wikipedia.org/wiki/Paranthropus%20boisei | Paranthropus boisei | Paranthropus boisei is a species of australopithecine from the Early Pleistocene of East Africa about 2.5 to 1.15 million years ago. The holotype specimen, OH 5, was discovered by palaeoanthropologist Mary Leakey in 1959 at Olduvai Gorge, Tanzania and described by her husband Louis a month later. It was originally placed into its own genus as "Zinjanthropus boisei", but is now relegated to Paranthropus along with other robust australopithecines. However, it is also argued that Paranthropus is an invalid grouping and synonymous with Australopithecus, so the species is also often classified as Australopithecus boisei.
Robust australopithecines are characterised by heavily built skulls capable of producing high stresses and bite forces, and some of the largest molars with the thickest enamel of any known ape. P. boisei is the most robust of this group. Brain size was about , similar to other australopithecines. Some skulls are markedly smaller than others, which is taken as evidence of sexual dimorphism where females are much smaller than males, though body size is difficult to estimate given that only one specimen, OH 80, definitively provides any bodily elements. The presumed male OH 80 may have been tall and in weight, and the presumed female KNM-ER 1500 tall (though its species designation is unclear). The arm and hand bones of OH 80 and KNM-ER 47000 suggest P. boisei was arboreal to a degree.
P. boisei was originally believed to have been a specialist species of hard foods, such as nuts, due to its heavily built skull, but it was more likely a generalist feeder of predominantly abrasive C4 plants, such as grasses or underground storage organs. Like gorillas, the apparently specialised adaptations of the skull may have only been used with less desirable fallback foods, allowing P. boisei to inhabit a wider range of habitats than gracile australopithecines. P. boisei may have been able to make Oldowan stone tools and butcher carcasses. P. boisei mainly inhabited wet, wooded environments, and coexisted with H. habilis, H. rudolfensis and H. ergaster/erectus. These were likely preyed upon by the large carnivores of the time, including big cats, crocodiles and hyenas.
Research history
Discovery
Palaeoanthropologists Mary and Louis Leakey had conducted excavations in Tanzania since the 1930s, though work was postponed with the start of World War II. They returned in 1951, finding mostly ancient tools and fossils of extinct mammals for the next few years. In 1955, they unearthed a hominin baby canine and large molar tooth in Olduvai Gorge, catalogue ID Olduvai Hominin (OH) 3.
On the morning of July 17, 1959, Louis felt ill and stayed at camp while Mary went out to Bed I's Frida Leakey Gully. Sometime around 11:00 AM, she noticed what appeared to be a portion of a skull poking out of the ground, OH 5. The dig team created a pile of stones around the exposed portion to protect it from further weathering. Active excavation began the following day; they had chosen to wait for photographer Des Bartlett to document the entire process. The partial cranium was fully unearthed August 6, though it had to be reconstructed from its fragments which were scattered in the scree. Louis published a short summary of the find and context the following week.
Louis determined OH 5 to be a subadult or adolescent based on dental development, and he and Mary nicknamed it "Dear Boy". After they reconstructed the skull and jaws, newspapers began referring to it as "Nutcracker Man" due to the large back teeth and jaws which gave it a resemblance to vintage nutcrackers. South African palaeoanthropologist Phillip Tobias, a colleague of the Leakeys, has also received attribution for this nickname. The cranium was taken to Kenya after its discovery and was there until January 1965 when it was placed on display in the Hall of Man at the National Museum of Tanzania in Dar es Salaam.
Other specimens
Louis preliminarily supposed OH 5 was about half a million years old, but in 1965, American geologists Garniss Curtis and Jack Evernden dated OH 5 to 1.75 million years ago using potassium–argon dating of anorthoclase crystals from an overlying tuff (volcanic ash) bed. Such an application of geochronology was unprecedented at the time.
The first identified jawbone, Peninj 1, was discovered at Lake Natron just north of Olduvai Gorge in 1964. Especially from 1966 to 1975, several more specimens revealing facial elements were reported from the Shungura Formation, Ethiopia; Koobi Fora and Chesowanja, Kenya; and Omo and Konso, Ethiopia. Notable specimens include the well-preserved skull KNM-ER 406, found at Koobi Fora in 1970. In 1997, the first specimen with both the skull and jawbone (and also one of the largest specimens), KGA10-525, was discovered in Konso. In 1999, a jawbone was recovered from Malema, Malawi, extending the species' southernmost range over from Olduvai Gorge. The first definitive bodily elements of P. boisei associated with facial elements, OH 80 (isolated teeth with an arm and a leg), were discovered in 2013. Previously, body remains lacking unambiguous diagnostic skull elements had been dubiously assigned to the species, namely the partial skeleton KNM-ER 1500 associated with a small jawbone fragment. In 2015, based on OH 80, American palaeoanthropologist Michael Lague recommended assigning the isolated humerus specimens KNM-ER 739, 1504, 6020 and 1591 from Koobi Fora to P. boisei. In 2020, the first associated hand bones were reported, KNM-ER 47000 (which also includes a nearly complete arm), from Ileret, Kenya.
Naming
The remains were clearly australopithecine (not of the genus Homo), and at the time, the only australopithecine genera described were Australopithecus by Raymond Dart and Paranthropus (the South African P. robustus) by Robert Broom, and there were arguments that Paranthropus was synonymous with Australopithecus. Louis believed the skull had a mix of traits from both genera, briefly listing 20 differences, and so used OH 5 as the basis for the new genus and species "Zinjanthropus boisei" on August 15, 1959. The genus name derives from the medieval term for East Africa, "Zanj", and the specific name was in honour of Charles Watson Boise, the Leakeys' benefactor. He initially considered the name "Titanohomo mirabilis" ("wonderful Titan-like man").
Soon after, Louis presented "Z." boisei to the 4th Pan-African Congress on Prehistory in Léopoldville, Belgian Congo (now Kinshasa, Democratic Republic of the Congo). Dart made his now famous joke, "... what would have happened if [the A. africanus specimen] Mrs. Ples had met Dear Boy one dark night." At the time of discovery, there was resistance to erecting completely new genera based on single specimens, and the Congress largely rejected "Zinjanthropus". In 1960, American anthropologist John Talbot Robinson pointed out that the supposed differences between "Zinjanthropus" and Paranthropus are due to OH 5 being slightly larger, and so recommended the species be reclassified as P. boisei. Louis rejected Robinson's proposal. Following this, it was debated if P. boisei was simply an East African variant of P. robustus until 1967 when South African palaeoanthropologist Phillip V. Tobias gave a far more detailed description of OH 5 in a monograph (edited by Louis). Tobias and Louis still retained "Zinjanthropus", but recommended demoting it to subgenus level as Australopithecus ("Zinjanthropus") boisei, considering Paranthropus to be synonymous with Australopithecus. Synonymising Paranthropus with Australopithecus was first suggested by anthropologists Sherwood Washburn and Bruce D. Patterson in 1951, who recommended limiting hominin genera to only Australopithecus and Homo.
Classification
The genus Paranthropus (otherwise known as "robust australopithecines") typically includes P. boisei, P. aethiopicus and P. robustus. It is debated if Paranthropus is a valid natural grouping (monophyletic) or an invalid grouping of similar-looking hominins (paraphyletic). Because skeletal elements are so limited in these species, their affinities with each other and with other australopithecines are difficult to gauge with accuracy. The jaws are the main argument for monophyly, but such anatomy is strongly influenced by diet and environment, and could in all likelihood have evolved independently in P. boisei and P. robustus. Proponents of monophyly consider P. aethiopicus to be ancestral to the other two species, or closely related to the ancestor. Proponents of paraphyly allocate these three species to the genus Australopithecus as A. boisei, A. aethiopicus and A. robustus.
Before P. boisei was described (and P. robustus was the only member of Paranthropus), Broom and Robinson continued arguing that P. robustus and A. africanus (the then only known australopithecines) were two distinct lineages. However, remains were not firmly dated, and it was debated if there were indeed multiple hominin lineages or if there was only 1 leading to humans. In 1975, the P. boisei skull KNM-ER 406 was demonstrated to have been contemporaneous with the H. ergaster/erectus skull KNM ER 3733, which is generally taken to show that Paranthropus was a sister taxon to Homo, both developing from some Australopithecus species, which at the time only included A. africanus. In 1979, a year after describing A. afarensis from East Africa, anthropologists Donald Johanson and Tim D. White suggested that A. afarensis was instead the last common ancestor between Homo and Paranthropus, and A. africanus was the earliest member of the Paranthropus lineage or at least was ancestral to P. robustus, because A. africanus inhabited South Africa before P. robustus, and A. afarensis was at the time the oldest-known hominin species at roughly 3.5 million years old. Now, the earliest known South African australopithecine ("Little Foot") dates to 3.67 million years ago, contemporaneous with A. afarensis.
Such arguments are based on how one draws the hominin family tree, and the exact classification of Australopithecus species with each other is quite contentious. For example, if the South African A. sediba (which evolved from A. africanus) is considered the ancestor or closely related to the ancestor of Homo, then this could allow for A. africanus to be placed more closely related to Homo than to Paranthropus. This would leave the Ethiopian A. garhi as the ancestor of P. aethiopicus instead of A. africanus (assuming Paranthropus is monophyletic, and that P. aethiopicus evolved at a time in East Africa when only A. garhi existed there).
Because P. boisei and P. aethiopicus are both known from East Africa and P. aethiopicus is only confidently identified from the skull KNM WT 17000 and a few jaws and isolated teeth, it is debated if P. aethiopicus should be subsumed under P. boisei or if the differences, stemming from its more archaic anatomy, justify species distinction. The terms P. boisei sensu lato ("in the broad sense") and P. boisei sensu stricto ("in the strict sense") can be used to respectively include and exclude P. aethiopicus from P. boisei when discussing the lineage as a whole.
P. aethiopicus is the earliest member of the genus, with the oldest remains, from the Ethiopian Omo Kibish Formation, dated to 2.6 million years ago (mya) at the end of the Pliocene. It is possible that P. aethiopicus evolved even earlier, up to 3.3 mya, on the expansive Kenyan floodplains of the time. The oldest P. boisei remains date to about 2.3 mya from Malema. The youngest record of P. boisei comes from Olduvai Gorge (OH 80) about 1.34 mya; however, due to a large gap in the hominin fossil record, P. boisei may have persisted until 1 mya. P. boisei changed remarkably little over its nearly one-million-year existence.
Anatomy
Skull
P. boisei is the most robust of the robust australopithecines, whereas the South African P. robustus is smaller with comparatively more gracile features. The P. boisei skull is heavily built, and features a defined brow ridge, receding forehead, rounded bottom margins of the eye sockets, inflated and concave cheek bones, a thick palate, and a robust and deep jawbone. This is generally interpreted as having allowed P. boisei to resist high stresses while chewing, though the thick palate could instead be a byproduct of facial lengthening. The skull features large rough patches (rugosities) on the cheek and jawbones, and males have pronounced sagittal (on the midline) and temporonuchal (on the back) crests, which indicate a massive masseter muscle (used in biting down) placed near the front of the head (increasing mechanical advantage). This is typically considered to be evidence of a high bite force.
The incisors and canines are reduced, which would hinder biting off chunks of large food pieces. In contrast, the cheek teeth of both sexes are enormous (postcanine megadontia), and the greater surface area would have permitted the processing of larger quantities of food at once. In the upper jaw, the 1st molar averages roughly , the 2nd molar , and the 3rd molar ; in the lower jaw, the 1st molar averages roughly , the 2nd molar , and the 3rd molar . The molars are bunodont, featuring low and rounded cusps. The premolars resemble molars (are molarised), which may indicate P. boisei required an extended chewing surface for processing a lot of food at the same time. The enamel on the cheek teeth is among the thickest of any known ape, which would help resist high stresses while biting.
Brain and sinuses
In a sample of 10 P. boisei specimens, brain size varied from with an average of . However, the lower-end specimen, Omo L338‐y6, is a juvenile, and many skull specimens have a highly damaged or missing frontal bone which can alter brain volume estimates. The brain volume of australopithecines generally ranged from , and for contemporary Homo .
Regarding the dural venous sinuses, in 1983, American neuroanthropologist Dean Falk and anthropologist Glenn Conroy suggested that, unlike A. africanus or modern humans, all Paranthropus (and A. afarensis) had expanded occipital and marginal (around the foramen magnum) sinuses, completely supplanting the transverse and sigmoid sinuses. In 1988, Falk and Tobias demonstrated that hominins can have both an occipital/marginal and transverse/sigmoid systems concurrently or on opposite halves of the skull, such as with the P. boisei specimen KNM-ER 23000.
In 1983, French anthropologist Roger Saban stated that the parietal branch of the middle meningeal artery originated from the posterior branch in P. boisei and P. robustus instead of the anterior branch as in earlier hominins, and considered this a derived characteristic due to increased brain capacity. It has since been demonstrated that the parietal branch could originate from either the anterior or posterior branches, sometimes both in a single specimen on opposite sides of the skull as in KNM-ER 23000 and OH 5.
Postcranium
The wide range of size variation in skull specimens seems to indicate a great degree of sexual dimorphism with males being notably bigger than females. However, it is difficult to predict with accuracy the true dimensions of living males and females due to the lack of definitive P. boisei skeletal remains, save for the presumed male OH 80. Based on an approximation of for the femur before it was broken and using modern humanlike proportions (which is probably an unsafe assumption), OH 80 was about tall in life. For comparison, modern human men and women in the year 1900 averaged and , respectively. The femoral head, the best proxy for estimating body mass, is missing, but using the shaft, OH 80 weighed about assuming humanlike proportions, and using the proportions of a non-human ape. The ambiguously attributed, presumed female femur KNM-ER 1500 is estimated to have been of an individual about tall which would be consistent with the argument of sexual dimorphism, but if the specimen does indeed belong to P. boisei, it would show a limb anatomy quite similar to that of the contemporary H. habilis.
Instead, the OH 80 femur, more like H. erectus femora, is quite thick, features a laterally flattened shaft, and indicates similarly arranged gluteal, pectineal and intertrochanteric lines around the hip joint. Nonetheless, the intertrochanteric line is much more defined in OH 80, the gluteal tuberosity is more towards the midline of the femur, and the mid-shaft in side-view is straighter, which likely reflect some difference in load-bearing capabilities of the leg. Unlike P. robustus, the arm bones of OH 80 are heavily built, and the elbow joint shows similarities to that of modern gibbons and orangutans. This could either indicate that P. boisei used a combination of terrestrial walking as well as suspensory behaviour, or was completely bipedal but retained an ape-like upper body condition from some ancestor species due to a lack of selective pressure to lose them. In contrast, the P. robustus hand is not consistent with climbing. The hand of KNM-ER 47000 shows Australopithecus-like anatomy lacking the third metacarpal styloid process (which allows the hand to lock into the wrist to exert more pressure), a weak thumb compared to modern humans, and curved phalanges (finger bones) which are typically interpreted as adaptations for climbing. Nonetheless, despite lacking a particularly forceful precision grip like Homo, the hand was still dextrous enough to handle and manufacture simple tools.
Palaeobiology
Diet
In 1954, Robinson suggested that the heavily built skull of Paranthropus (at the time only including P. robustus) was indicative of a specialist diet specifically adapted for processing a narrow band of foods. Because of this, the predominant model of Paranthropus extinction for the latter half of the 20th century was that it was unable to adapt to the volatile climate of the Pleistocene, unlike the much more adaptable Homo. It was also once thought P. boisei cracked open nuts and similar hard foods with its powerful teeth, giving OH 5 the nickname "Nutcracker Man".
However, in 1981, English anthropologist Alan Walker found that the microwearing patterns on the molars were inconsistent with a diet high in hard foods, and were effectively indistinguishable from the pattern seen in the molars of fruit-eating (frugivorous) mandrills, chimpanzees and orangutans. The microwearing on P. boisei molars is different from that on P. robustus molars, and indicates that P. boisei, unlike P. robustus, very rarely ever ate hard foods. Carbon isotope analyses report a diet of predominantly C4 plants, such as low quality and abrasive grasses and sedges. Thick enamel is consistent with grinding abrasive foods. The microwear patterns in P. robustus have been thoroughly examined, and suggest that the heavy build of the skull was only relevant when eating less desirable fallback foods. A similar scheme may have been in use by P. boisei. Such a strategy is similar to that used by modern gorillas, which can sustain themselves entirely on lower quality fallback foods year-round, as opposed to lighter built chimps (and presumably gracile australopithecines) which require steady access to high quality foods.
In 1980, anthropologists Tom Hatley and John Kappelman suggested that early hominins (convergently with bears and pigs) adapted to eating abrasive and calorie-rich underground storage organs (USOs), such as roots and tubers. Since then, hominin exploitation of USOs has gained more support. In 2005, biological anthropologists Greg Laden and Richard Wrangham proposed that Paranthropus relied on USOs as a fallback or possibly primary food source, and noted that there may be a correlation between high USO abundance and hominin occupation. In this model, P. boisei may have been a generalist feeder with a predilection for USOs, and may have gone extinct due to an aridity trend and a resultant decline in USOs in tandem with increasing competition with baboons and Homo. Like modern chimps and baboons, australopithecines likely foraged for food in the cooler morning and evening instead of in the heat of the day.
Technology
By the time OH 5 was discovered, the Leakeys had spent 24 years excavating the area for early hominin remains, but had instead recovered mainly other animal remains as well as the Oldowan stone tool industry. Because OH 5 was associated with the tools and processed animal bones, they presumed it was the toolmaker. Attribution of the tools was promptly switched to the bigger-brained H. habilis upon its description in 1964. In 2013, OH 80 was found associated with a mass of Oldowan stone tools and animal bones bearing evidence of butchery. This could potentially indicate P. boisei was manufacturing this industry and ate meat to some degree.
Simple bone tools are also known from the Early Stone Age of Africa. In South Africa, these are unearthed in the Cradle of Humankind and are largely attributed to P. robustus. In East Africa, a few have been encountered at Olduvai Gorge Beds I–IV, occurring over roughly 1.7 to 0.8 million years ago, and are usually made of limb bones and possibly teeth of large mammals, most notably elephants. The infrequency of such large animals at this site may explain the relative rarity of bone tools. The toolmakers were modifying bone in much the same way as they did with stone. Though the Olduvai bone tools are normally ascribed to H. ergaster/erectus, the presence of both P. boisei and H. habilis obfuscates attribution.
Social structure
In 1979, American biological anthropologist Noel T. Boaz noticed that the relative proportions between large mammal families at the Shungura Formation are quite similar to the proportion in modern-day across sub-Saharan Africa. Boaz believed that hominins would have had about the same population density as other large mammals, which would equate to 0.006–1.7 individuals per square kilometre (0.4 square mile). Alternatively, by multiplying the density of either bovids, elephants, or hippos by the percentage of hominin remains out of total mammal remains found at the formation, Boaz estimated a density of 0.001–2.58 individuals per square kilometre. Biologist Robert A. Martin considered population models based on the number of known specimens to be flimsy. In 1981, Martin applied equations formulated by ecologists Alton S. Harestad and Fred L. Bunnel in 1979 to estimate the home range and population density of large mammals based on weight and diet, and, using a weight of , he got: and 0.769 individual per square kilometre if herbivorous; and 0.077 individual if omnivorous; and and 0.0004 individual if carnivorous. For comparison, he calculated and 0.104 individual per square kilometre for omnivorous chimps.
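Boaz's second approach amounts to simple proportional scaling: the living density of a well-studied reference taxon is multiplied by the hominins' share of the recovered mammal assemblage. A minimal Python sketch of that arithmetic is given below; the input numbers are hypothetical placeholders chosen for illustration, not Boaz's actual counts.

# Sketch of Boaz-style proportional density scaling (hypothetical inputs).
def hominin_density(reference_density_per_km2, hominin_specimens, total_mammal_specimens):
    # Scale a reference taxon's density by the hominin share of the fossil assemblage.
    return reference_density_per_km2 * (hominin_specimens / total_mammal_specimens)

# Example: a reference bovid density of 5 animals per km2, hominins forming 2% of the finds.
print(hominin_density(5.0, 20, 1000))  # 0.1 individuals per square kilometre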
A 2017 study postulated that, because male non-human great apes have a larger sagittal crest than females (particularly gorillas and orangutans), the crest may be influenced by sexual selection in addition to supporting chewing muscles. Further, the size of the sagittal crest (and the gluteus muscles) in male western lowland gorillas has been correlated with reproductive success. They extended their interpretation of the crest to the males of Paranthropus species, with the crest and resultantly larger head (at least in P. boisei) being used for some kind of display. This contrasts with other primates which flash their typically enlarged canines in agonistic display (the canines of Paranthropus are comparatively small). However, it is also possible that male gorillas and orangutans require larger temporalis muscles to achieve a wider gape to better display the canines.
Development
Australopithecines are generally considered to have had a faster, apelike growth rate than modern humans largely due to dental development trends. Broadly speaking, the emergence of the first permanent molar in early hominins has been variously estimated anywhere from 2.5 to 4.5 years of age, which all contrast markedly with the modern human average of 5.8 years. The tips of the mesial cusps of the 1st molar (on the side closest to the premolar) of KNM-ER 1820 were at about the same level as the cervix (where the enamel meets the cementum) of its non-permanent 2nd premolar. In baboons, this stage occurs when the 1st molar is about to erupt from the gums. The tooth root is about , which is similar to most other hominins at this stage. In contrast, the root of the P. robustus specimen SK 62 was when emerging through the dental alveolus (an earlier stage of development than gum emergence), so, unless either specimen is abnormal, P. robustus may have had a higher tooth-root formation rate. The specimen's 1st molar may have erupted 2–3 months before death, so possibly at 2.7–3.3 years of age. In modern apes (including humans), dental development trajectory is strongly correlated with life history and overall growth rate, but it is possible that early hominins simply had a faster dental trajectory and slower life history due to environmental factors, such as early weaning age exhibited in modern indriid lemurs.
Palaeoecology
P. boisei remains have been found predominantly in what were wet, wooded environments, such as wetlands along lakes and rivers, wooded or arid shrublands, and semi-arid woodlands, with the exception of the savanna-dominated Malawian Chiwondo Beds. Its abundance likely increased during precession-driven periods of relative humidity while being more rare during intervals of aridity. During the Pleistocene, there seem to have been coastal and montane forests in Eastern Africa. More expansive river valleys, namely the Omo River Valley, may have served as important refuges for forest-dwelling creatures. Being cut off from the forests of Central Africa by a savanna corridor, these East African forests would have promoted high rates of endemism, especially during times of climatic volatility. Australopithecines and early Homo likely preferred cooler conditions than later Homo, as there are no australopithecine sites that were below in elevation at the time of deposition. This would mean that, like chimps, they often inhabited areas with an average diurnal temperature of , dropping to at night.
P. boisei coexisted with H. habilis, H. rudolfensis and H. ergaster/erectus, but it is unclear how they interacted. To explain why P. boisei was associated with Oldowan tools despite not being the tool maker, Louis Leakey and colleagues, when describing H. habilis in 1964, suggested that one possibility was that P. boisei had been killed by H. habilis, perhaps as food. However, when describing P. boisei 5 years earlier, he said, "There is no reason whatever, in this case, to believe that the skull [OH 5] represents the victim of a cannibalistic feast by some hypothetical more advanced type of man." OH 80 seems to have been eaten by a big cat. The leg OH 35, which belongs to either P. boisei or H. habilis, shows evidence of leopard predation. Other likely Oldowan predators of great apes include the hunting hyena Chasmaporthetes nitidula, the sabertoothed cats Dinofelis and Megantereon, and the crocodile Crocodylus anthropophagus.
| Biology and health sciences | Australopithecines | Biology |
2985420 | https://en.wikipedia.org/wiki/Japanese%20units%20of%20measurement | Japanese units of measurement | Traditional Japanese units of measurement or the shakkanhō () is the traditional system of measurement used by the people of the Japanese archipelago. It is largely based on the Chinese system, which spread to Japan and the rest of the Sinosphere in antiquity. It has remained mostly unaltered since the adoption of the measures of the Tang dynasty in 701. Following the 1868 Meiji Restoration, Imperial Japan adopted the metric system and defined the traditional units in metric terms on the basis of a prototype metre and kilogram. The present values of most Korean and Taiwanese units of measurement derive from these values as well.
For a time in the early 20th century, the traditional, metric, and English systems were all legal in Japan. Although commerce has since been legally restricted to using the metric system, the old system is still used in some instances. The old measures are common in carpentry and agriculture, with tools such as chisels, spatulas, saws, and hammers manufactured in sun and bu sizes. Floorspace is expressed in terms of tatami mats, and land is sold on the basis of price in tsubo. Sake is sold in multiples of 1 gō, with the most common bottle sizes being 4 (720 mL) or 10 (1.8 L, isshōbin).
History
Customary Japanese units are a local adaptation of the traditional Chinese system, which was adopted at a very early date. They were imposed and adjusted at various times by local and imperial statutes. The details of the system have varied over time and location in Japan's history.
Japan signed the Treaty of the Metre in 1885, with its terms taking effect in 1886. It received its prototype metre and kilogram from the International Bureau of Weights and Measures in 1890. The next year, a weights and measurements law codified the Japanese system, taking its fundamental units to be the shaku and kan and deriving the others from them. The law codified the values of the traditional and metric units in terms of one another, but retained the traditional units as the formal standard and metric values as secondary.
In 1909, English units were also made legal within the Empire of Japan. Following World War I, the Ministry of Agriculture and Commerce established a Committee for Weights and Measures and Industrial Standards, part of whose remit was to investigate which of Japan's three legal systems should be adopted. Upon its advice, the Imperial Diet established the metric system as Japan's legal standard, effective 1 July 1924, with use of the other systems permitted as a transitional measure. The government and "leading industries" were to convert within the next decade, with others following in the decade after that. Public education—at the time compulsory through primary school—began to teach the metric system. Governmental agencies and the Japanese Weights and Measures Association undertook a gradual course of education and conversion but opposition became vehemently outspoken in the early 1930s. Nationalists decried the "foreign" system as harmful to Japanese pride, language, and culture, as well as restrictive to international trade. In 1933, the government pushed the deadline for the conversion of the first group of industries to 1939; the rest of the country was given until 1954. Emboldened, the nationalists succeeded in having an Investigating Committee for Weights and Measures Systems established. In 1938, it advised that the government should continue to employ the "Shaku–Kan" system alongside the metric one. The next year, the imperial ordinance concerning the transition to the metric system was formally revised, indefinitely exempting real estate and historical objects and treasures from any need for metric conversion. The deadline for compulsory conversion in all other fields was moved back to 31 December 1958.
Following its defeat in World War II, Japan was occupied by America and saw an expanded use of US customary units. Gasoline was sold by the gallon and cloth by the yard. The Diet revisited the nation's measurements and, with the occupation's approval, promulgated a Measurements Law in June 1951 that reaffirmed its intention to continue Japan's metrication, effective on the first day of 1959. An unofficial and ad hoc Metric System Promotion Committee was established by interested scholars, public servants, and businessmen in August 1955, undertaking a public awareness campaign and seeking to accomplish as much of the conversion ahead of schedule as possible. Its first success was the conversion of candy sales in Tokyo department stores from the momme to the gram in September 1956; others followed, with NHK taking the lead in media use.
With the majority of the public now exposed to it since childhood, the metric system became the sole legal measurement system in most fields of Japanese life on 1 January 1959. Redrafting of laws to use metric equivalents had already been accomplished, but conversion of the land registries required until 31 March 1966 to complete. Industry transitioned gradually at its own expense, with compliance sometimes being nominal, as in the case of screws becoming " screws". Since the original fines for noncompliance were around $140 and governmental agencies mostly preferred to wait for voluntary conversion, metric use by December 1959 was estimated at only 85%. Since research showed that individual Japanese did not intend to actually use the metric units when given other options, however, sale and verification of devices marked with non-metric units (such as rulers and tape measures noting shaku and sun) were criminalised after 1961.
Some use of the traditional units continues. Some Japanese describe their weight in terms of kan. Homes continue to be reckoned in terms of tsubo, even on the national census as late as 2005, although the practice was discontinued in 2010. English units continue to be employed in aviation, munitions, and various sports, including golf and baseball.
Length
The base unit of Japanese length is the shaku based upon the Chinese chi, with other units derived from it and changing over time based on its dimensions. The chi was originally a span taken from the end of the thumb to the tip of an outstretched middle finger, but which gradually increased in length to about , just a few centimetres longer than the size of a foot.
As in China and Korea, Japan employed different shaku for different purposes. The "carpentry" shaku (, kanejaku) was used for construction. It was a little longer in the 19th century prior to its metric redefinition. The "cloth" or "whale" shaku (, kujirajaku), named for tailors' and fabric merchants' baleen rulers, was longer and used in measuring cloth. (A longer unit of about 25 cloth shaku was the tan.) Traditional Japanese clothing was reckoned using the "traditional clothing" shaku (, gofukujaku), about longer than the carpentry shaku. The Shōsōin in Nara has ivory 1-shaku rulers, the .
The Japanese ri is now much longer than the Chinese or Korean li, comprising 36 chō, 2160 ken, or 12,960 shaku. A still longer unit was formerly standard in Ise on Honshu and throughout the 9 provinces of Kyushu, which comprised 50 chō, 3000 ken, or 18,000 shaku. The imperial nautical mile of 6080 feet (1853.19 m) was also formerly used by the Japanese in maritime contexts as a "marine ri". A fourth and shorter ri of about 600 m is still evident in some beach names. The "99-Ri" beach at Kujukuri is about 60 km. The "7-Ri" beach at Shichiri is 4.2 km long.
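Because these length units form a purely multiplicative ladder, conversion is a matter of arithmetic. The short Python sketch below assumes the modern statutory value of 1 shaku = 10/33 m (the post-1891 metric definition) together with the relations given above (6 shaku = 1 ken, 60 ken = 1 chō, 36 chō = 1 ri); historical and regional shaku differed slightly, so the figures are only indicative.

# Converting traditional Japanese lengths to metres (assumes 1 shaku = 10/33 m).
SHAKU_IN_METRES = 10 / 33
UNITS_IN_SHAKU = {"shaku": 1, "ken": 6, "cho": 360, "ri": 12960}

def to_metres(value, unit):
    # Express a length given in a traditional unit in metres.
    return value * UNITS_IN_SHAKU[unit] * SHAKU_IN_METRES

print(round(to_metres(1, "ri"), 1))  # about 3927.3 m, i.e. roughly 3.9 km for the standard ri

The longer Ise and Kyushu ri and the short beach ri of about 600 m described above would, of course, require their own conversion factors.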
The traditional units are still used for construction materials in Japan. For example, plywood is usually manufactured in (about ) sheets known in the trade as , or 3 × 6 shaku. Each sheet is about the size of one tatami mat. The thicknesses of the sheets, however, are usually measured in millimetres. The names of these units also live in the name of the bamboo flute , literally "shaku eight", which measures one shaku and eight sun, and the Japanese version of the Tom Thumb story, , literally "one sun boy", as well as in many Japanese proverbs.
Area
The base unit of Japanese area is the tsubo, equivalent to a square ken or 36 square shaku. It is twice the size of the jō, the area of the Nagoya tatami mat. Both units are used informally in discussing real estate floorspace. Due to historical connections, the tsubo is still used as the official base unit of area in Taiwan.
In agricultural contexts, the tsubo is known as the bu. The larger units remain in common use by Japanese farmers when discussing the sizes of fields.
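Since the tsubo is one square ken (36 square shaku), its metric size follows from the same assumed 10/33 m shaku; a brief continuation of the sketch above:

# Area of one tsubo, continuing the length sketch above (assumes 1 shaku = 10/33 m).
ken_in_metres = 6 * (10 / 33)      # one ken, about 1.818 m
tsubo_in_m2 = ken_in_metres ** 2   # one square ken
print(round(tsubo_in_m2, 3))       # about 3.306 square metres; the jo described above is half of this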
Volume
The base unit of Japanese volume is the shō, although the gō now sees more use since it is reckoned as the appropriate size of a serving of rice or sake. Sake and shochu are both commonly sold in large 1800 mL bottles known as , literally "one shō bottle".
The koku is historically important: since it was reckoned as the amount of rice necessary to feed a person for a single year, it was used to compute agricultural output and official salaries. The koku of rice was sometimes reckoned as 3000 "sacks". By the 1940s the shipping koku was of the shipping ton of 40 or 42 cu ft (i.e., ); the koku of timber was about 10 cu ft (); and the koku of fish, like many modern bushels, was no longer reckoned by volume but computed by weight (40 kan). The shakujime of timber was about 12 cu ft () and the taba about 108 ft³ ( or ).
Mass
The base unit of Japanese mass is the kan, although the momme is more common. It is a recognised unit in the international pearl industry. In English-speaking countries, momme is typically abbreviated as mo.
The Japanese form of the Chinese tael was the ryō (). It was customarily reckoned as around 4 or 10 momme but, because of its importance as a fundamental unit of the silver and gold bullion used as currency in medieval Japan, it varied over time and location from those notional values.
Imperial units
Imperial units are sometimes used in Japan. Feet and inches are used for most non-sport bicycles, whose tyre sizes follow a British system; for sizes of magnetic tape and many pieces of computer hardware; for photograph sizes; and for the sizes of electronic displays for electronic devices. Photographic prints, however, are usually rounded to the nearest millimetre and screens are not described in terms of inches but "type" (, gata). For instance, a television whose screen has a 17-inch diagonal is described as a "17-type" () and one with a 32-inch widescreen screen is called a "32-vista-type" ().
| Physical sciences | Measurement systems | Basics and measurement |
2987828 | https://en.wikipedia.org/wiki/Copper%28II%29%20acetate | Copper(II) acetate | Copper(II) acetate, also referred to as cupric acetate, is the chemical compound with the formula Cu(OAc)2 where AcO− is acetate (). The hydrated derivative, Cu2(OAc)4(H2O)2, which contains one molecule of water for each copper atom, is available commercially. Anhydrous copper(II) acetate is a dark green crystalline solid, whereas Cu2(OAc)4(H2O)2 is more bluish-green. Since ancient times, copper acetates of some form have been used as fungicides and green pigments. Today, copper acetates are used as reagents for the synthesis of various inorganic and organic compounds. Copper acetate, like all copper compounds, emits a blue-green glow in a flame.
Structure
Copper acetate hydrate adopts the paddle wheel structure seen also for related Rh(II) and Cr(II) tetraacetates. One oxygen atom on each acetate is bound to one copper atom at 1.97 Å (197 pm). Completing the coordination sphere are two water ligands, with Cu–O distances of 2.20 Å (220 pm). The two copper atoms are separated by only 2.62 Å (262 pm), which is close to the Cu–Cu separation in metallic copper. The two copper centers interact, diminishing the magnetic moment such that, at temperatures below 90 K, Cu2(OAc)4(H2O)2 is essentially diamagnetic. Cu2(OAc)4(H2O)2 was a critical step in the development of modern theories for antiferromagnetic exchange coupling, which ascribe its low-temperature diamagnetic behavior to cancellation of the two opposing spins on the adjacent copper atoms.
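The low-temperature diamagnetism is what an antiferromagnetically coupled pair of spin-1/2 centres predicts: the singlet ground state is nonmagnetic, and the magnetic triplet is depopulated on cooling. As a point of reference rather than a result stated in this article, the molar susceptibility of such a dimer in the Bleaney–Bowers treatment (developed in connection with copper acetate) is commonly written, for the exchange Hamiltonian H = −2J S1·S2, as

\chi_{\text{dimer}} = \frac{2 N_A g^{2} \mu_B^{2}}{k_B T}\left[3 + \exp\!\left(\frac{-2J}{k_B T}\right)\right]^{-1}

with J < 0 for antiferromagnetic coupling, so that the susceptibility falls toward zero at low temperature; sign and factor conventions for J vary between authors.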
Synthesis
Copper(II) acetate is prepared industrially by heating copper(II) hydroxide or basic copper(II) carbonate with acetic acid.
Uses in chemical synthesis
Copper(II) acetate has found some use as an oxidizing agent in organic syntheses. In the Eglinton reaction Cu2(OAc)4 is used to couple terminal alkynes to give a 1,3-diyne:
Cu2(OAc)4 + 2 RC≡CH → 2 CuOAc + RC≡C−C≡CR + 2 HOAc
The reaction proceeds via the intermediacy of copper(I) acetylides, which are then oxidized by the copper(II) acetate, releasing the acetylide radical. A related reaction involving copper acetylides is the synthesis of ynamines, terminal alkynes with amine groups using Cu2(OAc)4. It has been used for hydroamination of acrylonitrile.
It is also an oxidising agent in Barfoed's test.
It reacts with arsenic trioxide to form copper acetoarsenite, a powerful insecticide and fungicide called Paris green.
Related compounds
Heating a mixture of anhydrous copper(II) acetate and copper metal affords copper(I) acetate:
Cu + Cu(OAc)2 → 2 CuOAc
Unlike the copper(II) derivative, copper(I) acetate is colourless and diamagnetic.
"Basic copper acetate" is prepared by neutralizing an aqueous solution of copper(II) acetate. The basic acetate is poorly soluble. This material is a component of verdigris, the blue-green substance that forms on copper during long exposures to atmosphere.
Other uses
A mixture of copper acetate and ammonium chloride is used to chemically color copper with a bronze patina.
Mineralogy
The mineral hoganite is a naturally occurring form of copper(II) acetate. A related mineral, also containing calcium, is paceite. Both are very rare.
| Physical sciences | Acetates | Chemistry |
2988583 | https://en.wikipedia.org/wiki/Calcium%20acetate | Calcium acetate | Calcium acetate is a chemical compound which is a calcium salt of acetic acid. It has the formula Ca(C2H3O2)2. Its standard name is calcium acetate, while calcium ethanoate is the systematic name. An older name is acetate of lime. The anhydrous form is very hygroscopic; therefore the monohydrate (Ca(CH3COO)2•H2O) is the common form.
Production
Calcium acetate can be prepared by soaking calcium carbonate (found in eggshells, or in common carbonate rocks such as limestone or marble) or hydrated lime in vinegar:
CaCO3(s) + 2CH3COOH(aq) → Ca(CH3COO)2(aq) + H2O(l) + CO2(g)
Ca(OH)2(s) + 2CH3COOH(aq) → Ca(CH3COO)2(aq) + 2H2O(l)
Since both reagents would have been available prehistorically, the compound would have been observable as crystals then.
Uses
In kidney disease, blood levels of phosphate may rise (called hyperphosphatemia) leading to bone problems. Calcium acetate binds phosphate in the diet to lower blood phosphate levels.
Calcium acetate is used as a food additive, as a stabilizer, buffer and sequestrant, mainly in candy products under the number E263.
Tofu is traditionally obtained by coagulating soy milk with calcium sulfate. Calcium acetate has been found to be a better alternative; being soluble, it requires less skill and a smaller amount.
Because it is inexpensive, calcium acetate was once a common starting material for the synthesis of acetone before the development of the cumene process:
Ca(CH3COO)2 → CaCO3(s) + (CH3)2CO
A saturated solution of calcium acetate in alcohol forms a semisolid, flammable gel that is much like "canned heat" products such as Sterno. Chemistry teachers often prepare "California Snowballs", a mixture of calcium acetate solution and ethanol. The resulting gel is whitish in color, resembling a snowball and can be lit on fire; it will burn for around 20 minutes.
Natural occurrence
Pure calcium acetate is as yet unknown among minerals. Calcium acetate chloride is listed as a known mineral, but its genesis is anthropogenic (human-generated, as opposed to naturally occurring).
| Physical sciences | Acetates | Chemistry |
2988743 | https://en.wikipedia.org/wiki/Chromium%28II%29%20acetate | Chromium(II) acetate | Chromium(II) acetate hydrate, also known as chromous acetate, is the coordination compound with the formula Cr2(CH3CO2)4(H2O)2. This formula is commonly abbreviated Cr2(OAc)4(H2O)2. This red-coloured compound features a quadruple bond. The preparation of chromous acetate once was a standard test of the synthetic skills of students due to its sensitivity to air and the dramatic colour changes that accompany its oxidation. It exists as the dihydrate and the anhydrous forms.
Cr2(OAc)4(H2O)2 is a reddish diamagnetic powder, although diamond-shaped tabular crystals can be grown. Consistent with the fact that it is nonionic, Cr2(OAc)4(H2O)2 exhibits poor solubility in water and methanol.
Structure
The Cr2(OAc)4(H2O)2 molecule contains two atoms of chromium, two ligated molecules of water, and four acetate bridging ligands. The coordination environment around each chromium atom consists of four oxygen atoms (one from each acetate ligand) in a square, one water molecule (in an axial position), and the other chromium atom (opposite the water molecule), giving each chromium centre an octahedral geometry. The chromium atoms are joined by a quadruple bond, and the molecule has D4h symmetry (ignoring the position of the hydrogen atoms). The same basic structure is adopted by Rh2(OAc)4(H2O)2 and Cu2(OAc)4(H2O)2, although these species do not have such short M–M contacts.
The quadruple bond between the two chromium atoms arises from the overlap of four d-orbitals on each metal with the same orbitals on the other metal: the dz2 orbitals overlap to give a sigma bonding component, the dxz and dyz orbitals overlap to give two pi bonding components, and the dxy orbitals give a delta bond. This quadruple bond is also confirmed by the low magnetic moment and short internuclear distance between the two atoms of 236.2 ± 0.1 pm. The Cr–Cr distances are even shorter, 184 pm being the record, when the axial ligand is absent or the carboxylate is replaced with isoelectronic nitrogenous ligands.
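Filling the bonding combinations just described with the four d-electrons contributed by each Cr(II) centre gives the textbook shorthand for the quadruple bond (a standard summary rather than a formulation taken from the sources cited here):

\sigma^{2}\,\pi^{4}\,\delta^{2}, \qquad \text{bond order} = \tfrac{1}{2}(8 - 0) = 4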
History
Eugène-Melchior Péligot first reported a chromium(II) acetate in 1844. His material was apparently the dimeric Cr2(OAc)4(H2O)2. The unusual structure, as well as that of copper(II) acetate, was uncovered in 1951.
Preparation
The preparation usually begins with reduction of an aqueous solution of a Cr(III) compound using zinc. The resulting blue solution is treated with sodium acetate, which results in the rapid precipitation of chromous acetate as a bright red powder.
2 Cr3+ + Zn → 2 Cr2+ + Zn2+
2 Cr2+ + 4 OAc− + 2 H2O → Cr2(OAc)4(H2O)2
The synthesis of Cr2(OAc)4(H2O)2 has been traditionally used to test the synthetic skills and patience of inorganic laboratory students in universities because the accidental introduction of a small amount of air into the apparatus is readily indicated by the discoloration of the otherwise bright red product. The anhydrous form of chromium(II) acetate, and also related chromium(II) carboxylates, can be prepared from chromocene:
4 RCO2H + 2 Cr(C5H5)2 → Cr2(O2CR)4 + 4 C5H6
This method provides anhydrous derivatives in a straightforward manner.
Because it is so easily prepared, Cr2(OAc)4(H2O)2 is a starting material for other chromium(II) compounds. Also, many analogues have been prepared using other carboxylic acids in place of acetate and using different bases in place of the water.
Applications
Chromium(II) acetate has few practical applications. It has been used to dehalogenate organic compounds such as α-bromoketones and chlorohydrins. The reactions appear to proceed via 1e− steps, and rearrangement products are sometimes observed.
Because the compound is a good reducing agent, it will reduce the O2 found in air and can be used as an oxygen scrubber.
| Physical sciences | Acetates | Chemistry |
6974397 | https://en.wikipedia.org/wiki/Ferret-badger | Ferret-badger | Ferret-badgers are the six species of the genus Melogale, which is the only genus of the monotypic mustelid subfamily Helictidinae.
Bornean ferret-badger (Melogale everetti)
Chinese ferret-badger (Melogale moschata)
Formosan ferret-badger (Melogale subaurantiaca)
Javan ferret-badger (Melogale orientalis)
Burmese ferret-badger (Melogale personata)
Vietnam ferret-badger (Melogale cucphuongensis)
Human impact
The ferret-badger's impact on humans is through the spread of rabies. This has been documented in Taiwan and China, but a lack of prior documentation and research on ferret-badgers has proven a roadblock to further study.
| Biology and health sciences | Mustelidae | Animals |
7139621 | https://en.wikipedia.org/wiki/Shearing%20%28physics%29 | Shearing (physics) | In continuum mechanics, shearing refers to the occurrence of a shear strain, which is a deformation of a material substance in which parallel internal surfaces slide past one another. It is induced by a shear stress in the material. Shear strain is distinguished from volumetric strain, the change in a material's volume in response to stress; the change of angle produced by the sliding is called the angle of shear.
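For small deformations this can be made quantitative: if the top of a layer of thickness h is displaced by Δx parallel to its base, the engineering shear strain, and in the linear-elastic case the associated shear stress, are conventionally written (a standard textbook relation included here for orientation) as

\gamma = \frac{\Delta x}{h} = \tan\theta \approx \theta, \qquad \tau = G\,\gamma

where θ is the angle of shear and G is the shear modulus of the material.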
Overview
Often, the verb shearing refers more specifically to a mechanical process that causes a plastic shear strain in a material, rather than causing a merely elastic one. A plastic shear strain is a continuous (non-fracturing) deformation that is irreversible, such that the material does not recover its original shape. It occurs when the material is yielding. The process of shearing a material may induce a volumetric strain along with the shear strain. In soil mechanics, the volumetric strain associated with shearing is known as Reynolds' dilation if it increases the volume, or compaction if it decreases the volume.
The shear center (also known as the torsional axis) is an imaginary point on a section, where a shear force can be applied without inducing any torsion. In general, the shear center is not the centroid. For cross-sectional areas having one axis of symmetry, the shear center is located on the axis of symmetry. For those having two axes of symmetry, the shear center lies on the centroid of the cross-section.
In some materials such as metals, plastics, or granular materials like sand or soils, the shearing motion rapidly localizes into a narrow band, known as a shear band. In that case, all the sliding occurs within the band while the blocks of material on either side of the band simply slide past one another without internal deformation. A special case of shear localization occurs in brittle materials when they fracture along a narrow band. Then, all subsequent shearing occurs within the fracture. Plate tectonics, where the plates of the Earth's crust slide along fracture zones, is an example of this.
Shearing in soil mechanics is measured with a triaxial shear test or a direct shear test.
| Physical sciences | Solid mechanics | Physics |