id | url | title | text | topic | section | sublist
|---|---|---|---|---|---|---|
82420 | https://en.wikipedia.org/wiki/Sesame | Sesame | Sesame (Sesamum indicum) is a plant in the genus Sesamum, also called benne. Numerous wild relatives occur in Africa and a smaller number in India. It is widely naturalized in tropical regions around the world and is cultivated for its edible seeds, which grow in pods. World production in 2018 was about 6 million tonnes, with Sudan, Myanmar, and India as the largest producers.
Sesame seed is one of the oldest oilseed crops known, domesticated well over 3,000 years ago. Sesamum has many other species, most being wild and native to sub-Saharan Africa. S. indicum, the cultivated type, originated in India. It tolerates drought conditions well, growing where other crops fail. Sesame has one of the highest oil contents of any seed. With a rich, nutty flavor, it is a common ingredient in cuisines around the world. Like other foods, it can trigger allergic reactions in some people and is one of the nine most common allergens outlined by the Food and Drug Administration.
Etymology
The word "sesame" is from Latin sesamum and Greek σήσαμον: sēsamon; which in turn are derived from ancient Semitic languages such as Akkadian šamaššamu. From these roots, words with the generalized meaning "oil, liquid fat" were derived.
The word "benne" was first recorded in English in 1769; it comes from the African American creole Gullah benne, which in turn derives from Malinke bĕne.
Origins and history
Sesame seed is considered to be the oldest oilseed crop known to humanity. The genus has many species, and most are wild and native to sub-Saharan Africa. Sesamum indicum, the cultivated type, originated in India.
Archaeological remnants of charred sesame dating to about 3500–3050 BC show that sesame was domesticated in the Indian subcontinent at least 5,500 years ago. It has been claimed that trading of sesame between Mesopotamia and the Indian subcontinent occurred by 2000 BC. It is possible that the Indus Valley civilization exported sesame oil to Mesopotamia, where it was known as ilu in Sumerian and ellu in Akkadian, similar to the Dravidian words for sesame: Kannada and Malayalam eḷḷu, Tamil eḷ.
Sesame was cultivated in ancient Egypt. Egyptians called it sesemt, and it is included in the list of medicinal drugs in the scrolls of the 1550 BC Ebers Papyrus. Excavations of King Tutankhamen's tomb uncovered baskets of sesame among other grave goods, suggesting that sesame was present in Egypt by 1350 BC. Sesame was grown and pressed to extract oil by at least 750 BC in the empire of Urartu. Others believe it may have originated in Ethiopia.
Historically, sesame was favored for its ability to grow in areas that do not support the growth of other crops. It is a robust crop that needs little farming support—it grows in drought conditions, in high heat, with residual moisture in soil after monsoons are gone or even when rains fail or when rains are excessive. It can be grown by subsistence farmers at the edge of deserts, earning it the name of survivor crop.
Botany
Sesame is an annual plant growing 50 to 100 cm (1.6 to 3.3 ft) tall, with opposite leaves 4 to 14 cm (1.6 to 5.5 in) long with an entire margin; they are broad lanceolate, to 5 cm (2 in) broad, at the base of the plant, narrowing to just 1 cm (0.4 in) broad on the flowering stem. The flowers are tubular, 3 to 5 cm (1.2 to 2 in) long, and vary in colour from white to pink or purple.
The fruit is a capsule, normally pubescent. The length of the fruit capsule varies from 2 to 8 cm (0.8 to 3.1 in), and its width varies between 0.5 and 2 cm (0.2 and 0.8 in); there are four locules. The seeds are either white or black.
Sesame seeds are small. Their sizes vary widely by cultivar. Typically, the seeds are 3 to 4×2×1 mm (0.12 to 0.16×0.08×0.04 in). The seeds are ovate, slightly flattened, and somewhat thinner at the eye of the seed (hilum) than at the opposite end. The mass of 100 seeds is 0.203 g.
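As a quick back-of-envelope illustration of the figures above, here is a minimal Python sketch, assuming the quoted 0.203 g per 100 seeds is representative (real cultivars vary):

```python
# Estimate seed counts from the 100-seed mass quoted above.
mass_per_100_seeds_g = 0.203

seeds_per_gram = 100 / mass_per_100_seeds_g
seeds_per_kilogram = seeds_per_gram * 1000

print(f"~{seeds_per_gram:.0f} seeds per gram")            # ~493
print(f"~{seeds_per_kilogram:,.0f} seeds per kilogram")   # ~492,611
```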
Sesame was described as the species Sesamum indicum by Carl Linnaeus in 1753.
Agriculture
Cultivation
Sesame varieties have adapted to many soil types. The high-yielding crops do best on fertile, well-drained soils with a neutral pH. However, these have a low tolerance for soils with high salt and water-logged conditions. Commercial sesame crops require 90 to 120 frost-free days. Warm conditions above 23 °C (73 °F) favor growth and yields. While sesame crops can grow in poor soils, the best yields come from properly fertilized farms.
Flowering depends on photoperiod and cultivar. The photoperiod also affects the seed's oil content: increased photoperiod increases oil content. The oil content of the seed is inversely proportional to its protein content. Sesame is drought-tolerant, in part due to its extensive root system. However, it requires adequate moisture for germination and early growth. While the crop survives drought and the presence of excess water, the yields are significantly lower in either condition. Moisture levels before planting and flowering affect yield the most. Most commercial cultivars of sesame are intolerant of waterlogging. Rainfall late in the season prolongs growth and increases loss to dehiscence, when the seedpod shatters, scattering the seed. Wind can also cause shattering at harvest.
Processing
Sesame seeds are protected by a capsule that bursts when the seeds are ripe. The time of this bursting, or "dehiscence", tends to vary, so farmers cut plants by hand and place them together in an upright position to continue ripening until all the capsules have opened. The 1943 discovery of an indehiscent mutant (analogous to nonshattering in cereals) led breeders to try to create a high-yield variety that does not drop its seeds. Despite some progress, dehiscence continues to limit production. Agronomists in Israel are working on modern cultivars of sesame that can be harvested by mechanical means.
Since sesame seed is small and flat, it is hard to dry after harvest because the seeds pack closely together, impeding the flow of air in a drying bin. Therefore, the harvested seeds need to be as dry as possible, and then stored at 6% moisture or less. Moist seed stores can rapidly heat up and become rancid.
Production
In 2022, world production of sesame seeds was 6.7 million tonnes, led by Sudan, India, and Myanmar, which together accounted for 41% of the total.
The white and other lighter-colored sesame seeds are common in Europe, the Americas, West Asia, and the Indian subcontinent. The black and darker-colored sesame seeds are mostly produced in China and Southeast Asia.
In the United States most sesame is raised by farmers under contract to Sesaco, which also supplies proprietary seed.
Trade
Japan is the world's largest sesame importer. Sesame oil, particularly from roasted seed, is an important component of Japanese cooking and traditionally the principal use of the seed. China is the second-largest importer of sesame, mostly oil-grade. China exports lower-priced food-grade sesame seeds, particularly to Southeast Asia. Other major importers are the United States, Canada, the Netherlands, Turkey, and France.
Sesame seed is a high-value cash crop. Prices ranged between US$800 and US$1,100 per tonne between 2008 and 2010. Prices depend on perceived quality, based on factors such as the seed's appearance, freedom from impurities, oil content of at least 40%, and sorting by size and colour.
Nutrition
Composition
Dried whole sesame seeds are 5% water, 23% carbohydrates, 50% fat, and 18% protein. In a reference amount of 100 g (3.5 oz), dried sesame seeds supply 570 calories of food energy and are a rich source (20% or more of the Daily Value, DV) of several B vitamins and dietary minerals, such as calcium, iron, and magnesium (all 75% or more of the DV).
The byproduct that remains after oil extraction from sesame seeds, also called sesame oil meal, is rich in protein (35–50%) and is used as feed for poultry and livestock.
As many seeds do, whole sesame seeds contain a significant amount of phytic acid, which is considered an antinutrient in that it binds to certain nutritional elements consumed at the same time, especially minerals, and prevents their absorption by carrying them along as they pass through the small intestine. Heating and cooking reduce the amount of the acid in the seeds. The seeds contain the lignans sesamolin, sesamin, pinoresinol, and lariciresinol.
Health effects
A meta-analysis showed that sesame consumption produced small reductions in both systolic and diastolic blood pressure; another demonstrated improvement in fasting blood glucose and hemoglobin A1c. Sesame oil studies reported a reduction of oxidative stress markers and lipid peroxidation.
Possible harms
Allergy
Sesame can trigger the same allergic reactions, including anaphylaxis, as other food allergens. Cross-reactivity exists between sesame and peanuts, hazelnuts, and almonds. In addition to food products derived from sesame seeds, such as tahini and sesame oil, persons with sesame allergies are encouraged to be aware of foods that may contain sesame, such as baked goods. Individuals allergic to sesame have also been warned that a variety of non-food sources, including cosmetics and skin-care products, may trigger a reaction.
Prevalence of sesame allergy is on the order of 0.1–0.2%, but higher in countries in the Middle East and Asia where consumption is more common as part of traditional diets. In the United States, sesame allergy possibly affects 1.5 million individuals.
Canada requires sesame to be labelled as an allergen. In the European Union, identifying the presence of sesame, along with 13 other foods, either as an ingredient or an unintended contaminant in packaged food is compulsory. In the United States, the FASTER Act mandated labeling from 2023.
Contamination
Contamination by Salmonella, E. coli, pesticides, or other pathogens may occur in large batches of sesame seeds, as in September 2020 when high levels of a common industrial compound, ethylene oxide, were found in a 250-tonne shipment of sesame seeds from India. After detection in Belgium, recalls for dozens of products and stores were issued across the European Union and beyond, affecting some 50 countries. Products with an organic certification were also affected by the contamination. Regular governmental food inspection for sesame contamination, as for Salmonella and E. coli in tahini, hummus, or seeds, has found that poor hygiene practices during processing are common sources and routes of contamination.
Culinary use
Sesame seed is a common ingredient in many cuisines. Sesame seed cookies called benne wafers, both sweet and savory, are popular in places such as Charleston, South Carolina. Sesame seeds, also called benne, were brought into 17th-century colonial America by enslaved West Africans. The whole plant was used in West African cuisine: the seeds thickened soups and puddings, or were roasted and infused to produce a coffee-like drink, and oil from the seeds substituted for butter and served as a shortening for cakes. The leaves of mature plants, which are rich in mucilage, can be used as a laxative as well as a treatment for dysentery and cholera. After arriving in North America, the plant was grown by slaves as a subsistence staple to supplement their weekly rations. In Caribbean cuisine, sugar and white sesame seeds are combined into a bar resembling peanut brittle and sold in stores and on street corners, like Bahamian Benny cakes.
In Asia, sesame seeds are sprinkled onto sushi-style foods. In Japan, whole seeds are found in many salads and baked snacks, and tan and black sesame seed varieties are roasted and used to make the flavouring gomashio. Ground black sesame and rice form zhimahu, a Chinese dessert and breakfast dish. The seeds and oil are used extensively in India, where sesame seeds mixed with heated jaggery, sugar, or palm sugar are made into balls and bars similar to peanut brittle or nut clusters and eaten as snacks, such as chikki.
Sesame is a common ingredient in Middle Eastern cuisine. The seeds are made into tahini paste and sweet halva. It is a common component of the Levantine spice mixture za'atar, popular throughout the Middle East.
Sesame oil is sometimes used for cooking, though not all varieties are suitable for high-temperature frying. The "toasted" form of the oil (as distinguished from the "cold-pressed" form) has a distinctive pleasant aroma and taste, and is sometimes used as a table condiment.
In literature
In myths, the opening of the capsule releases the treasure of sesame seeds, as in the story of "Ali Baba and the Forty Thieves", where the phrase "Open sesame" magically opens a sealed cave. Upon ripening, sesame pods split with an audible pop, possibly indicating the origin of this phrase.
| Biology and health sciences | Lamiales | null |
82490 | https://en.wikipedia.org/wiki/Hyades%20%28star%20cluster%29 | Hyades (star cluster) | The Hyades (Greek Ὑάδες, also known as Caldwell 41, Collinder 50, or Melotte 25) is the nearest open cluster and one of the best-studied star clusters. Located about 153 light-years (47 parsecs) away from the Sun, it consists of a roughly spherical group of hundreds of stars sharing the same age, place of origin, chemical characteristics, and motion through space. From the perspective of observers on Earth, the Hyades Cluster appears in the constellation Taurus, where its brightest stars form a "V" shape along with the still-brighter Aldebaran. However, Aldebaran is unrelated to the Hyades, as it is located much closer to Earth (65 ly) and merely happens to lie along the same line of sight.
The five brightest member stars of the Hyades have consumed the hydrogen fuel at their cores and are now evolving into giant stars. Four of these stars, with Bayer designations Gamma, Delta 1, Epsilon, and Theta Tauri, form an asterism that is traditionally identified as the head of Taurus the Bull. The fifth of these stars is Theta1 Tauri, a tight naked-eye companion to the brighter Theta2 Tauri. Epsilon Tauri, known as Ain (the "Bull's Eye"), has a gas giant exoplanet candidate, the first planet to be found in any open cluster.
The age of the Hyades is estimated to be about 625 million years. The core of the cluster, where stars are the most densely packed, has a radius of 2.7 parsecs (8.8 light-years), and the cluster's tidal radius – where the stars become more strongly influenced by the gravity of the surrounding Milky Way galaxy – is 10 parsecs (33 light-years). However, about one-third of confirmed member stars have been observed well outside the latter boundary, in the cluster's extended halo; these stars are probably in the process of escaping from its gravitational influence.
Location and motion
The cluster is sufficiently close to the Sun that its distance can be directly measured by observing the amount of parallax shift of the member stars as the Earth orbits the Sun. This measurement has been performed with great accuracy using the Hipparcos satellite and the Hubble Space Telescope. An alternative method of computing the distance is to fit the cluster members to a standardized infrared color–magnitude diagram for stars of their type, and use the resulting data to infer their intrinsic brightness. Comparing these data to the brightness of the stars as seen from Earth allows their distances to be estimated. Both methods have yielded a distance estimate of about 153 light-years (47 parsecs) to the cluster center. The fact that these independent measurements agree makes the Hyades an important rung on the cosmic distance ladder method for estimating the distances of extragalactic objects.
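The parallax method described above reduces to a one-line conversion: distance in parsecs is the reciprocal of the annual parallax in arcseconds. A minimal Python sketch, using an illustrative parallax of 21.5 milliarcseconds (a value near published Hyades-centre measurements, not a figure quoted in this article):

```python
# Convert an annual parallax into a distance, as used for the Hyades.
LY_PER_PARSEC = 3.2616

def distance_from_parallax(parallax_mas: float) -> float:
    """Distance in parsecs from an annual parallax in milliarcseconds."""
    return 1000.0 / parallax_mas

d_pc = distance_from_parallax(21.5)  # illustrative Hyades-like parallax
print(f"{d_pc:.1f} pc = {d_pc * LY_PER_PARSEC:.0f} ly")  # ~46.5 pc ≈ 152 ly
```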
The stars of the Hyades are more enriched in heavier elements than the Sun and other ordinary stars in the solar neighborhood, with the overall cluster metallicity measured at +0.14. The Hyades Cluster is related to other stellar groups in the Sun's vicinity. Its age, metallicity, and proper motion coincide with those of the larger and more distant Praesepe Cluster, and the trajectories of both clusters can be traced back to the same region of space, indicating a common origin. Another associate is the Hyades Stream, a large collection of scattered stars that also share a similar trajectory with the Hyades Cluster. Recent results have found that at least 15% of stars in the Hyades Stream share the same chemical fingerprint as the Hyades cluster stars. However, about 85% of stars in the Hyades Stream have been shown to be completely unrelated to the original cluster on the grounds of dissimilar age and metallicity; their common motion is attributed to tidal effects of the massive rotating bar at the center of the Milky Way galaxy. Among the remaining members of the Hyades Stream, the exoplanet host star Iota Horologii has recently been proposed as an escaped member of the primordial Hyades Cluster.
The Hyades are unrelated to two other nearby stellar groups, the Pleiades and the Ursa Major Stream, which are easily visible to the naked eye under clear dark skies.
Astrometry
A 2018 Gaia DR1 study of the Hyades Cluster determined a (U, V, W) group velocity of (−41.92 ± 0.16, −19.35 ± 0.13, −1.11 ± 0.11) km/sec, based on the space velocities of the 138 core stars.
A 2019 Gaia DR2 study finds a (U, V, W) group velocity of (−42.24, −19.00, −1.48) km/sec, in very close agreement with the 2018 DR1 derivation.
Another DR2 study from 2019 focused on mapping the 3D topology and velocities of the Hyades main body out to 30 parsecs, and included substellar members as well. It identified 1,764 member candidates, including 10 brown dwarfs and 17 white dwarfs. The white dwarfs included 9 single stars and 4 binary systems.
A 2022 Hyades study utilizing Gaia EDR3 derived a (U, V, W) group velocity of (−42.11 ± 6.50, −19.09 ± 4.37, −1.32 ± 0.44) km/sec, also in close agreement with the DR1 and DR2 studies.
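For a sense of how closely these solutions agree, the magnitude of each (U, V, W) vector can be compared directly. A short Python sketch using the values quoted above:

```python
# Compare the (U, V, W) group velocities from the Gaia studies cited above
# by computing each velocity vector's magnitude.
import math

solutions = {
    "DR1 (2018)":  (-41.92, -19.35, -1.11),
    "DR2 (2019)":  (-42.24, -19.00, -1.48),
    "EDR3 (2022)": (-42.11, -19.09, -1.32),
}

for label, (u, v, w) in solutions.items():
    speed = math.sqrt(u*u + v*v + w*w)
    print(f"{label}: |v| = {speed:.2f} km/s")

# The three magnitudes agree to within ~0.2 km/s, illustrating the
# "close agreement" noted in the text.
```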
History
Together with the other eye-catching open star cluster of the Pleiades, the Hyades form the Golden Gate of the Ecliptic, which has been known for several thousand years.
In Greek mythology, the Hyades were the five daughters of Atlas and half-sisters to the Pleiades. After the death of their brother, Hyas, the weeping sisters were transformed into a cluster of stars that was afterwards associated with rain.
As a naked-eye object, the Hyades cluster has been known since prehistoric times. It is mentioned by numerous Classical authors from Homer to Ovid. In Book 18 of the Iliad the stars of the Hyades appear along with the Pleiades, Ursa Major, and Orion on the shield that the god Hephaistos made for Achilles.
In England the cluster was known as the "April Rainers" from an association with April showers, as recorded in the folk song "Green Grow the Rushes, O".
The cluster was probably first catalogued by Giovanni Battista Hodierna in 1654, and it subsequently appeared in many star atlases of the 17th and 18th centuries. However, Charles Messier did not include the Hyades in his 1781 catalog of deep sky objects. It therefore lacks a Messier number, unlike many other, more distant open clusters – e.g., M44 (Praesepe), M45 (Pleiades), and M67.
In 1869, the astronomer R.A. Proctor observed that numerous stars at large distances from the Hyades share a similar motion through space. In 1908, Lewis Boss reported almost 25 years of observations to support this premise, arguing for the existence of a co-moving group of stars that he called the Taurus Stream (now generally known as the Hyades Stream or Hyades Supercluster). Boss published a chart that traced the scattered stars' movements back to a common point of convergence.
By the 1920s, the notion that the Hyades shared a common origin with the Praesepe Cluster was widespread, with Rudolf Klein-Wassink noting in 1927 that the two clusters are "probably cosmically related". For much of the twentieth century, scientific study of the Hyades focused on determining its distance, modeling its evolution, confirming or rejecting candidate members, and characterizing individual stars.
Morphology and evolution
All stars form in clusters, but most clusters break up less than 50 million years after star formation concludes. The astronomical term for this process is "evaporation." Only extremely massive clusters, orbiting far from the Galactic Center, can avoid evaporation over extended timescales. As one such survivor, the Hyades Cluster probably contained a much larger star population in its infancy. Estimates of its original mass range from 800 to 1,600 times the mass of the Sun (M☉), implying still larger numbers of individual stars.
Star populations
Theory predicts that a young cluster of this size should give birth to stars and substellar objects of all spectral types, from huge, hot O stars down to dim brown dwarfs. However, studies of the Hyades show that it is deficient in stars at both extremes of mass. At an age of 625 million years, the cluster's main-sequence turn-off is about 2.3 M☉, meaning that all heavier stars have evolved into subgiants, giants, or white dwarfs, while less massive stars continue fusing hydrogen on the main sequence. Extensive surveys have revealed a total of 8 white dwarfs in the cluster core, corresponding to the final evolutionary stage of its original population of B-type stars (each about 3 M☉). The preceding evolutionary stage is currently represented by the cluster's four red clump giants. Their present spectral type is K0 III, but all are actually "retired A stars" of around 2.5 M☉. An additional "white giant" of type A7 III is the primary of θ2 Tauri, a binary system that includes a less massive companion of spectral type A; this pair is visually associated with θ1 Tauri, one of the four red giants, which also has an A-type binary companion.
The remaining population of confirmed cluster members includes numerous bright stars of spectral types A (at least 21), F (about 60), and G (about 50). All these star types are concentrated much more densely within the tidal radius of the Hyades than within an equivalent 10-parsec radius of the Earth. By comparison, our local 10-parsec sphere contains only 4 A stars, 6 F stars, and 21 G stars.
The Hyades' cohort of lower-mass stars – spectral types K and M – remains poorly understood, despite the cluster's proximity and long observation. At least 48 K dwarfs are confirmed members, along with about a dozen M dwarfs of spectral types M0–M2. Additional M dwarfs have been proposed in the past. This deficiency at the bottom of the mass range contrasts strongly with the distribution of stars within 10 parsecs of the Solar System, where at least 239 M dwarfs are known, comprising about 76% of all neighborhood stars. More recent studies have discovered additional low-mass members, thanks to targeted searches and improvements in proper-motion surveys. About 35 L-type (7+1+8+6+3+4+3+3) and 15 T-type (2+1+3+1+4+4) brown dwarfs are currently reported as Hyades members or candidate members. Meanwhile, Gaia DR2 allowed the identification of 710 cluster members within 30 parsecs, including 23 candidates with estimated masses between 60 and 80 Jupiter masses.
Mass segregation
The observed distribution of stellar types in the Hyades Cluster demonstrates a history of mass segregation. With the exception of its white dwarfs, the central region of the cluster contains only relatively massive star systems. This tight concentration of heavy stars gives the Hyades its overall structure, with a core defined by bright, closely packed systems and a halo consisting of more widely separated stars in which later spectral types are common. The core radius is 2.7 parsecs (8.8 light-years, a little more than the distance between the Sun and Sirius), while the half-mass radius, within which half the cluster's mass is contained, is 5.7 parsecs (19 light-years). The tidal radius of 10 parsecs (33 light-years) represents the Hyades' average outer limit, beyond which a star is unlikely to remain gravitationally bound to the cluster core.
Stellar evaporation occurs in the cluster halo as smaller stars are scattered outward by more massive insiders. From the halo they may then be lost to tides exerted by the Galactic core or to shocks generated by collisions with drifting hydrogen clouds. In this way the Hyades probably lost much of its original population of M dwarfs, along with substantial numbers of brighter stars.
Stellar multiplicity
Another result of mass segregation is the concentration of binary systems in the cluster core. More than half of the known F and G stars are binaries, and these are preferentially located within this central region. As in the immediate Solar neighborhood, binarity increases with increasing stellar mass. The fraction of binary systems in the Hyades increases from 26% among K-type stars to 87% among A-type stars. Hyades binaries tend to have small separations, with most binary pairs in shared orbits whose semimajor axes are smaller than 50 astronomical units. Although the exact ratio of single to multiple systems in the cluster remains uncertain, this ratio has considerable implications for our understanding of its population. For example, Perryman and colleagues list about 200 high-probability Hyades members. If the binary fraction is 50%, the total cluster population would be at least 300 individual stars.
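The population arithmetic at the end of this paragraph is simple enough to sketch in Python; the 200-system count and 50% binary fraction are the figures quoted above, and the helper name is purely illustrative:

```python
# Population arithmetic from the paragraph above: N systems with binary
# fraction f contain at least N * (1 + f) individual stars, since each
# binary contributes two.

def minimum_star_count(n_systems: int, binary_fraction: float) -> float:
    """Lower bound on individual stars given a binary fraction."""
    return n_systems * (1 + binary_fraction)

# ~200 high-probability members and a 50% binary fraction, as in the text:
print(minimum_star_count(200, 0.50))  # 300.0 individual stars, at minimum
```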
Future evolution
Surveys indicate that 90% of open clusters dissolve less than 1 billion years after formation, while only a tiny fraction survive for the present age of the Solar System (about 4.6 billion years). Over the next few hundred million years, the Hyades will continue to lose both mass and membership as its brightest stars evolve off the main sequence and its dimmest stars evaporate out of the cluster halo. It may eventually be reduced to a remnant containing about a dozen star systems, most of them binary or multiple, which will remain vulnerable to ongoing dissipative forces.
Brightest stars
This is a list of Hyades cluster member stars that are fourth magnitude or brighter.
Planets
Four stars in the Hyades have been found to host exoplanets. Epsilon Tauri has a superjovian planet, which was the first planet to be discovered in any open cluster. HD 285507 has a hot Jupiter, K2-25 has a Neptune-sized planet, and K2-136 has a system of three planets. Another star, HD 283869, may also host a planet, but this has not been confirmed as only one transit has been detected.
In culture
In the works of Robert W. Chambers, H. P. Lovecraft, and others, the fictional city of Carcosa is located on a planet in the Hyades.
A 2018 archaeoastronomical paper suggested that the Hyades may have inspired the Norse myth of Ragnarök. Astronomer Donald Olson questioned these findings, pointing out minor errors in the paper's astronomical data.
| Physical sciences | Other notable objects | null |
82518 | https://en.wikipedia.org/wiki/Hymen | Hymen | The hymen is a thin piece of mucosal tissue that surrounds or partially covers the vaginal introitus. A small percentage are born with hymens that are imperforate and completely obstruct the vaginal canal. It forms part of the vulva and is similar in structure to the vagina. The term comes directly from the Greek word for 'membrane'.
In children, a common appearance of the hymen is crescent-shaped, although many shapes are possible. Each shape in the natural range has a Latinate name. During puberty, estrogen causes the hymen to change in appearance and become very elastic. Normal variations of the post-pubertal hymen range from thin and stretchy to thick and somewhat rigid. Very rarely, it may be completely absent.
The hymen can rip or tear during first penetrative intercourse, which usually results in pain and, sometimes, mild temporary bleeding or spotting. Minor injuries to the hymen may heal on their own, and not require surgical intervention. Historically, it was believed that first penetration was necessarily traumatic, but now sources differ on how common tearing or bleeding are as a result of first intercourse. Therefore, the state of the hymen is not a reliable indicator of virginity, though "virginity testing" remains a common practice in some cultures, sometimes accompanied by hymen reconstruction surgery to give the appearance of virginity.
Development and histology
The genital tract develops during embryogenesis, from the third week of gestation to the second trimester, and the hymen is formed following the vagina. At week seven, the urorectal septum forms and separates the rectum from the urogenital sinus. At week nine, the Müllerian ducts move downwards to reach the urogenital sinus, forming the uterovaginal canal and inserting into the urogenital sinus. At week twelve, the Müllerian ducts fuse to create a primitive uterovaginal canal. At month five, the vaginal canalization is complete and the fetal hymen is formed from the proliferation of the sinovaginal bulbs (where the Müllerian ducts meet the urogenital sinus), and normally becomes perforate before or shortly after birth.
The hymen has dense innervation. In newborn babies, still under the influence of the mother's hormones, the hymen is thick, pale pink, and redundant (folds in on itself and may protrude). For the first two to four years of life, the infant produces hormones that continue this effect. Their hymenal opening tends to be annular (circumferential).
After the neonatal stage, the diameter of the hymenal opening (measured within the hymenal ring) widens by approximately 1 mm for each year of age. During puberty, estrogen causes the hymen to become very elastic and fimbriated. The hymen can stretch or tear as a result of various behaviors, such as the use of tampons or menstrual cups, pelvic examinations with a speculum, or sexual intercourse. Remnants of the hymen are called carunculae myrtiformes.
A glass or plastic rod of 6 mm diameter having a globe on one end with varying diameter from 10 to 25 mm, called a Glaister Keen rod, is used for close examination of the hymen or the degree of its rupture. In forensic medicine, it is recommended by health authorities that a physician who must swab near this area of a prepubescent girl avoid the hymen and swab the outer vulval vestibule instead. In cases of suspected rape or child sexual abuse, a detailed examination of the hymen may be performed, but the condition of the hymen alone is often inconclusive.
Anatomic variations
Normal variations of the hymen range from thin and stretchy to thick and somewhat rigid. An imperforate hymen occurs in 1–2 out of 1,000 infants and is the only variation that may require medical intervention: it either completely prevents the passage of menstrual fluid or slows it significantly, and surgery may be needed to allow menstrual fluid to pass or intercourse to take place at all.
Prepubescent hymenal openings come in many shapes, depending on hormonal and activity level, the most common being crescentic (posterior rim): no tissue at the 12 o'clock position; crescent-shaped band of tissue from 1–2 to 10–11 o'clock, at its widest around 6 o'clock. From puberty onwards, depending on estrogen and activity levels, the hymenal tissue may be thicker, and the opening is often fimbriated or erratically shaped. In younger children, a torn hymen will typically heal very quickly. In adolescents, the hymenal opening can naturally extend and variation in shape and appearance increases.
Variations of the female reproductive tract can result from agenesis or hypoplasia, canalization defects, lateral fusion and failure of resorption, resulting in various complications.
Imperforate: hymenal opening nonexistent; will require minor surgery if it has not corrected itself by puberty to allow menstrual fluids to escape.
Cribriform, or microperforate: sometimes confused for imperforate, the hymenal opening appears to be nonexistent, but has, under close examination, small perforations.
Septate: the hymenal opening has one or more bands of tissue extending across the opening.
Trauma
Historically, it was believed that first sexual intercourse was necessarily traumatic to the hymen and always resulted in the hymen being "broken" or torn, causing bleeding. However, research on women in Western populations has found that bleeding during first intercourse does not invariably occur. In one cross-cultural study, slightly more than half of all women self-reported bleeding during first intercourse, with significantly different levels of pain and bleeding reported depending on their region of origin. Not all women experience pain, and one study found a correlation between the experience of strong emotions – such as excitement, nervousness, or fear – and pain during first intercourse.
In several studies of adolescent female rape victims, where patients were examined at a hospital following sexual assault, half or fewer of virgin victims had any injury to the hymen. Tears of the hymen occurred in less than a quarter of cases. However, virgins were significantly more likely to have injuries to the hymen than non-virgins.
In a study of adolescents who had previously had consensual sex, approximately half showed evidence of trauma to the hymen. Trauma to the hymen may also occur in adult non-virgins following consensual sex, although it is rare. Trauma to the hymen may heal without any visible sign of injury. An observational study of adolescent sexual assault victims found that the majority of wounds to the hymen healed without any visible sign of injury.
Trauma to the hymen is hypothesized to occur as a result of various other behaviors, such as tampon or menstrual cup use, pelvic examinations with a speculum, masturbation, gymnastics, or horseback riding, although the true prevalence of trauma as a result of these activities is unclear.
Cultural and religious significance
The hymen is often attributed important cultural significance in certain communities because of its association with a woman's virginity. In those cultures, an intact hymen is highly valued at marriage in the belief that this is a proof of virginity. Some women undergo hymenorrhaphy to restore their hymen for this reason. In October 2018, the UN Human Rights Council, UN Women and the World Health Organization (WHO) stated that virginity testing must end as "it is a painful, humiliating and traumatic practice, constituting violence against women".
Some traditional Christian theological interpretations state that it is intended by God for the husband to be the one to break his wife's hymen, and that the bleeding of the hymen, believed to occur during first intercourse (but see above), is a blood covenant that seals the bond of holy matrimony between husband and wife (cf. consummation).
Womb fury
In the 16th and 17th centuries, medical researchers mistakenly saw the presence or absence of the hymen as foundational evidence of physical diseases such as "womb-fury", i.e., (female) hysteria. If not cured, womb-fury would, according to doctors practicing at the time, result in death.
Other animals
Due to similar reproductive system development, many mammals have hymens, including chimpanzees, elephants, manatees, whales, horses and llamas.
| Biology and health sciences | Reproductive system | Biology |
82687 | https://en.wikipedia.org/wiki/Hoag%27s%20Object | Hoag's Object | Hoag's Object is an unusual ring galaxy in the constellation of Serpens Caput. It is named after Arthur Hoag, who discovered it in 1950 and identified it as either a planetary nebula or a peculiar galaxy. The galaxy has a D25 isophotal diameter of about 45 kiloparsecs (150,000 light-years).
Characteristics
A nearly perfect ring of young hot blue stars circles the older yellow nucleus of this ring galaxy c. 600 million light-years away in the constellation Serpens. The ring structure is so perfect and circular that it has been referred to as "the most perfect ring galaxy". The diameter of the 6-arcsecond inner core of the galaxy is about 17 kly (5.3 kpc), while the surrounding ring has an inner 28″ diameter of 75 kly (24 kpc) and an outer 45″ diameter of 121 kly (39 kpc). The galaxy is estimated to have a mass of 700 billion suns. By comparison, the Milky Way galaxy has an estimated diameter of 150–200 kly and consists of between 100 and 500 billion stars, with a mass between 800 billion and 1.54 trillion suns.
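These size conversions follow from the small-angle approximation, physical size ≈ distance × angle in radians. A Python sketch, assuming the article's approximate 600-million-light-year distance (published conversions assume a slightly different distance, so the ring values differ from the quoted figures by a few percent):

```python
# Small-angle conversion of the angular sizes quoted above into physical
# sizes: size = distance * angle(radians).
ARCSEC_PER_RADIAN = 206265.0
distance_ly = 600e6  # the article's approximate distance to Hoag's Object

for label, arcsec in [("core", 6.0), ("ring inner", 28.0), ("ring outer", 45.0)]:
    size_ly = distance_ly * arcsec / ARCSEC_PER_RADIAN
    print(f'{label:10s} {arcsec:4.0f}" -> {size_ly / 1e3:6.1f} kly')

# core ~17 kly, ring inner ~81 kly, ring outer ~131 kly
```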
The gap separating the two stellar populations may contain some star clusters that are almost too faint to see. Though ring galaxies are rare, another more distant ring galaxy (SDSS J151713.93+213516.8) can be seen through Hoag's Object, between the nucleus and the outer ring of the galaxy, at roughly the one o'clock position in the image shown here.
Noah Brosch and colleagues showed that the luminous ring lies at the inner edge of a much larger neutral hydrogen ring.
A few other galaxies share the primary characteristics of Hoag's Object, including a bright detached ring of stars, but their centers are elongated or barred, and they may exhibit some spiral structure. While none matches Hoag's Object in symmetry, these galaxies are sometimes called Hoag-type galaxies.
History and formation
Even though Hoag's Object was clearly shown on the Palomar Sky Survey, it was not included in the Morphological Catalogue of Galaxies, the Catalogue of Galaxies and Clusters of Galaxies, or the catalogue of galactic planetary nebulae.
In the initial announcement of his discovery, Hoag proposed the hypothesis that the visible ring was a product of gravitational lensing. This idea was later discarded because the nucleus and the ring have the same redshift, and because more advanced telescopes revealed the ring's knotty structure, which would not be visible if the galaxy were a product of gravitational lensing.
Many of the galaxy's details remain mysterious, foremost of which is how it formed. So-called "classic" ring galaxies are generally formed by the collision of a small galaxy with a larger disk-shaped galaxy, producing a density wave in the disk that leads to a characteristic ring-like appearance. Such an event would have happened at least 2–3 billion years ago, and may have resembled the processes that form polar-ring galaxies. However, there is no sign of any second galaxy that would have acted as the "bullet", and the likely older core of Hoag's Object has a very low velocity relative to the ring, making the typical formation hypothesis implausible. Observations with one of the most sensitive telescopes have also failed to uncover any faint galaxy fragments that should be observable in a collision scenario. However, a team of scientists analyzing the galaxy admits that "if the carnage happened more than 3 billion years ago, there might not be any detritus left to see."
Noah Brosch suggested that Hoag's Object might be a product of an extreme "bar instability" that occurred a few billion years ago in a barred spiral galaxy. Schweizer et al. claim this is an unlikely hypothesis because the nucleus of the object is spheroidal, whereas the nucleus of a barred spiral galaxy is disc-shaped, among other reasons. However, they admit the evidence is somewhat thin for this particular dispute to be settled satisfactorily.
| Physical sciences | Notable galaxies | Astronomy |
82728 | https://en.wikipedia.org/wiki/Quantum%20superposition | Quantum superposition | Quantum superposition is a fundamental principle of quantum mechanics that states that linear combinations of solutions to the Schrödinger equation are also solutions of the Schrödinger equation. This follows from the fact that the Schrödinger equation is a linear differential equation in time and position. More precisely, the state of a system is given by a linear combination of all the eigenfunctions of the Schrödinger equation governing that system.
An example is a qubit used in quantum information processing. A qubit state is most generally a superposition of the basis states $|0\rangle$ and $|1\rangle$:

$$|\Psi\rangle = c_0 |0\rangle + c_1 |1\rangle,$$

where $|\Psi\rangle$ is the quantum state of the qubit, and $|0\rangle$, $|1\rangle$ denote particular solutions to the Schrödinger equation in Dirac notation, weighted by the two probability amplitudes $c_0$ and $c_1$, which are both complex numbers. Here $|0\rangle$ corresponds to the classical 0 bit, and $|1\rangle$ to the classical 1 bit. The probabilities of measuring the system in the $|0\rangle$ or $|1\rangle$ state are given by $|c_0|^2$ and $|c_1|^2$ respectively (see the Born rule). Before the measurement occurs the qubit is in a superposition of both states.
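A minimal numerical sketch of this Born-rule arithmetic in Python; the amplitude values are illustrative, chosen only to satisfy the normalization condition $|c_0|^2 + |c_1|^2 = 1$:

```python
# Illustrate qubit amplitudes and Born-rule measurement probabilities.
import numpy as np

rng = np.random.default_rng(0)

c0, c1 = (3j / 5), (4 / 5)                   # |c0|^2 + |c1|^2 = 1
probs = np.array([abs(c0)**2, abs(c1)**2])
print(probs)                                  # [0.36 0.64]

# Each simulated measurement collapses the superposition to 0 or 1.
samples = rng.choice([0, 1], size=10_000, p=probs)
print(np.bincount(samples) / len(samples))    # ≈ [0.36 0.64]
```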
The interference fringes in the double-slit experiment provide another example of the superposition principle.
Wave postulate
The theory of quantum mechanics postulates that a wave equation completely determines the state of a quantum system at all times. Furthermore, this differential equation is restricted to be linear and homogeneous. These conditions mean that for any two solutions of the wave equation, $\Psi_1$ and $\Psi_2$, a linear combination of those solutions also solves the wave equation:

$$\Psi = c_1 \Psi_1 + c_2 \Psi_2$$

for arbitrary complex coefficients $c_1$ and $c_2$. If the wave equation has more than two solutions, combinations of all such solutions are again valid solutions.
Transformation
The quantum wave equation can be solved using functions of position, $\psi(x)$, or using functions of momentum, $\phi(p)$, and consequently superpositions of momentum functions are also solutions:

$$\phi = c_1 \phi_1 + c_2 \phi_2$$
The position and momentum solutions are related by a linear transformation, a Fourier transformation. This transformation is itself a quantum superposition and every position wave function can be represented as a superposition of momentum wave functions and vice versa. These superpositions involve an infinite number of component waves.
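A short Python sketch of this position–momentum relationship, using a discrete Fourier transform on a Gaussian wave packet (the grid size and packet width are arbitrary choices for illustration):

```python
# A Gaussian wave packet in position space maps, under a Fourier transform,
# to a Gaussian in momentum space; both representations carry the same state.
import numpy as np

N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 0.5
psi_x = np.exp(-x**2 / (2 * sigma**2))            # position-space wave function
psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)   # normalize

phi_p = np.fft.fftshift(np.fft.fft(psi_x)) * dx / np.sqrt(2 * np.pi)
p = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi

# Total probability is preserved in either representation (Parseval).
print(np.sum(np.abs(psi_x)**2) * dx)              # 1.0
print(np.sum(np.abs(phi_p)**2) * (p[1] - p[0]))   # ~1.0
```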
Generalization to basis states
Other transformations express a quantum solution as a superposition of eigenvectors, each corresponding to a possible result of a measurement on the quantum system. An eigenvector for a mathematical operator, $\hat{A}$, satisfies the equation

$$\hat{A} |\psi_k\rangle = \lambda_k |\psi_k\rangle,$$

where $\lambda_k$ is one possible measured quantum value for the observable $A$. A superposition of these eigenvectors can represent any solution:

$$|\Psi\rangle = \sum_k c_k |\psi_k\rangle.$$

The states like $|\psi_k\rangle$ are called basis states.
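This eigenbasis expansion is easy to check numerically. A Python sketch, with a randomly generated Hermitian matrix standing in for $\hat{A}$:

```python
# Expand a state in the eigenbasis of a Hermitian operator:
# A|psi_k> = lambda_k|psi_k>, |Psi> = sum_k c_k|psi_k>, c_k = <psi_k|Psi>.
import numpy as np

rng = np.random.default_rng(1)

M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2                      # Hermitian operator
eigvals, eigvecs = np.linalg.eigh(A)          # columns are eigenvectors

Psi = rng.normal(size=4) + 1j * rng.normal(size=4)
Psi /= np.linalg.norm(Psi)

c = eigvecs.conj().T @ Psi                    # coefficients c_k = <psi_k|Psi>
reconstructed = eigvecs @ c                   # sum_k c_k |psi_k>
print(np.allclose(reconstructed, Psi))        # True
print(np.isclose(np.sum(np.abs(c)**2), 1.0))  # True: probabilities sum to 1
```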
Compact notation for superpositions
Important mathematical operations on quantum system solutions can be performed using only the coefficients of the superposition, suppressing the details of the superposed functions. This leads to quantum systems expressed in the Dirac bra-ket notation:

$$|\Psi\rangle = c_0 |0\rangle + c_1 |1\rangle.$$

This approach is especially effective for systems like quantum spin with no classical coordinate analog. Such shorthand notation is very common in textbooks and papers on quantum mechanics, and superposition of basis states is a fundamental tool in quantum mechanics.
Consequences
Paul Dirac described the superposition principle as follows:
The non-classical nature of the superposition process is brought out clearly if we consider the superposition of two states, A and B, such that there exists an observation which, when made on the system in state A, is certain to lead to one particular result, a say, and when made on the system in state B is certain to lead to some different result, b say. What will be the result of the observation when made on the system in the superposed state? The answer is that the result will be sometimes a and sometimes b, according to a probability law depending on the relative weights of A and B in the superposition process. It will never be different from both a and b [i.e., either a or b]. The intermediate character of the state formed by superposition thus expresses itself through the probability of a particular result for an observation being intermediate between the corresponding probabilities for the original states, not through the result itself being intermediate between the corresponding results for the original states.
Anton Zeilinger, referring to the prototypical example of the double-slit experiment, has elaborated regarding the creation and destruction of quantum superposition:
"[T]he superposition of amplitudes ... is only valid if there is no way to know, even in principle, which path the particle took. It is important to realize that this does not imply that an observer actually takes note of what happens. It is sufficient to destroy the interference pattern, if the path information is accessible in principle from the experiment or even if it is dispersed in the environment and beyond any technical possibility to be recovered, but in principle still ‘‘out there.’’ The absence of any such information is the essential criterion for quantum interference to appear.
Theory
General formalism
Any quantum state can be expanded as a sum or superposition of the eigenstates of an Hermitian operator, like the Hamiltonian, because the eigenstates form a complete basis:

$$|\alpha\rangle = \sum_n c_n |n\rangle,$$

where $|n\rangle$ are the energy eigenstates of the Hamiltonian. For continuous variables like position eigenstates, $|x\rangle$:

$$|\alpha\rangle = \int dx\, \psi(x)\, |x\rangle,$$

where $\psi(x) = \langle x|\alpha\rangle$ is the projection of the state onto the $|x\rangle$ basis and is called the wave function of the particle. In both instances we notice that $|\alpha\rangle$ can be expanded as a superposition of an infinite number of basis states.
Example
Given the Schrödinger equation

$$\hat{H} |n\rangle = E_n |n\rangle,$$

where $n$ indexes the set of eigenstates of the Hamiltonian with energy eigenvalues $E_n$, we see immediately that

$$\hat{H} \left( c_0 |0\rangle + c_1 |1\rangle \right) = c_0 E_0 |0\rangle + c_1 E_1 |1\rangle,$$

where

$$|\Psi\rangle = c_0 |0\rangle + c_1 |1\rangle$$

is a solution of the Schrödinger equation but is not generally an eigenstate, because $E_0$ and $E_1$ are not generally equal. We say that $|\Psi\rangle$ is made up of a superposition of energy eigenstates. Now consider the more concrete case of an electron that has either spin up or down. We now index the eigenstates with the spinors in the $\hat{z}$ basis:

$$|\Psi\rangle = c_1 |{\uparrow}\rangle + c_2 |{\downarrow}\rangle,$$

where $|{\uparrow}\rangle$ and $|{\downarrow}\rangle$ denote spin-up and spin-down states respectively. As previously discussed, the magnitudes of the complex coefficients give the probability of finding the electron in either definite spin state:

$$P(\uparrow) = |c_1|^2, \qquad P(\downarrow) = |c_2|^2, \qquad |c_1|^2 + |c_2|^2 = 1,$$

where the probability of finding the particle with either spin up or down is normalized to 1. Notice that $c_1$ and $c_2$ are complex numbers, so that

$$|\Psi\rangle = \frac{3}{5} i\, |{\uparrow}\rangle + \frac{4}{5} |{\downarrow}\rangle$$

is an example of an allowed state. We now get

$$P(\uparrow) = \left|\frac{3i}{5}\right|^2 = \frac{9}{25}, \qquad P(\downarrow) = \left|\frac{4}{5}\right|^2 = \frac{16}{25}.$$

If we consider a qubit with both position and spin, the state is a superposition of all possibilities for both:

$$\Psi = \psi_+(x) \otimes |{\uparrow}\rangle + \psi_-(x) \otimes |{\downarrow}\rangle,$$

where we have a general state that is the sum of the tensor products of the position-space wave functions and spinors.
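A Python sketch of this combined position-and-spin state on a discretized grid, using a Kronecker product for the tensor product (the Gaussian amplitudes below are illustrative, not taken from the text):

```python
# Build psi_plus(x) ⊗ |up> + psi_minus(x) ⊗ |down> on a discrete grid.
import numpy as np

x = np.linspace(-5, 5, 200)
dx = x[1] - x[0]

psi_plus = np.exp(-(x - 1) ** 2)    # position amplitude paired with spin-up
psi_minus = np.exp(-(x + 1) ** 2)   # position amplitude paired with spin-down

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

state = np.kron(psi_plus, up) + np.kron(psi_minus, down)
state /= np.linalg.norm(state) * np.sqrt(dx)   # normalize over the grid

# Probability of measuring spin-up: integrate |psi_plus|^2 over x.
prob_up = np.sum(np.abs(state.reshape(len(x), 2)[:, 0]) ** 2) * dx
print(f"P(up) ≈ {prob_up:.3f}")   # 0.500 here, by the symmetry of the choice
```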
Experiments
Successful experiments involving superpositions of relatively large (by the standards of quantum physics) objects have been performed.
A beryllium ion has been trapped in a superposed state.
A double slit experiment has been performed with molecules as large as buckyballs and functionalized oligoporphyrins with up to 2000 atoms.
Molecules with masses exceeding 10,000 Da and composed of over 810 atoms have successfully been superposed.
Very sensitive magnetometers have been realized using superconducting quantum interference devices (SQUIDs) that operate using quantum interference effects in superconducting circuits.
A piezoelectric "tuning fork" has been constructed, which can be placed into a superposition of vibrating and non-vibrating states. The resonator comprises about 10 trillion atoms.
Recent research indicates that chlorophyll within plants appears to exploit the feature of quantum superposition to achieve greater efficiency in transporting energy, allowing pigment proteins to be spaced further apart than would otherwise be possible.
In quantum computers
In quantum computers, a qubit is the analog of the classical information bit, and qubits can be superposed. Unlike classical bits, a superposition of qubits represents information about two states in parallel. Controlling the superposition of qubits is a central challenge in quantum computation. Qubit systems like nuclear spins with small coupling strength are robust to outside disturbances, but the same small coupling makes it difficult to read out results.
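As an illustration of putting a qubit into superposition, here is a Python sketch of the Hadamard gate, the standard single-qubit operation for this purpose (a generic textbook construction, not a claim from this article):

```python
# The Hadamard gate maps |0> to the equal superposition (|0> + |1>)/sqrt(2).
import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])        # the classical-like |0> state
superposed = H @ ket0

print(superposed)                  # [0.7071 0.7071]
print(np.abs(superposed) ** 2)     # [0.5 0.5]: equal measurement odds
```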
| Physical sciences | Quantum mechanics | Physics |
82780 | https://en.wikipedia.org/wiki/Crab%20Nebula | Crab Nebula | The Crab Nebula (catalogue designations M1, NGC 1952, Taurus A) is a supernova remnant and pulsar wind nebula in the constellation of Taurus. The common name comes from a drawing, somewhat resembling a crab with arms, produced by William Parsons, 3rd Earl of Rosse, in 1842 or 1843 using a 36-inch (91 cm) telescope. The nebula was discovered by English astronomer John Bevis in 1731. It corresponds with a bright supernova observed in 1054 CE by Native American, Japanese, and Arabic stargazers; this supernova was also recorded by Chinese astronomers as a guest star. The nebula was the first astronomical object identified that corresponds with a historically observed supernova explosion.
At an apparent magnitude of 8.4, comparable to that of Saturn's moon Titan, it is not visible to the naked eye but can be made out using binoculars under favourable conditions. The nebula lies in the Perseus Arm of the Milky Way galaxy, at a distance of about 6,500 light-years (2,000 parsecs) from Earth. It has a diameter of about 11 light-years (3.4 parsecs), corresponding to an apparent diameter of some 7 arcminutes, and is expanding at a rate of about 1,500 kilometres per second (930 mi/s), or 0.5% of the speed of light.
The Crab Pulsar, a neutron star 28–30 kilometres (17–19 mi) across with a spin rate of 30.2 times per second, lies at the center of the Crab Nebula. The star emits pulses of radiation spanning gamma rays to radio waves. At X-ray and gamma-ray energies above 30 keV, the Crab Nebula is generally the brightest persistent gamma-ray source in the sky, with measured flux extending above 10 TeV. The nebula's radiation allows detailed study of celestial bodies that occult it. In the 1950s and 1960s, the Sun's corona was mapped from observations of the Crab Nebula's radio waves passing through it, and in 2003, the thickness of the atmosphere of Saturn's moon Titan was measured as it blocked out X-rays from the nebula.
Observational history
The earliest recorded documentation of the astronomical object SN 1054 was made as the event was occurring in 1054, by Chinese astronomers and Japanese observers, hence its numerical identification. Modern understanding that the Crab Nebula was created by a supernova traces back to 1921, when Carl Otto Lampland announced he had seen changes in the nebula's structure. This eventually led to the conclusion that the creation of the Crab Nebula corresponds to the bright SN 1054 supernova recorded by medieval astronomers in AD 1054.
First identification
The Crab Nebula was first identified in 1731 by John Bevis. The nebula was independently rediscovered in 1758 by Charles Messier as he was observing a bright comet. Messier catalogued it as the first entry in his catalogue of comet-like objects; in 1757, Alexis Clairaut reexamined the calculations of Edmund Halley and predicted the return of Halley's Comet in late 1758. The exact time of the comet's return required the consideration of perturbations to its orbit caused by planets in the Solar System such as Jupiter, which Clairaut and his two colleagues Jérôme Lalande and Nicole-Reine Lepaute carried out more precisely than Halley, finding that the comet should appear in the constellation of Taurus. It was in searching in vain for the comet that Charles Messier found the Crab Nebula, which he at first thought to be Halley's comet. After some observation, noticing that the object that he was observing was not moving across the sky, Messier concluded that the object was not a comet. Messier then realised the usefulness of compiling a catalogue of celestial objects of a cloudy nature, but fixed in the sky, to avoid incorrectly cataloguing them as comets. This realization led him to compile the "Messier catalogue".
William Herschel observed the Crab Nebula numerous times between 1783 and 1809, but it is not known whether he was aware of its existence in 1783, or if he discovered it independently of Messier and Bevis. After several observations, he concluded that it was composed of a group of stars. William Parsons, 3rd Earl of Rosse, observed the nebula at Birr Castle in the early 1840s using a 36-inch (91 cm) telescope, and made a drawing of it that showed it with arms like those of a crab. He observed it again in 1848 using the larger 72-inch (1.8 m) telescope but could not confirm the supposed resemblance; the name stuck nevertheless.
Connection to SN 1054
The Crab Nebula was the first astronomical object recognized as being connected to a supernova explosion. In the early twentieth century, the analysis of early photographs of the nebula taken several years apart revealed that it was expanding. Tracing the expansion back revealed that the nebula must have become visible on Earth about 900 years before. Historical records revealed that a new star bright enough to be seen in the daytime had been recorded in the same part of the sky by Chinese astronomers on 4 July 1054, and probably also by Japanese observers.
In 1913, when Vesto Slipher registered his spectroscopy study of the sky, the Crab Nebula was again one of the first objects to be studied. Changes in the cloud, suggesting its small extent, were discovered by Carl Lampland in 1921. That same year, John Charles Duncan demonstrated that the remnant was expanding, while Knut Lundmark noted its proximity to the guest star of 1054.
In 1928, Edwin Hubble proposed associating the cloud with the star of 1054, an idea that remained controversial until the nature of supernovae was understood, and it was Nicholas Mayall who indicated that the star of 1054 was undoubtedly the supernova whose explosion produced the Crab Nebula. The search for historical supernovae started at that moment: seven other historical sightings have been found by comparing modern observations of supernova remnants with astronomical documents of past centuries.
After the original connection to Chinese observations, in 1934 connections were made to a 13th-century Japanese reference to a "guest star" in Meigetsuki a few weeks before the Chinese reference. The event was long considered unrecorded in Islamic astronomy, but in 1978 a reference was found in a 13th-century copy made by Ibn Abi Usaibia of a work by Ibn Butlan, a Nestorian Christian physician active in Baghdad at the time of the supernova.
Given its great distance, the daytime "guest star" observed by the Chinese could only have been a supernova—a massive, exploding star, having exhausted its supply of energy from nuclear fusion and collapsed in on itself. Recent analyses of historical records have found that the supernova that created the Crab Nebula probably appeared in April or early May, rising to its maximum brightness of between apparent magnitude −7 and −4.5 (brighter even than Venus at −4.2 and everything in the night sky except the Moon) by July. The supernova was visible to the naked eye for about two years after its first observation.
Crab Pulsar
In the 1960s, because of the prediction and discovery of pulsars, the Crab Nebula again became a major center of interest. It was then that Franco Pacini predicted the existence of the Crab Pulsar for the first time, which would explain the brightness of the cloud. In late 1968, David H. Staelin and Edward C. Reifenstein III reported the discovery of two rapidly variable radio sources in the area of the Crab Nebula using the Green Bank Telescope. They named them NP 0527 and NP 0532. The 33-millisecond period and precise location of the Crab Nebula pulsar NP 0532 were determined by Richard V. E. Lovelace and collaborators on 10 November 1968 at the Arecibo Radio Observatory. This discovery also proved that pulsars are rotating neutron stars (not pulsating white dwarfs, as many scientists had suggested). Soon after the discovery of the Crab Pulsar, David Richards discovered (using the Arecibo Observatory) that the Crab Pulsar spins down and, therefore, the pulsar loses its rotational energy. Thomas Gold showed that the spin-down power of the pulsar is sufficient to power the Crab Nebula.
The discovery of the Crab Pulsar and the knowledge of its exact age (almost to the day) allows for the verification of basic physical properties of these objects, such as characteristic age and spin-down luminosity, the orders of magnitude involved (notably the strength of the magnetic field), along with various aspects related to the dynamics of the remnant. The role of this supernova to the scientific understanding of supernova remnants was crucial, as no other historical supernova created a pulsar whose precise age is known for certain. The only possible exception to this rule would be SN 1181, whose supposed remnant 3C 58 is home to a pulsar, but its identification using Chinese observations from 1181 is contested.
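The spin-down argument can be checked to order of magnitude with the standard formulas $\dot{E} = 4\pi^2 I \dot{P}/P^3$ and $\tau_c = P/(2\dot{P})$. A Python sketch; the period derivative and moment of inertia below are typical textbook values assumed for illustration, not figures quoted in this article:

```python
# Order-of-magnitude check of the Crab Pulsar's spin-down luminosity
# and characteristic age.
import math

P = 0.0333        # spin period, s (the ~33 ms quoted earlier)
P_dot = 4.2e-13   # period derivative, s/s (assumed typical value)
I = 1e45          # neutron-star moment of inertia, g cm^2 (canonical value)

spin_down_luminosity = 4 * math.pi**2 * I * P_dot / P**3   # erg/s
characteristic_age = P / (2 * P_dot) / 3.156e7             # years

print(f"E_dot ~ {spin_down_luminosity:.1e} erg/s")  # ~4.5e38 erg/s
print(f"tau_c ~ {characteristic_age:.0f} yr")       # ~1300 yr, near the true age
```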
The inner part of the Crab Nebula is dominated by a pulsar wind nebula enveloping the pulsar. Some sources consider the Crab Nebula to be an example of both a pulsar wind nebula as well as a supernova remnant, while others separate the two phenomena based on the different sources of energy production and behaviour.
Source of high-energy gamma rays
The Crab Nebula was the first astrophysical object confirmed to emit gamma rays in the very-high-energy (VHE) band above 100 GeV in energy. The VHE detection was carried out in 1989 by the Whipple Observatory 10m Gamma-Ray telescope, which opened the VHE gamma-ray window and led to the detection of numerous VHE sources since then.
In 2019 the Crab Nebula was observed to emit gamma rays in excess of 100 TeV, making it the first identified source beyond 100 TeV.
Physical parameters
In visible light, the Crab Nebula consists of a broadly oval-shaped mass of filaments, about 6 arcminutes long and 4 arcminutes wide (by comparison, the full moon is 30 arcminutes across) surrounding a diffuse blue central region. In three dimensions, the nebula is thought to be shaped either like an oblate spheroid (estimated as away) or a prolate spheroid (estimated as away). The filaments are the remnants of the progenitor star's atmosphere, and consist largely of ionised helium and hydrogen, along with carbon, oxygen, nitrogen, iron, neon and sulfur. The filaments' temperatures are typically between 11,000 and 18,000 K, and their densities are about 1,300 particles per cm³.
In 1953, Iosif Shklovsky proposed that the diffuse blue region is predominantly produced by synchrotron radiation, which is radiation given off by the curving motion of electrons in a magnetic field. The radiation corresponded to electrons moving at speeds up to half the speed of light. Three years later, the hypothesis was confirmed by observations. In the 1960s it was found that the source of the curved paths of the electrons was the strong magnetic field produced by a neutron star at the centre of the nebula.
Distance
Even though the Crab Nebula is the focus of much attention among astronomers, its distance remains an open question, owing to uncertainties in every method used to estimate its distance. In 2008, the consensus was that its distance from Earth is . Along its longest visible dimension, it thus measures about across.
The Crab Nebula currently is expanding outward at about . Images taken several years apart reveal the slow expansion of the nebula, and by comparing this angular expansion with its spectroscopically determined expansion velocity, the nebula's distance can be estimated. In 1973, an analysis of many methods used to compute the distance to the nebula had reached a conclusion of about , consistent with the currently cited value.
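The expansion-parallax method described above amounts to dividing a velocity by an angular rate. A minimal Python sketch follows; the velocity and angular expansion rate used are illustrative assumptions chosen to show the arithmetic, not the measured values:

```python
import math

# Expansion parallax: compare the nebula's angular growth rate (from
# images taken years apart) with its line-of-sight expansion velocity
# (from Doppler-shifted spectra). Assuming roughly spherical expansion,
# the tangential and radial speeds are the same, so distance follows.
ARCSEC_PER_RAD = 3600 * 180 / math.pi
KM_PER_PARSEC = 3.0857e13
SECONDS_PER_YEAR = 3.156e7

def expansion_distance_pc(v_kms: float, mu_arcsec_per_yr: float) -> float:
    """Distance in parsecs from expansion velocity and angular rate."""
    km_per_yr = v_kms * SECONDS_PER_YEAR            # tangential motion per year
    rad_per_yr = mu_arcsec_per_yr / ARCSEC_PER_RAD  # angular motion per year
    return km_per_yr / rad_per_yr / KM_PER_PARSEC

# Illustrative, assumed values only:
print(round(expansion_distance_pc(1500.0, 0.15)))  # ~2100 pc, i.e. ~2 kpc
```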
Tracing back its expansion (assuming a constant decrease of expansion speed due to the nebula's mass) yielded a date for the creation of the nebula several decades after 1054, implying that its outward velocity has decelerated less than assumed since the supernova explosion. This reduced deceleration is believed to be caused by energy from the pulsar that feeds into the nebula's magnetic field, which expands and forces the nebula's filaments outward.
Mass
Estimates of the total mass of the nebula are important for estimating the mass of the supernova's progenitor star. The amount of matter contained in the Crab Nebula's filaments (ejecta mass of ionized and neutral gas; mostly helium) is estimated to be .
Helium-rich torus
One of the many nebular components (or anomalies) of the Crab Nebula is a helium-rich torus, visible as an east–west band crossing the pulsar region. The torus makes up about 25% of the visible ejecta, yet calculations suggest that about 95% of the torus is helium. As yet, no plausible explanation has been put forth for the structure of the torus.
Central star
At the center of the Crab Nebula are two faint stars, one of which is the star responsible for the existence of the nebula. It was identified as such in 1942, when Rudolf Minkowski found that its optical spectrum was extremely unusual. The region around the star was found to be a strong source of radio waves in 1949 and X-rays in 1963, and was identified as one of the brightest objects in the sky in gamma rays in 1967. Then, in 1968, the star was found to be emitting its radiation in rapid pulses, becoming one of the first pulsars to be discovered.
Pulsars are sources of powerful electromagnetic radiation, emitted in short and extremely regular pulses many times a second. They were a great mystery when discovered in 1967, and the team that identified the first one considered the possibility that it could be a signal from an advanced civilization. However, the discovery of a pulsating radio source in the centre of the Crab Nebula was strong evidence that pulsars were formed by supernova explosions. They are now understood to be rapidly rotating neutron stars, whose powerful magnetic fields concentrate their radiation emissions into narrow beams.
The Crab Pulsar is believed to be about in diameter; it emits pulses of radiation every 33 milliseconds. Pulses are emitted at wavelengths across the electromagnetic spectrum, from radio waves to X-rays. Like all isolated pulsars, its period is slowing very gradually. Occasionally, its rotational period shows sharp changes, known as 'glitches', which are believed to be caused by a sudden realignment inside the neutron star. The rate of energy released as the pulsar slows down is enormous, and it powers the emission of the synchrotron radiation of the Crab Nebula, which has a total luminosity about 148,000 times greater than that of the Sun.
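The link between the slowing 33-millisecond period and the nebula's luminosity can be illustrated with the standard rotational energy-loss formula, dE/dt = 4π²IṖ/P³. The sketch below is illustrative only; the moment of inertia and period derivative are assumed, order-of-magnitude values, not figures taken from this article:

```python
import math

# A pulsar's rotational energy is E = (1/2) * I * omega^2 with
# omega = 2*pi / P, so the energy-loss rate from a slowing spin is
# |dE/dt| = 4 * pi^2 * I * Pdot / P^3.
def spin_down_luminosity(P: float, Pdot: float, I: float = 1e45) -> float:
    """Energy-loss rate in erg/s.

    P    -- rotation period in seconds
    Pdot -- period derivative (dimensionless, s per s)
    I    -- moment of inertia in g cm^2 (canonical neutron-star value,
            assumed here for illustration)
    """
    return 4 * math.pi**2 * I * Pdot / P**3

L_SUN = 3.8e33  # solar luminosity in erg/s

# Assumed illustrative inputs: P = 33 ms, Pdot of order 4e-13 s/s.
L = spin_down_luminosity(0.033, 4.2e-13)
print(f"{L:.1e} erg/s, about {L / L_SUN:.0f} solar luminosities")
# -> ~4.6e38 erg/s, roughly 1e5 Suns, consistent in order of magnitude
#    with the nebula's synchrotron luminosity quoted above.
```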
The pulsar's extreme energy output creates an unusually dynamic region at the centre of the Crab Nebula. While most astronomical objects evolve so slowly that changes are visible only over timescales of many years, the inner parts of the Crab Nebula show changes over timescales of only a few days. The most dynamic feature in the inner part of the nebula is the point where the pulsar's equatorial wind slams into the bulk of the nebula, forming a shock front. The shape and position of this feature shift rapidly, with the equatorial wind appearing as a series of wisp-like features that steepen, brighten, then fade as they move away from the pulsar into the main body of the nebula.
Progenitor star
The star that exploded as a supernova is referred to as the supernova's progenitor star. Two types of stars explode as supernovae: white dwarfs and massive stars. In the so-called Type Ia supernovae, gases falling onto a 'dead' white dwarf raise its mass until it nears a critical level, the Chandrasekhar limit, resulting in a runaway nuclear fusion explosion that obliterates the star; in Type Ib/c and Type II supernovae, the progenitor star is a massive star whose core runs out of fuel to power its nuclear fusion reactions and collapses in on itself, releasing gravitational potential energy in a form that blows away the star's outer layers. Type Ia supernovae do not produce pulsars, so the pulsar in the Crab Nebula shows it must have formed in a core-collapse supernova.
Theoretical models of supernova explosions suggest that the star that exploded to produce the Crab Nebula must have had a mass of between . Stars with masses lower than are thought to be too small to produce supernova explosions, and end their lives by producing a planetary nebula instead, while a star heavier than would have produced a nebula with a different chemical composition from that observed in the Crab Nebula. Recent studies, however, suggest the progenitor could have been a super-asymptotic giant branch star in the range that would have exploded in an electron-capture supernova. In June 2021, a paper in the journal Nature Astronomy reported that the 2018 supernova SN 2018zd (in the galaxy NGC 2146, about 31 million light-years from Earth) appeared to be the first observation of an electron-capture supernova. The 1054 supernova explosion that created the Crab Nebula had been thought to be the best candidate for an electron-capture supernova, and the 2021 paper makes it more likely that this was correct.
A significant problem in studies of the Crab Nebula is that the combined mass of the nebula and the pulsar adds up to considerably less than the predicted mass of the progenitor star, and the question of where the 'missing mass' is remains unresolved. Estimates of the mass of the nebula are made by measuring the total amount of light emitted and calculating the mass required, given the measured temperature and density of the nebula. Estimates range from about , with being the generally accepted value. The neutron star mass is estimated to be between .
The predominant theory to account for the missing mass of the Crab Nebula is that a substantial proportion of the mass of the progenitor was carried away before the supernova explosion in a fast stellar wind, a phenomenon commonly seen in Wolf–Rayet stars. However, this would have created a shell around the nebula. Although attempts have been made at several wavelengths to observe a shell, none has yet been found.
Transits by Solar System bodies
The Crab Nebula lies roughly 1.5 degrees away from the ecliptic—the plane of Earth's orbit around the Sun. This means that the Moon—and occasionally, planets—can transit or occult the nebula. Although the Sun does not transit the nebula, its corona passes in front of it. These transits and occultations can be used to analyse both the nebula and the object passing in front of it, by observing how radiation from the nebula is altered by the transiting body.
Lunar
Lunar transits have been used to map X-ray emissions from the nebula. Before the launch of X-ray-observing satellites, such as the Chandra X-ray Observatory, X-ray observations generally had quite low angular resolution, but when the Moon passes in front of the nebula, its position is very accurately known, and so the variations in the nebula's brightness can be used to create maps of X-ray emission. When X-rays were first observed from the Crab Nebula, a lunar occultation was used to determine the exact location of their source.
Solar
The Sun's corona passes in front of the Crab Nebula every June. Variations in the radio waves received from the Crab Nebula at this time can be used to infer details about the corona's density and structure. Early observations established that the corona extended out to much greater distances than had previously been thought; later observations found that the corona contained substantial density variations.
Other objects
Very rarely, Saturn transits the Crab Nebula. Its transit on 4 January 2003 (UTC) was the first since 31 December 1295 (O.S.); another will not occur until 5 August 2267. Researchers used the Chandra X-ray Observatory to observe Saturn's moon Titan as it crossed the nebula, and found that Titan's X-ray 'shadow' was larger than its solid surface, due to absorption of X-rays in its atmosphere. These observations showed that the thickness of Titan's atmosphere is . The transit of Saturn itself could not be observed, because Chandra was passing through the Van Allen belts at the time.
| Physical sciences | Notable nebulae | null |
82804 | https://en.wikipedia.org/wiki/Convergent%20evolution | Convergent evolution | Convergent evolution is the independent evolution of similar features in species of different periods or epochs in time. Convergent evolution creates analogous structures that have similar form or function but were not present in the last common ancestor of those groups. The cladistic term for the same phenomenon is homoplasy. The recurrent evolution of flight is a classic example, as flying insects, birds, pterosaurs, and bats have independently evolved the useful capacity of flight. Functionally similar features that have arisen through convergent evolution are analogous, whereas homologous structures or traits have a common origin but can have dissimilar functions. Bird, bat, and pterosaur wings are analogous structures, but their forelimbs are homologous, sharing an ancestral state despite serving different functions.
The opposite of convergence is divergent evolution, where related species evolve different traits. Convergent evolution is similar to parallel evolution, which occurs when two independent species evolve in the same direction and thus independently acquire similar characteristics; for instance, gliding frogs have evolved in parallel from multiple types of tree frog.
Many instances of convergent evolution are known in plants, including the repeated development of C4 photosynthesis, seed dispersal by fleshy fruits adapted to be eaten by animals, and carnivory.
Overview
In morphology, analogous traits arise when different species live in similar ways and/or a similar environment, and so face the same environmental factors. When occupying similar ecological niches (that is, a distinctive way of life) similar problems can lead to similar solutions. The British anatomist Richard Owen was the first to identify the fundamental difference between analogies and homologies.
In biochemistry, physical and chemical constraints on mechanisms have caused some active site arrangements such as the catalytic triad to evolve independently in separate enzyme superfamilies.
In his 1989 book Wonderful Life, Stephen Jay Gould argued that if one could "rewind the tape of life [and] the same conditions were encountered again, evolution could take a very different course." Simon Conway Morris disputes this conclusion, arguing that convergence is a dominant force in evolution, and given that the same environmental and physical constraints are at work, life will inevitably evolve toward an "optimum" body plan, and at some point, evolution is bound to stumble upon intelligence, a trait presently identified with at least primates, corvids, and cetaceans.
Distinctions
Cladistics
In cladistics, a homoplasy is a trait shared by two or more taxa for any reason other than that they share a common ancestry. Taxa which do share ancestry are part of the same clade; cladistics seeks to arrange them according to their degree of relatedness to describe their phylogeny. Homoplastic traits caused by convergence are therefore, from the point of view of cladistics, confounding factors which could lead to an incorrect analysis.
Atavism
In some cases, it is difficult to tell whether a trait has been lost and then re-evolved convergently, or whether a gene has simply been switched off and then re-enabled later. Such a re-emerged trait is called an atavism. From a mathematical standpoint, an unused (selectively neutral) gene has a steadily decreasing probability of retaining potential functionality over time. The time scale of this process varies greatly in different phylogenies; in mammals and birds, there is a reasonable probability of a gene remaining in the genome in a potentially functional state for around 6 million years.
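As a rough illustration of that mathematical point, the loss of functionality in a neutral gene can be modelled as exponential decay. This is a toy sketch only: the half-life below is an assumed figure chosen to echo the roughly 6-million-year scale mentioned above, not a measured rate:

```python
import math

# Toy model: if an unused, selectively neutral gene loses functionality
# at a constant hazard rate, the probability that it remains potentially
# functional after t million years decays exponentially.
def p_intact(t_myr: float, half_life_myr: float = 6.0) -> float:
    """Probability the gene is still potentially functional.
    The 6-Myr half-life is an assumed, illustrative figure."""
    return math.exp(-math.log(2) * t_myr / half_life_myr)

for t in (1, 6, 12, 24):
    print(f"{t:>2} Myr: {p_intact(t):.3f}")
```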
Parallel vs. convergent evolution
When two species are similar in a particular character, evolution is defined as parallel if the ancestors were also similar, and convergent if they were not. Some scientists have argued that there is a continuum between parallel and convergent evolution, while others maintain that despite some overlap, there are still important distinctions between the two.
When the ancestral forms are unspecified or unknown, or the range of traits considered is not clearly specified, the distinction between parallel and convergent evolution becomes more subjective. For instance, the striking example of similar placental and marsupial forms is described by Richard Dawkins in The Blind Watchmaker as a case of convergent evolution, because mammals on each continent had a long evolutionary history prior to the extinction of the dinosaurs under which to accumulate relevant differences.
At molecular level
Proteins
Protease active sites
The enzymology of proteases provides some of the clearest examples of convergent evolution. These examples reflect the intrinsic chemical constraints on enzymes, leading evolution to converge on equivalent solutions independently and repeatedly.
Serine and cysteine proteases use different amino acid functional groups (alcohol or thiol) as a nucleophile. In order to activate that nucleophile, they orient an acidic and a basic residue in a catalytic triad. The chemical and physical constraints on enzyme catalysis have caused identical triad arrangements to evolve independently more than 20 times in different enzyme superfamilies.
Threonine proteases use the amino acid threonine as their catalytic nucleophile. Unlike cysteine and serine, threonine is a secondary alcohol (i.e. has a methyl group). The methyl group of threonine greatly restricts the possible orientations of triad and substrate, as the methyl clashes with either the enzyme backbone or the histidine base. Consequently, most threonine proteases use an N-terminal threonine in order to avoid such steric clashes.
Several evolutionarily independent enzyme superfamilies with different protein folds use the N-terminal residue as a nucleophile. This commonality of active site but difference of protein fold indicates that the active site evolved convergently in those families.
Cone snail and fish insulin
Conus geographus produces a distinct form of insulin that is more similar to fish insulin protein sequences than to insulin from more closely related molluscs, suggesting convergent evolution, though with the possibility of horizontal gene transfer.
Ferrous iron uptake via protein transporters in land plants and chlorophytes
Distant homologues of the metal ion transporters ZIP in land plants and chlorophytes have converged in structure, likely to take up Fe2+ efficiently. The IRT1 proteins from Arabidopsis thaliana and rice have amino acid sequences extremely different from that of Chlamydomonas IRT1, but their three-dimensional structures are similar, suggesting convergent evolution.
Na+,K+-ATPase and insect resistance to cardiotonic steroids
Many examples of convergent evolution exist in insects in terms of developing resistance at a molecular level to toxins. One well-characterized example is the evolution of resistance to cardiotonic steroids (CTSs) via amino acid substitutions at well-defined positions of the α-subunit of Na+,K+-ATPase (ATPalpha). Variation in ATPalpha has been surveyed in various CTS-adapted species spanning six insect orders. Among 21 CTS-adapted species, 58 (76%) of 76 amino acid substitutions at sites implicated in CTS resistance occur in parallel in at least two lineages. 30 of these substitutions (40%) occur at just two sites in the protein (positions 111 and 122). CTS-adapted species have also recurrently evolved neo-functionalized duplications of ATPalpha, with convergent tissue-specific expression patterns.
Nucleic acids
Convergence occurs at the level of DNA and the amino acid sequences produced by translating structural genes into proteins. Studies have found convergence in amino acid sequences in echolocating bats and the dolphin; among marine mammals; between giant and red pandas; and between the thylacine and canids. Convergence has also been detected in a type of non-coding DNA, cis-regulatory elements, such as in their rates of evolution; this could indicate either positive selection or relaxed purifying selection.
In animal morphology
Body plans
Swimming animals, including fish such as herrings, marine mammals such as dolphins, and the ichthyosaurs of the Mesozoic, all converged on the same streamlined shape. A similar shape and similar swimming adaptations are even present in molluscs, such as Phylliroe. The fusiform body shape (a tube tapered at both ends) adopted by many aquatic animals is an adaptation enabling them to travel at high speed in a high-drag environment. Similar body shapes are found in the earless seals and the eared seals: they still have four legs, but these are strongly modified for swimming.
The marsupial fauna of Australia and the placental mammals of the Old World have several strikingly similar forms, developed in two clades, isolated from each other. The body, and especially the skull shape, of the thylacine (Tasmanian tiger or Tasmanian wolf) converged with those of Canidae such as the red fox, Vulpes vulpes.
Echolocation
As a sensory adaptation, echolocation has evolved separately in cetaceans (dolphins and whales) and bats, but from the same genetic mutations.
Electric fishes
The Gymnotiformes of South America and the Mormyridae of Africa independently evolved passive electroreception (around 119 and 110 million years ago, respectively). Around 20 million years after acquiring that ability, both groups evolved active electrogenesis, producing weak electric fields to help them detect prey.
Eyes
One of the best-known examples of convergent evolution is the camera eye of cephalopods (such as squid and octopus), vertebrates (including mammals) and cnidarians (such as jellyfish). Their last common ancestor had at most a simple photoreceptive spot, but a range of processes led to the progressive refinement of camera eyes, with one sharp difference: the cephalopod eye is "wired" in the opposite direction, with blood vessels and nerve fibres entering from the back of the retina, rather than the front as in vertebrates. As a result, vertebrates have a blind spot.
Sex organs
Hydrostatic penises have convergently evolved at least six times in male amniotes. In these species, males copulate with females and internally fertilize their eggs. Similar intromittent organs have evolved in invertebrates such as octopuses and gastropods.
Flight
Birds and bats have homologous limbs because they are both ultimately derived from terrestrial tetrapods, but their flight mechanisms are only analogous, so their wings are examples of functional convergence. The two groups have independently evolved their own means of powered flight. Their wings differ substantially in construction. The bat wing is a membrane stretched across four extremely elongated fingers and the legs. The airfoil of the bird wing is made of feathers, strongly attached to the forearm (the ulna) and the highly fused bones of the wrist and hand (the carpometacarpus), with only tiny remnants of two fingers remaining, each anchoring a single feather. So, while the wings of bats and birds are functionally convergent, they are not anatomically convergent. Birds and bats also share a high concentration of cerebrosides in the skin of their wings. This improves skin flexibility, a trait useful for flying animals; other mammals have a far lower concentration. The extinct pterosaurs independently evolved wings from their fore- and hindlimbs, while insects have wings that evolved separately from different organs.
Flying squirrels and sugar gliders are much alike in their mammalian body plans, with gliding wings stretched between their limbs, but flying squirrels are placentals while sugar gliders are marsupials, widely separated within the mammal lineage from the placentals.
Hummingbird hawk-moths and hummingbirds have evolved similar flight and feeding patterns.
Insect mouthparts
Insect mouthparts show many examples of convergent evolution. The mouthparts of different insect groups consist of a set of homologous organs, specialised for the dietary intake of that insect group. Convergent evolution of many groups of insects led from original biting-chewing mouthparts to different, more specialised, derived function types. These include, for example, the proboscis of flower-visiting insects such as bees and flower beetles, or the biting-sucking mouthparts of blood-sucking insects such as fleas and mosquitos.
Opposable thumbs
Opposable thumbs allowing the grasping of objects are most often associated with primates, like humans and other apes, monkeys, and lemurs. Opposable thumbs also evolved in giant pandas, but these are completely different in structure, having six fingers including the thumb, which develops from a wrist bone entirely separately from other fingers.
Primates
Convergent evolution in humans includes blue eye colour and light skin colour. When humans migrated out of Africa, they moved to more northern latitudes with less intense sunlight. It was beneficial to them to reduce their skin pigmentation. It appears certain that there was some lightening of skin colour before European and East Asian lineages diverged, as there are some skin-lightening genetic differences that are common to both groups. However, after the lineages diverged and became genetically isolated, the skin of both groups lightened more, and that additional lightening was due to different genetic changes.
Lemurs and humans are both primates. Ancestral primates had brown eyes, as most primates do today. The genetic basis of blue eyes in humans has been studied in detail and much is known about it. It is not the case that one gene locus is responsible, say with brown dominant to blue eye colour. However, a single locus is responsible for about 80% of the variation. In lemurs, the differences between blue and brown eyes are not completely known, but the same gene locus is not involved.
In plants
The annual life-cycle
While most plant species are perennial, about 6% follow an annual life cycle, living for only one growing season. The annual life cycle independently emerged in over 120 plant families of angiosperms. The prevalence of annual species increases under hot-dry summer conditions in the four species-rich families of annuals (Asteraceae, Brassicaceae, Fabaceae, and Poaceae), indicating that the annual life cycle is adaptive.
Carbon fixation
C4 photosynthesis, one of the three major carbon-fixing biochemical processes, has arisen independently up to 40 times. About 7,600 species of angiosperms use C4 carbon fixation, including many monocots, among them 46% of grasses such as maize and sugar cane, and dicots, including several species in the Chenopodiaceae and the Amaranthaceae.
Fruits
Fruits with a wide variety of structural origins have converged to become edible. Apples are pomes with five carpels; their accessory tissues form the apple's core, surrounded by structures from outside the botanical fruit, the receptacle or hypanthium. Other edible fruits include other plant tissues; the fleshy part of a tomato is the walls of the pericarp. This implies convergent evolution under selective pressure, in this case the competition for seed dispersal by animals through consumption of fleshy fruits.
Seed dispersal by ants (myrmecochory) has evolved independently more than 100 times, and is present in more than 11,000 plant species. It is one of the most dramatic examples of convergent evolution in biology.
Carnivory
Carnivory has evolved multiple times independently in plants in widely separated groups. In three species studied, Cephalotus follicularis, Nepenthes alata and Sarracenia purpurea, there has been convergence at the molecular level. Carnivorous plants secrete enzymes into the digestive fluid they produce. By studying phosphatase, glycoside hydrolase, glucanase, RNAse and chitinase enzymes as well as a pathogenesis-related protein and a thaumatin-related protein, the authors found many convergent amino acid substitutions. These changes were not at the enzymes' catalytic sites, but rather on the exposed surfaces of the proteins, where they might interact with other components of the cell or the digestive fluid. The authors also found that homologous genes in the non-carnivorous plant Arabidopsis thaliana tend to have their expression increased when the plant is stressed, leading the authors to suggest that stress-responsive proteins have often been co-opted in the repeated evolution of carnivory.
Methods of inference
Phylogenetic reconstruction and ancestral state reconstruction proceed by assuming that evolution has occurred without convergence. Convergent patterns may, however, appear at higher levels in a phylogenetic reconstruction, and are sometimes explicitly sought by investigators. The methods applied to infer convergent evolution depend on whether pattern-based or process-based convergence is expected. Pattern-based convergence is the broader term, for when two or more lineages independently evolve patterns of similar traits. Process-based convergence is when the convergence is due to similar forces of natural selection.
Pattern-based measures
Earlier methods for measuring convergence incorporate ratios of phenotypic and phylogenetic distance by simulating evolution with a Brownian motion model of trait evolution along a phylogeny. More recent methods also quantify the strength of convergence. One drawback to keep in mind is that these methods can confuse long-term stasis with convergence due to phenotypic similarities. Stasis occurs when there is little evolutionary change among taxa.
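A minimal sketch of such a pattern-based test, under a drastically simplified assumption of two lineages diverging from one ancestor: simulate Brownian-motion trait evolution many times to build a null distribution of phenotypic distances, then ask how often drift alone produces lineages as similar as those observed. All parameter values here are hypothetical:

```python
import random

# Simulate two lineages evolving independently from a shared ancestral
# trait value under Brownian motion, and return their final distance.
def simulate_pair(t_split: float, sigma: float = 1.0, steps: int = 200) -> float:
    dt = t_split / steps
    a = b = 0.0  # shared ancestral trait value
    for _ in range(steps):
        a += random.gauss(0.0, sigma * dt ** 0.5)
        b += random.gauss(0.0, sigma * dt ** 0.5)
    return abs(a - b)

random.seed(1)
null_distances = [simulate_pair(t_split=10.0) for _ in range(2000)]

observed = 0.5  # hypothetical observed phenotypic distance between taxa
p_as_similar = sum(d <= observed for d in null_distances) / len(null_distances)
print(p_as_similar)  # a small value means the taxa are more similar than
                     # drift alone typically produces - a convergence signal
```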
Distance-based measures assess the degree of similarity between lineages over time. Frequency-based measures assess the number of lineages that have evolved into a particular trait space.
Process-based measures
Methods to infer process-based convergence fit models of selection to a phylogeny and continuous trait data to determine whether the same selective forces have acted upon lineages. This uses the Ornstein–Uhlenbeck process to test different scenarios of selection. Other methods rely on an a priori specification of where shifts in selection have occurred.
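The Ornstein–Uhlenbeck process extends Brownian motion with a pull of strength alpha toward an optimum theta, which is how these models represent a selective regime. A toy Euler–Maruyama simulation, with all parameter values hypothetical:

```python
import random

# Ornstein-Uhlenbeck trait evolution: dX = alpha*(theta - X)*dt + sigma*dW.
# Unlike Brownian motion, the trait is pulled toward the optimum theta.
def simulate_ou(t: float, alpha: float, theta: float,
                sigma: float = 1.0, x0: float = 0.0, steps: int = 1000) -> float:
    dt = t / steps
    x = x0
    for _ in range(steps):
        x += alpha * (theta - x) * dt + random.gauss(0.0, sigma * dt ** 0.5)
    return x

random.seed(2)
# Two independent lineages under the same assumed optimum end up close
# together - convergence driven by a shared selective regime:
print(simulate_ou(10.0, alpha=1.0, theta=5.0),
      simulate_ou(10.0, alpha=1.0, theta=5.0))
```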
| Biology and health sciences | Basics_4 | Biology |
82867 | https://en.wikipedia.org/wiki/Clementine | Clementine | A clementine (Citrus × clementina) is a tangor, a citrus fruit hybrid between a willowleaf mandarin orange (C. × deliciosa) and a sweet orange (C. × sinensis), named in honor of Clément Rodier, a French missionary who first discovered and propagated the cultivar in Algeria. The exterior is a deep orange colour with a smooth, glossy appearance. Clementines can be separated into 7 to 14 segments. Similar to tangerines, they tend to be easy to peel. They are typically juicy and sweet, with less acid than oranges. Their oils, like other citrus fruits, contain mostly limonene as well as myrcene, linalool, α-pinene and many complex aromatics.
They are sometimes sold under the name Easy-peelers.
History
The clementine is a spontaneous citrus hybrid that arose in the late 19th century in Misserghin, Algeria, in the garden of the orphanage of the French Missionary Brother Clément Rodier, for whom it would be formally named in 1902. Some sources have attributed an earlier origin for the hybrid, pointing to similar fruit native to the provinces of Guangxi and Guangdong in present-day China, but these are likely distinct mandarin hybrids, and genomic analysis of the clementine has shown it to have arisen from a cross between a sweet orange (Citrus × sinensis) and the Mediterranean willowleaf mandarin (Citrus × deliciosa), consistent with Algerian origin.
There are three types of clementines: seedless clementines, clementines (maximum of 10 seeds), and Monreal (more than 10 seeds). Clementines resemble other citrus varieties such as the satsuma and tangerines.
Cultivation
Clementines differ from other citrus in having a lower heat requirement, which means that their tolerance regarding fruit maturity and their sensitivity to unfavorable conditions during the flowering and fruit-setting period are higher. In regions of high total heat, however, the Clementine bears fruit early, only slightly later than satsuma mandarins. These regions, such as North Africa, the Mediterranean basin, and California, also favor maximizing Clementine size and quality.
It was introduced into California commercial agriculture in 1914, though it was grown at the Citrus Research Center (now part of the University of California, Riverside) as early as 1909. Clementines lose their desirable seedless characteristic when they are cross-pollinated with other fruit. In 2006, to prevent this, growers such as Paramount Citrus in California threatened to sue local beekeepers to keep bees away from their crops.
Types
Seedless – exists in North Africa. Seedless versions of the clementine are known as the common type (seedless or practically seedless). Common Clementines are very similar to the Monreal type; the two types are virtually identical in terms of tree characteristics. The seedless Clementine tree is self-incompatible, which is why the fruit has so few or no seeds; to set seed, it must be cross-pollinated.
Monreal – exists in North Africa. The Monreal clementine can self-pollinate and has seeds. Monreal clementines are on average larger than the seedless variety, have a more abundant bloom, and are sweeter.
Sweetclems – typically grown in Spain and northern Africa. Unlike other Clementine varieties, they usually have 10 segments. They are specialised to be easy to peel. As the name suggests, they have a sweet taste, though it is mild rather than overbearing. The sweetclem is sold under several other brand names, and can also be referred to as an easy-peeler, a clemengold, and a clemcott, amongst others.
Varieties
Algerian, the original Rodier cultivar.
Fina, a Spanish cultivar originally grown on a bitter orange rootstock that gave it superb flavor; owing to disease vulnerability it is now grown on a broader range of rootstocks, which affects the flavor profile.
Clemenules or Nules – a popular, seedless, easy-to-peel clementine with a very pleasing sweet flavor. A mutation of the Fina variety, Nules is the most widely planted clementine in Spain, where it matures from mid-November to mid- or late January. It is also widely planted in California, where it matures from October to December. It produces seedless fruit that is larger than the Fina, but less sweet.
Clementine del Golfo di Taranto, a (practically) seedless Italian cultivar given Protected geographical indication (PGI) status by the European Union, produced around the Gulf of Taranto. They have a sweet flavour and an intense aroma.
Clementine di Calabria, another Italian PGI variety, grown in the Calabria region.
Nutrition
A typical clementine contains 87% water, 12% carbohydrates, and negligible amounts of fat and protein (table). Among micronutrients, only vitamin C is in significant content (59% of the Daily Value) in a 100 gram reference serving, with all other nutrients in low amounts.
Potential drug interactions
A 2017 study indicated that clementine phytochemicals may interact with drugs in a manner similar to those of grapefruit. A follow-up study in 2019, however, has called these results into question.
| Biology and health sciences | Citrus fruits | Plants |
82871 | https://en.wikipedia.org/wiki/Data%20model | Data model | A data model is an abstract model that organizes elements of data and standardizes how they relate to one another and to the properties of real-world entities. For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner.
The corresponding professional activity is called generally data modeling or, more specifically, database design.
Data models are typically specified by a data expert, data specialist, data scientist, data librarian, or a data scholar.
A data modeling language and notation are often represented in graphical form as diagrams.
A data model can sometimes be referred to as a data structure, especially in the context of programming languages. Data models are often complemented by function models, especially in the context of enterprise models.
A data model explicitly determines the structure of data; conversely, structured data is data organized according to an explicit data model or data structure. Structured data is in contrast to unstructured data and semi-structured data.
Overview
The term data model can refer to two distinct but closely related concepts. Sometimes it refers to an abstract formalization of the objects and relationships found in a particular application domain: for example the customers, products, and orders found in a manufacturing organization. At other times it refers to the set of concepts used in defining such formalizations: for example concepts such as entities, attributes, relations, or tables. So the "data model" of a banking application may be defined using the entity–relationship "data model". This article uses the term in both senses.
Managing large quantities of structured and unstructured data is a primary function of information systems. Data models describe the structure, manipulation, and integrity aspects of the data stored in data management systems such as relational databases. They may also describe data with a looser structure, such as word processing documents, email messages, pictures, digital audio, and video: XDM, for example, provides a data model for XML documents.
The role of data models
The main aim of data models is to support the development of information systems by providing the definition and format of data. According to West and Fowler (1999) "if this is done consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data. The results of this are indicated above. However, systems and interfaces often cost more than they should, to build, operate, and maintain. They may also constrain the business rather than support it. A major cause is that the quality of the data models implemented in systems and interfaces is poor".
"Business rules, specific to how things are done in a particular place, are often fixed in the structure of a data model. This means that small changes in the way business is conducted lead to large changes in computer systems and interfaces".
"Entity types are often not identified, or incorrectly identified. This can lead to replication of data, data structure, and functionality, together with the attendant costs of that duplication in development and maintenance".
"Data models for different systems are arbitrarily different. The result of this is that complex interfaces are required between systems that share data. These interfaces can account for between 25-70% of the cost of current systems".
"Data cannot be shared electronically with customers and suppliers, because the structure and meaning of data has not been standardized. For example, engineering design data and drawings for process plant are still sometimes exchanged on paper".
The reason for these problems is a lack of standards that will ensure that data models will both meet business needs and be consistent.
A data model explicitly determines the structure of data. Typical applications of data models include database models, design of information systems, and enabling exchange of data. Usually, data models are specified in a data modeling language.
Three perspectives
A data model instance may be one of three kinds according to ANSI in 1975:
Conceptual data model: describes the semantics of a domain, being the scope of the model. For example, it may be a model of the interest area of an organization or industry. This consists of entity classes, representing kinds of things of significance in the domain, and relationship assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial 'language' with a scope that is limited by the scope of the model.
Logical data model: describes the semantics, as represented by a particular data manipulation technology. This consists of descriptions of tables and columns, object-oriented classes, and XML tags, among other things.
Physical data model: describes the physical means by which data are stored. This is concerned with partitions, CPUs, tablespaces, and the like.
The significance of this approach, according to ANSI, is that it allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual model. The table/column structure can change without (necessarily) affecting the conceptual model. In each case, of course, the structures must remain consistent with the other model. The table/column structure may be different from a direct translation of the entity classes and attributes, but it must ultimately carry out the objectives of the conceptual entity class structure. Early phases of many software development projects emphasize the design of a conceptual data model. Such a design can be detailed into a logical data model. In later stages, this model may be translated into physical data model. However, it is also possible to implement a conceptual model directly.
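As an informal illustration of the three perspectives, the same hypothetical "Customer places Order" domain can be written down at each level. All names and fields here are invented for the sketch; this is Python used as a notation, not any particular modeling tool:

```python
# Conceptual: entity classes and a relationship assertion, with no
# commitment to any data technology.
conceptual = {
    "entities": ["Customer", "Order"],
    "relationships": [("Customer", "places", "Order")],
}

# Logical: the same semantics rendered for a relational technology,
# as tables, columns, and keys.
logical = {
    "customer": {"customer_id": "INTEGER PRIMARY KEY", "name": "TEXT"},
    "order": {"order_id": "INTEGER PRIMARY KEY",
              "customer_id": "INTEGER REFERENCES customer"},
}

# Physical: storage decisions invisible at the other two levels; these
# can change without touching the conceptual or logical model.
physical = {"customer": {"tablespace": "ts_main", "partitions": 4}}
```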
History
One of the earliest pioneering works in modeling information systems was done by Young and Kent (1958), who argued for "a precise and abstract way of specifying the informational and time characteristics of a data processing problem". They wanted to create "a notation that should enable the analyst to organize the problem around any piece of hardware". Their work was the first effort to create an abstract specification and invariant basis for designing different alternative implementations using different hardware components. The next step in IS modeling was taken by CODASYL, an IT industry consortium formed in 1959, who essentially aimed at the same thing as Young and Kent: the development of "a proper structure for machine-independent problem definition language, at the system level of data processing". This led to the development of a specific IS information algebra.
In the 1960s data modeling gained more significance with the initiation of the management information system (MIS) concept. According to Leondes (2002), "during that time, the information system provided the data and information for management purposes. The first generation database system, called Integrated Data Store (IDS), was designed by Charles Bachman at General Electric. Two famous database models, the network data model and the hierarchical data model, were proposed during this period of time". Towards the end of the 1960s, Edgar F. Codd worked out his theories of data arrangement, and proposed the relational model for database management based on first-order predicate logic.
In the 1970s entity–relationship modeling emerged as a new type of conceptual data modeling, originally formalized in 1976 by Peter Chen. Entity–relationship models were being used in the first stage of information system design during the requirements analysis to describe information needs or the type of information that is to be stored in a database. This technique can describe any ontology, i.e., an overview and classification of concepts and their relationships, for a certain area of interest.
In the 1970s G.M. Nijssen developed the "Natural Language Information Analysis Method" (NIAM), and in the 1980s, in cooperation with Terry Halpin, developed it into Object–Role Modeling (ORM). However, it was Terry Halpin's 1989 PhD thesis that created the formal foundation on which Object–Role Modeling is based.
Bill Kent, in his 1978 book Data and Reality, compared a data model to a map of a territory, emphasizing that in the real world, "highways are not painted red, rivers don't have county lines running down the middle, and you can't see contour lines on a mountain". In contrast to other researchers who tried to create models that were mathematically clean and elegant, Kent emphasized the essential messiness of the real world, and the task of the data modeler to create order out of chaos without excessively distorting the truth.
In the 1980s, according to Jan L. Harrington (2000), "the development of the object-oriented paradigm brought about a fundamental change in the way we look at data and the procedures that operate on data. Traditionally, data and procedures have been stored separately: the data and their relationship in a database, the procedures in an application program. Object orientation, however, combined an entity's procedure with its data."
During the early 1990s, three Dutch mathematicians, Guido Bakema, Harm van der Lek, and JanPieter Zwart, continued the development of the work of G.M. Nijssen. They focused more on the communication part of the semantics. In 1997 they formalized the method Fully Communication Oriented Information Modeling (FCO-IM).
Types
Database model
A database model is a specification describing how a database is structured and used.
Several such models have been suggested. Common models include:
Flat model
This may not strictly qualify as a data model. The flat (or table) model consists of a single, two-dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another.
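A minimal sketch of a flat model as a single two-dimensional array, with invented data:

```python
# One two-dimensional array: each column holds values of one kind, and
# the members of a row are related to one another.
flat = [
    ["name",  "city",     "age"],  # header row
    ["Ada",   "London",   36],
    ["Grace", "New York", 45],
]

ages = [row[2] for row in flat[1:]]  # all members of the "age" column
print(ages)  # [36, 45]
```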
Hierarchical model
The hierarchical model is similar to the network model except that links in the hierarchical model form a tree structure, while the network model allows arbitrary graphs.
Network model
This model organizes data using two fundamental constructs, called records and sets. Records contain fields, and sets define one-to-many relationships between records: one owner, many members. The network data model is an abstraction of the design concept used in the implementation of databases.
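The owner/member structure of records and sets can be sketched directly; the record keys and set name below are hypothetical:

```python
# Records contain fields; a set links one owner record to many members.
records = {
    "dept-1": {"name": "Sales"},
    "emp-1": {"name": "Ada"},
    "emp-2": {"name": "Grace"},
}

# One set occurrence: owner -> members (a one-to-many relationship).
sets = {"works-in": {"dept-1": ["emp-1", "emp-2"]}}

for member in sets["works-in"]["dept-1"]:
    print(records[member]["name"], "works in", records["dept-1"]["name"])
```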
Relational model
The relational model is a database model based on first-order predicate logic. Its core idea is to describe a database as a collection of predicates over a finite set of predicate variables, describing constraints on the possible values and combinations of values. The power of the relational data model lies in its mathematical foundations and a simple user-level paradigm.
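A toy rendering of that idea: a relation as a set of tuples, with an integrity constraint expressed as a predicate that every tuple must satisfy. The data is invented for the sketch:

```python
# A relation is a set of tuples over (name, department, age).
employees = {
    ("Ada", "Sales", 36),
    ("Grace", "R&D", 45),
}

# An integrity constraint as a predicate on tuples.
def age_is_valid(t) -> bool:
    return isinstance(t[2], int) and 0 < t[2] < 150

# The database state is consistent when every tuple satisfies it.
assert all(age_is_valid(t) for t in employees)
```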
Object–relational model
Similar to a relational database model, but objects, classes, and inheritance are directly supported in database schemas and in the query language.
Object–role modeling
A method of data modeling that has been defined as "attribute free", and "fact-based". The result is a verifiably correct system, from which other common artifacts, such as ERD, UML, and semantic models may be derived. Associations between data objects are described during the database design procedure, such that normalization is an inevitable result of the process.
Star schema
The simplest style of data warehouse schema. The star schema consists of a few "fact tables" (possibly only one, justifying the name) referencing any number of "dimension tables". The star schema is considered an important special case of the snowflake schema.
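A minimal star-schema sketch, with one fact table referencing two dimension tables by key; all table and column names are invented:

```python
# Dimension tables, keyed by surrogate ids.
dim_product = {1: {"name": "widget"}, 2: {"name": "gadget"}}
dim_date = {10: {"year": 2023}, 11: {"year": 2024}}

# The fact table: foreign keys into the dimensions plus a measure.
fact_sales = [
    {"product_id": 1, "date_id": 10, "units": 5},
    {"product_id": 1, "date_id": 11, "units": 2},
    {"product_id": 2, "date_id": 11, "units": 3},
]

# A typical star-schema query: total units per product name.
totals: dict[str, int] = {}
for row in fact_sales:
    name = dim_product[row["product_id"]]["name"]
    totals[name] = totals.get(name, 0) + row["units"]
print(totals)  # {'widget': 7, 'gadget': 3}
```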
Data structure diagram
A data structure diagram (DSD) is a diagram and data model used to describe conceptual data models by providing graphical notations which document entities and their relationships, and the constraints that bind them. The basic graphic elements of DSDs are boxes, representing entities, and arrows, representing relationships. Data structure diagrams are most useful for documenting complex data entities.
Data structure diagrams are an extension of the entity–relationship model (ER model). In DSDs, attributes are specified inside the entity boxes rather than outside of them, while relationships are drawn as boxes composed of attributes which specify the constraints that bind entities together. DSDs differ from the ER model in that the ER model focuses on the relationships between different entities, whereas DSDs focus on the relationships of the elements within an entity and enable users to fully see the links and relationships between each entity.
There are several styles for representing data structure diagrams, with the notable difference in the manner of defining cardinality. The choices are between arrow heads, inverted arrow heads (crow's feet), or numerical representation of the cardinality.
Entity–relationship model
An entity–relationship model (ERM), sometimes referred to as an entity–relationship diagram (ERD), can be used to represent an abstract conceptual data model (or semantic data model or physical data model) used in software engineering to represent structured data. There are several notations used for ERMs. As in DSDs, attributes are specified inside the entity boxes rather than outside of them, while relationships are drawn as lines, with the relationship constraints as descriptions on the line. The E-R model, while robust, can become visually cumbersome when representing entities with several attributes.
Geographic data model
A data model in Geographic information systems is a mathematical construct for representing geographic objects or surfaces as data. For example,
the vector data model represents geography as points, lines, and polygons
the raster data model represents geography as cell matrices that store numeric values;
and the Triangulated irregular network (TIN) data model represents geography as sets of contiguous, nonoverlapping triangles.
Generic data model
Generic data models are generalizations of conventional data models. They define standardized general relation types, together with the kinds of things that may be related by such a relation type. Generic data models are developed as an approach to solving some shortcomings of conventional data models. For example, different modelers usually produce different conventional data models of the same domain. This can lead to difficulty in bringing the models of different people together and is an obstacle for data exchange and data integration. Invariably, however, this difference is attributable to different levels of abstraction in the models and differences in the kinds of facts that can be instantiated (the semantic expression capabilities of the models). The modelers need to communicate and agree on certain elements that are to be rendered more concretely, in order to make the differences less significant.
Semantic data model
A semantic data model in software engineering is a technique to define the meaning of data within the context of its interrelationships with other data. A semantic data model is an abstraction that defines how the stored symbols relate to the real world. A semantic data model is sometimes called a conceptual data model.
The logical data structure of a database management system (DBMS), whether hierarchical, network, or relational, cannot totally satisfy the requirements for a conceptual definition of data because it is limited in scope and biased toward the implementation strategy employed by the DBMS. Therefore, the need to define data from a conceptual view has led to the development of semantic data modeling techniques, that is, techniques to define the meaning of data within the context of its interrelationships with other data. The real world, in terms of resources, ideas, events, and so on, is symbolically defined within physical data stores. A semantic data model is an abstraction that defines how the stored symbols relate to the real world. Thus, the model must be a true representation of the real world.
Topics
Data architecture
Data architecture is the design of data for use in defining the target state and the subsequent planning needed to hit the target state. It is usually one of several architecture domains that form the pillars of an enterprise architecture or solution architecture.
A data architecture describes the data structures used by a business and/or its applications. There are descriptions of data in storage and data in motion; descriptions of data stores, data groups, and data items; and mappings of those data artifacts to data qualities, applications, locations, etc.
Essential to realizing the target state, data architecture describes how data is processed, stored, and utilized in a given system. It provides criteria for data processing operations that make it possible to design data flows and also to control the flow of data in the system.
Data modeling
Data modeling in software engineering is the process of creating a data model by applying formal data model descriptions using data modeling techniques. Data modeling is a technique for defining business requirements for a database. It is sometimes called database modeling because a data model is eventually implemented in a database.
In current practice, a conceptual data model is developed based on the data requirements for the application that is being developed, perhaps in the context of an activity model. The data model will normally consist of entity types, attributes, relationships, integrity rules, and the definitions of those objects. This is then used as the starting point for interface or database design.
Data properties
Some important properties of data for which requirements need to be met are:
definition-related properties
relevance: the usefulness of the data in the context of your business.
clarity: the availability of a clear and shared definition for the data.
consistency: the compatibility of the same type of data from different sources.
content-related properties
timeliness: the availability of data at the time required and how up-to-date that data is.
accuracy: how close to the truth the data is.
properties related to both definition and content
completeness: how much of the required data is available.
accessibility: where, how, and to whom the data is available or not available (e.g. security).
cost: the cost incurred in obtaining the data, and making it available for use.
Data organization
Another kind of data model describes how to organize data using a database management system or other data management technology. It describes, for example, relational tables and columns or object-oriented classes and attributes. Such a data model is sometimes referred to as the physical data model, but in the original ANSI three schema architecture, it is called "logical". In that architecture, the physical model describes the storage media (cylinders, tracks, and tablespaces). Ideally, this model is derived from the more conceptual data model described above. It may differ, however, to account for constraints like processing capacity and usage patterns.
While data analysis is a common term for data modeling, the activity actually has more in common with the ideas and methods of synthesis (inferring general concepts from particular instances) than it does with analysis (identifying component concepts from more general ones). (Presumably we call ourselves systems analysts because no one can say systems synthesists.) Data modeling strives to bring the data structures of interest together into a cohesive, inseparable whole by eliminating unnecessary data redundancies and by relating data structures with relationships.
A different approach is to use adaptive systems such as artificial neural networks that can autonomously create implicit models of data.
Data structure
A data structure is a way of storing data in a computer so that it can be used efficiently. It is an organization of mathematical and logical concepts of data. Often a carefully chosen data structure will allow the most efficient algorithm to be used. The choice of the data structure often begins from the choice of an abstract data type.
A data model describes the structure of the data within a given domain and, by implication, the underlying structure of that domain itself. This means that a data model in fact specifies a dedicated grammar for a dedicated artificial language for that domain. A data model represents classes of entities (kinds of things) about which a company wishes to hold information, the attributes of that information, and relationships among those entities and (often implicit) relationships among those attributes. The model describes the organization of the data to some extent irrespective of how data might be represented in a computer system.
The entities represented by a data model can be the tangible entities, but models that include such concrete entity classes tend to change over time. Robust data models often identify abstractions of such entities. For example, a data model might include an entity class called "Person", representing all the people who interact with an organization. Such an abstract entity class is typically more appropriate than ones called "Vendor" or "Employee", which identify specific roles played by those people.
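A small sketch of that modeling choice, with hypothetical names: one Person entity class carrying roles, rather than separate Vendor and Employee entity types:

```python
# One abstract entity class for people, with roles attached, so the same
# person can act as both employee and vendor without duplicate records.
class Person:
    def __init__(self, name: str):
        self.name = name
        self.roles: set[str] = set()

p = Person("Ada")
p.roles.add("Employee")
p.roles.add("Vendor")
print(p.name, sorted(p.roles))  # Ada ['Employee', 'Vendor']
```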
Data model theory
The term data model can have two meanings:
A data model theory, i.e. a formal description of how data may be structured and accessed.
A data model instance, i.e. applying a data model theory to create a practical data model instance for some particular application.
A data model theory has three main components:
The structural part: a collection of data structures which are used to create databases representing the entities or objects modeled by the database.
The integrity part: a collection of rules governing the constraints placed on these data structures to ensure structural integrity.
The manipulation part: a collection of operators which can be applied to the data structures, to update and query the data contained in the database.
For example, in the relational model, the structural part is based on a modified concept of the mathematical relation; the integrity part is expressed in first-order logic and the manipulation part is expressed using the relational algebra, tuple calculus and domain calculus.
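As an informal sketch of the manipulation part named above, the core relational-algebra operators can be written as small functions over relations represented as lists of dictionaries. The data is invented, and the join assumes non-empty relations:

```python
# Selection (sigma): keep tuples satisfying a predicate.
def select(rel, pred):
    return [t for t in rel if pred(t)]

# Projection (pi): keep only the named attributes.
def project(rel, attrs):
    return [{a: t[a] for a in attrs} for t in rel]

# Natural join: combine tuples that agree on all shared attributes.
def join(r, s):
    shared = set(r[0]) & set(s[0])
    return [{**t, **u} for t in r for u in s
            if all(t[a] == u[a] for a in shared)]

emp = [{"name": "Ada", "dept": "R&D"}, {"name": "Grace", "dept": "Sales"}]
dept = [{"dept": "R&D", "floor": 2}, {"dept": "Sales", "floor": 1}]

# Who works on floor 2?
print(project(select(join(emp, dept), lambda t: t["floor"] == 2), ["name"]))
# [{'name': 'Ada'}]
```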
A data model instance is created by applying a data model theory. This is typically done to solve some business enterprise requirement. Business requirements are normally captured by a semantic logical data model. This is transformed into a physical data model instance, from which a physical database is generated. For example, a data modeler may use a data modeling tool to create an entity–relationship model of the corporate data repository of some business enterprise. This model is transformed into a relational model, which in turn generates a relational database.
Patterns
Patterns are common data modeling structures that occur in many data models.
Related models
Data-flow diagram
A data-flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. It differs from the flowchart as it shows the data flow instead of the control flow of the program. A data-flow diagram can also be used for the visualization of data processing (structured design). Data-flow diagrams were invented by Larry Constantine, the original developer of structured design, based on Martin and Estrin's "data-flow graph" model of computation.
It is common practice to draw a context-level data-flow diagram first, which shows the interaction between the system and outside entities. The DFD is designed to show how a system is divided into smaller portions and to highlight the flow of data between those parts. This context-level data-flow diagram is then "exploded" to show more detail of the system being modeled.
Information model
An information model is not a type of data model, but more or less an alternative model. Within the field of software engineering, both a data model and an information model can be abstract, formal representations of entity types that include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or they may themselves be abstract, such as for the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations.
According to Lee (1999), an information model is a representation of concepts, relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse. It can provide a sharable, stable, and organized structure of information requirements for the domain context. More generally, the term information model is used for models of individual things, such as facilities, buildings, process plants, etc. In those cases the concept is specialised to Facility Information Model, Building Information Model, Plant Information Model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility.
An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity–relationship models or XML schemas.
Object model
An object model in computer science is a collection of objects or classes through which a program can examine and manipulate some specific parts of its world; in other words, it is the object-oriented interface to some service or system. Such an interface is said to be the object model of the represented service or system. For example, the Document Object Model (DOM) is a collection of objects that represent a page in a web browser, used by script programs to examine and dynamically change the page. There is a Microsoft Excel object model for controlling Microsoft Excel from another program, and the ASCOM Telescope Driver is an object model for controlling an astronomical telescope.
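A short sketch of the idea (Python's standard xml.dom.minidom stands in here for the browser DOM; the document is hypothetical):

from xml.dom.minidom import parseString

# A script examining and dynamically changing a document through its object model.
doc = parseString("<page><h1>Old title</h1></page>")
heading = doc.getElementsByTagName("h1")[0]
print(heading.firstChild.data)         # examine: "Old title"
heading.firstChild.data = "New title"  # manipulate the model
print(doc.toxml())                     # the document now carries the new title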
In computing, the term object model has a distinct second meaning: the general properties of objects in a specific computer programming language, technology, notation or methodology that uses them. Examples are the Java object model, the COM object model, and the object model of OMT. Such object models are usually defined using concepts such as class, message, inheritance, polymorphism, and encapsulation. There is an extensive literature on formalized object models as a subset of the formal semantics of programming languages.
Object–role modeling
Object–Role Modeling (ORM) is a method for conceptual modeling, and can be used as a tool for information and rules analysis.
Object–Role Modeling is a fact-oriented method for performing systems analysis at the conceptual level. The quality of a database application depends critically on its design. To help ensure correctness, clarity, adaptability and productivity, information systems are best specified first at the conceptual level, using concepts and language that people can readily understand.
The conceptual design may include data, process and behavioral perspectives, and the actual DBMS used to implement the design might be based on one of many logical data models (relational, hierarchic, network, object-oriented, etc.).
Unified Modeling Language models
The Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of software engineering. It is a graphical language for visualizing, specifying, constructing, and documenting the artifacts of a software-intensive system. The Unified Modeling Language offers a standard way to write a system's blueprints, including:
Conceptual things such as business processes and system functions
Concrete things such as programming language statements, database schemas, and
Reusable software components.
UML offers a mix of functional models, data models, and database models.
| Mathematics | Data structures and types | null |
82916 | https://en.wikipedia.org/wiki/Gear | Gear | A gear or gearwheel is a rotating machine part typically used to transmit rotational motion and/or torque by means of a series of teeth that engage with compatible teeth of another gear or other part. The teeth can be integral saliences or cavities machined on the part, or separate pegs inserted into it. In the latter case, the gear is usually called a cogwheel. A cog may be one of those pegs or the whole gear. Two or more meshing gears are called a gear train.
The smaller member of a pair of meshing gears is often called the pinion. Most commonly, gears and gear trains are used to trade torque for rotational speed between two axles or other rotating parts, to change the axis of rotation, or to invert the sense of rotation. A gear may also be used to transmit linear force and/or linear motion to a rack, a straight bar with a row of compatible teeth.
Gears are among the most common mechanical parts. They come in a great variety of shapes and materials, and are used for many different functions and applications. Diameters may range from a few μm in micromachines, to a few mm in watches and toys, to over 10 metres in some mining equipment. Other types of parts that are somewhat similar in shape and function to gears include the sprocket, which is meant to engage with a link chain instead of another gear, and the timing pulley, meant to engage a timing belt. Most gears are round and have equal teeth, designed to operate as smoothly as possible; but there are several applications for non-circular gears, and the Geneva drive has an extremely uneven operation, by design.
Gears can be seen as instances of the basic lever "machine". When a small gear drives a larger one, the mechanical advantage of this ideal lever causes the torque T to increase but the rotational speed ω to decrease. The opposite effect is obtained when a large gear drives a small one. The changes are proportional to the gear ratio r, the ratio of the tooth counts: namely, r = N2/N1, T2 = r T1, and ω2 = ω1/r. Depending on the geometry of the pair, the sense of rotation may also be inverted (from clockwise to anti-clockwise, or vice-versa).
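A minimal numeric sketch of that relation (Python; the function name and figures are illustrative, and friction losses are ignored):

def gear_pair_output(n_driver, n_driven, torque_in, speed_in):
    # Ideal lever model of a meshing gear pair.
    # n_driver, n_driven: tooth counts; torque and speed in any consistent units.
    r = n_driven / n_driver               # gear ratio r = N2/N1
    return torque_in * r, speed_in / r    # T2 = r T1, w2 = w1/r

# A 20-tooth pinion driving an 80-tooth gear quadruples torque and quarters speed:
print(gear_pair_output(20, 80, torque_in=10.0, speed_in=600.0))  # (40.0, 150.0)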
Most vehicles have a transmission or "gearbox" containing a set of gears that can be meshed in multiple configurations. The gearbox lets the operator vary the torque that is applied to the wheels without changing the engine's speed. Gearboxes are used also in many other machines, such as lathes and conveyor belts. In all those cases, terms like "first gear", "high gear", and "reverse gear" refer to the overall torque ratios of different meshing configurations, rather than to specific physical gears. These terms may be applied even when the vehicle does not actually contain gears, as in a continuously variable transmission.
History
The earliest surviving gears date from the 4th century BC in China (Zhan Guo times – Late East Zhou dynasty) and have been preserved at the Luoyang Museum of Henan Province, China.
In Europe, Aristotle mentions gears around 330 BC, as wheel drives in windlasses. He observed that the direction of rotation is reversed when one gear wheel drives another gear wheel. Philon of Byzantium was one of the first who used gears in water raising devices. Gears appear in works connected to Hero of Alexandria, in Roman Egypt circa AD 50, but can be traced back to the mechanics of the Library of Alexandria in 3rd-century BC Ptolemaic Egypt, and were greatly developed by the Greek polymath Archimedes (287–212 BC). The earliest surviving gears in Europe were found in the Antikythera mechanism, an example of a very early and intricate geared device, designed to calculate astronomical positions of the sun, moon, and planets, and predict eclipses. Its time of construction is now estimated between 150 and 100 BC.
The Chinese engineer Ma Jun (–265) described a south-pointing chariot. A set of differential gears connected to the wheels and to a pointer on top of the chariot kept the direction of the latter unchanged as the chariot turned.
Another early surviving example of a geared mechanism is a complex calendrical device, showing the phase of the Moon, the day of the month and the places of the Sun and the Moon in the Zodiac, which was invented in the Byzantine empire in the early 6th century.
Geared mechanical water clocks were built in China by 725.
Around 1221, a geared astrolabe was built in Isfahan showing the position of the moon in the zodiac and its phase, and the number of days since new moon.
The worm gear was invented in the Indian subcontinent, for use in roller cotton gins, some time during the 13th–14th centuries.
A complex astronomical clock, called the Astrarium, was built between 1348 and 1364 by Giovanni Dondi dell'Orologio. It had seven faces and 107 moving parts; it showed the positions of the sun, the moon and the five planets then known, as well as religious feast days. The Salisbury Cathedral clock, built in 1386, is the world's oldest geared mechanical clock that is still working.
Differential gears were used by the British clock maker Joseph Williamson in 1720.
However, the oldest functioning gears by far were created by Nature, and are seen in the hind legs of the nymphs of the planthopper insect Issus coleoptratus.
Etymology
The word gear is probably from Old Norse gørvi (plural gørvar) 'apparel, gear,' related to gøra, gørva 'to make, construct, build; set in order, prepare,' a common verb in Old Norse, "used in a wide range of situations from writing a book to dressing meat". In this context, the meaning of 'toothed wheel in machinery' first attested 1520s; specific mechanical sense of 'parts by which a motor communicates motion' is from 1814; specifically of a vehicle (bicycle, automobile, etc.) by 1888.
A cog is a tooth on a wheel. From Middle English cogge, from Old Norse (compare Norwegian kugg ('cog'), Swedish kugg, kugge ('cog, tooth')), from Proto-Germanic *kuggō (compare Dutch kogge ('cogboat'), German Kock), from Proto-Indo-European *gugā ('hump, ball') (compare Lithuanian gugà ('pommel, hump, hill'), from PIE *gēw- ('to bend, arch')). First used c. 1300 in the sense of 'a wheel having teeth or cogs'; late 14c., 'tooth on a wheel'; cog-wheel, early 15c.
Materials
The gears of the Antikythera mechanism are made of bronze, and the earliest surviving Chinese gears are made of iron. These metals, as well as tin, have been generally used for clocks and similar mechanisms to this day.
Historically, large gears, such as used in flour mills, were commonly made of wood rather than metal. They were cogwheels, made by inserting a series of wooden pegs or cogs around the rim of a wheel. The cogs were often made of maple wood.
Wooden gears have been gradually replaced by ones made of metal, such as cast iron at first, then steel and aluminum. Steel is most commonly used because of its high strength-to-weight ratio and low cost. Aluminum is not as strong as steel for the same geometry, but is lighter and easier to machine. Powder metallurgy may be used with alloys that cannot be easily cast or machined.
Still, because of cost or other considerations, some early metal gears had wooden cogs, each tooth forming a type of specialised 'through' mortise and tenon joint.
More recently, engineering plastics and composite materials have been replacing metals in many applications, especially those with moderate speed and torque. They are not as strong as steel, but are cheaper, can be mass-manufactured by injection molding, and don't need lubrication. Plastic gears may even be intentionally designed to be the weakest part in a mechanism, so that in case of jamming they will fail first and thus avoid damage to more expensive parts. Such sacrificial gears may be a simpler alternative to other overload-protection devices such as clutches and torque- or current-limited motors.
In spite of the advantages of metal and plastic, wood continued to be used for large gears until a couple of centuries ago, because of cost, weight, tradition, or other considerations. In 1967 the Thompson Manufacturing Company of Lancaster, New Hampshire still had a very active business in supplying tens of thousands of maple gear teeth per year, mostly for use in paper mills and grist mills, some dating back over 100 years.
Manufacture
The most common techniques for gear manufacturing are dies, sand, and investment casting; injection molding; powder metallurgy; blanking; and gear cutting.
As of 2014, an estimated 80% of all gearing produced worldwide is produced by net shape molding. Molded gearing is usually powder metallurgy, plastic injection, or metal die casting. Gears produced by powder metallurgy often require a sintering step after they are removed from the mold. Cast gears require gear cutting or other machining to shape the teeth to the necessary precision. The most common form of gear cutting is hobbing, but gear shaping, milling, and broaching may be used instead.
For metal gears intended for heavy duty operation, such as the transmissions of cars and trucks, the teeth are heat treated to make them hard and more wear resistant while leaving the core soft but tough. For large gears that are prone to warp, a quench press is used.
Gears can be made by 3D printing; however, this alternative is typically used only for prototypes or very limited production quantities, because of its high cost, low accuracy, and relatively low strength of the resulting part.
Comparison with other drive mechanisms
Besides gear trains, alternative methods of transmitting torque between non-coaxial parts include link chains driven by sprockets, friction drives, belts and pulleys, hydraulic couplings, and timing belts.
One major advantage of gears is that their rigid body and the snug interlocking of the teeth ensure precise tracking of the rotation across the gear train, limited only by backlash and other mechanical defects. For this reason they are favored in precision applications such as watches. Gear trains also can have fewer separate parts (only two) and have minimal power loss, minimal wear, and long life. Gears are also often the most efficient and compact way of transmitting torque between two non-parallel axes.
On the other hand, gears are more expensive to manufacture, may require periodic lubrication, and may have greater mass and rotational inertia than the equivalent pulleys. More importantly, the distance between the axes of matched gears is limited and cannot be changed once they are manufactured. There are also applications where slippage under overload or transients (as occurs with belts, hydraulics, and friction wheels) is not only acceptable but desirable.
Ideal gear model
For basic analysis purposes, each gear can be idealized as a perfectly rigid body that, in normal operation, turns around a rotation axis that is fixed in space, without sliding along it. Thus, each point of the gear can move only along a circle that is perpendicular to its axis and centered on it. At any moment t, all points of the gear will be rotating around that axis with the same angular speed ω(t), in the same sense. The speed need not be constant over time.
The action surface of the gear consists of all points of its surface that, in normal operation, may contact the matching gear with positive pressure. All other parts of the surface are irrelevant (except that they cannot be crossed by any part of the matching gear). In a gear with N teeth, the working surface has N-fold rotational symmetry about the axis, meaning that it is congruent with itself when the gear rotates by 1/N of a turn.
If the gear is meant to transmit or receive torque with a definite sense only (clockwise or counterclockwise with respect to some reference viewpoint), the action surface consists of N separate patches, the tooth faces; which have the same shape and are positioned in the same way relative to the axis, spaced 1/N turn apart.
If the torque on each gear may have both senses, the action surface will have two sets of N tooth faces; each set will be effective only while the torque has one specific sense, and the two sets can be analyzed independently of the other. However, in this case the gear usually has also "flip over" symmetry, so that the two sets of tooth faces are congruent after the gear is flipped. This arrangement ensures that the two gears are firmly locked together, at all times, with no backlash.
During operation, each point p of a tooth face will at some moment contact a point q on a tooth face of the matching gear. At that moment and at those points, the two faces must have the same perpendicular direction but opposite orientation. But since the two gears are rotating around different axes, the points p and q are moving along different circles; therefore, the contact cannot last more than one instant, and p will then either slide across the other face, or stop contacting it altogether.
On the other hand, at any given moment there is at least one such pair of contact points; usually more than one, even a whole line or surface of contact.
Actual gears deviate from this model in many ways: they are not perfectly rigid, their mounting does not ensure that the rotation axis will be perfectly fixed in space, the teeth may have slightly different shapes and spacing, the tooth faces are not perfectly smooth, and so on. Yet, these deviations from the ideal model can be ignored for a basic analysis of the operation of a gear set.
Relative axis position
One criterion for classifying gears is the relative position and direction of the axes of rotation of the gears that are to be meshed together.
Parallel
In the most common configuration, the axes of rotation of the two gears are parallel, and usually their sizes are such that they contact near a point between the two axes. In this configuration, the two gears turn in opposite senses.
Occasionally the axes are parallel but one gear is nested inside the other. In this configuration, both gears turn in the same sense.
If the two gears are cut by an imaginary plane perpendicular to the axes, each section of one gear will interact only with the corresponding section of the other gear. Thus the three-dimensional gear train can be understood as a stack of gears that are flat and infinitesimally thin — that is, essentially two-dimensional.
Crossed
In a crossed arrangement, the axes of rotation of the two gears are not parallel but cross at an arbitrary angle except zero or 180 degrees.
For best operation, each wheel then must be a bevel gear, whose overall shape is like a slice (frustum) of a cone whose apex is the meeting point of the two axes.
Bevel gears with equal numbers of teeth and shaft axes at 90 degrees are called miter (US) or mitre (UK) gears.
Independently of the angle between the axes, the larger of two unequal matching bevel gears may be internal or external, depending on the desired relative sense of rotation.
If the two gears are sliced by an imaginary sphere whose center is the point where the two axes cross, each section will remain on the surface of that sphere as the gear rotates, and the section of one gear will interact only with the corresponding section of the other gear. In this way, a pair of meshed 3D gears can be understood as a stack of nested infinitely thin cup-like gears.
Skew
The gears in a matching pair are said to be skew if their axes of rotation are skew lines: neither parallel nor intersecting.
In this case, the best shape for each pitch surface is neither cylindrical nor conical but a portion of a hyperboloid of revolution. Such gears are called hypoid for short. Hypoid gears are most commonly found with shafts at 90 degrees.
Contact between hypoid gear teeth may be even smoother and more gradual than with spiral bevel gear teeth, but they also have a sliding action along the meshing teeth as they rotate, and therefore usually require some of the most viscous types of gear oil to avoid it being extruded from the mating tooth faces; the oil is normally designated HP (for hypoid) followed by a number denoting the viscosity. Also, the pinion can be designed with fewer teeth than a spiral bevel pinion, with the result that gear ratios of 60:1 and higher are feasible using a single set of hypoid gears. This style of gear is most common in motor vehicle drive trains, in concert with a differential. Whereas a regular (nonhypoid) ring-and-pinion gear set is suitable for many applications, it is not ideal for vehicle drive trains because it generates more noise and vibration than a hypoid does. Bringing hypoid gears to market for mass-production applications was an engineering improvement of the 1920s.
Tooth orientation
Internal and external
A gear is said to be external if its teeth are directed generally away from the rotation axis, and internal otherwise. In a pair of matching wheels, only one of them (the larger one) may be internal.
Crown
A crown gear or contrate gear is one whose teeth project at right angles to the plane of the wheel. A crown gear is also sometimes meshed with an escapement, such as those found in mechanical clocks.
Tooth cut direction
Gear teeth typically extend across the whole thickness of the gear. Another criterion for classifying gears is the general direction of the teeth across that dimension. This attribute is affected by the relative position and direction of the axes of rotation of the gears that are to be meshed together.
Straight
In a cylindrical spur gear or straight-cut gear, the tooth faces are straight along the direction parallel to the axis of rotation. Any imaginary cylinder with the same axis will cut the teeth along parallel straight lines.
The teeth can be either internal or external. Two spur gears mesh together correctly only if fitted to parallel shafts. No axial thrust is created by the tooth loads. Spur gears are excellent at moderate speeds but tend to be noisy at high speeds.
For arrangements with crossed non-parallel axes, the faces in a straight-cut gear are parts of a general conical surface whose generating lines (generatrices) go through the meeting point of the two axes, resulting in a bevel gear. Such gears are generally used only at speeds below , or, for small gears, 1000 rpm.
Helical
In a helical or dry fixed gear, the tooth walls are not parallel to the axis of rotation, but are set at an angle. An imaginary pitch surface (cylinder, cone, or hyperboloid, depending on the relative axis positions) intersects each tooth face along an arc of a helix. Helical gears can be meshed in parallel or crossed orientations. The former refers to when the shafts are parallel to each other; this is the most common orientation. In the latter, the shafts are non-parallel, and in this configuration the gears are sometimes known as "skew gears".
The angled teeth engage more gradually than do spur gear teeth, causing them to run more smoothly and quietly. With parallel helical gears, each pair of teeth first make contact at a single point at one side of the gear wheel; a moving curve of contact then grows gradually across the tooth face to a maximum, then recedes until the teeth break contact at a single point on the opposite side. In spur gears, teeth suddenly meet at a line contact across their entire width, causing stress and noise. Spur gears make a characteristic whine at high speeds. For this reason spur gears are used in low-speed applications and in situations where noise control is not a problem, and helical gears are used in high-speed applications, large power transmission, or where noise abatement is important. The speed is considered high when the pitch line velocity exceeds 25 m/s.
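By way of a rough check against that threshold (a Python sketch; the example figures are illustrative):

import math

def pitch_line_velocity(pitch_diameter_m, rpm):
    # Pitch line velocity in m/s: v = pi * d * n / 60.
    return math.pi * pitch_diameter_m * rpm / 60

# A 0.2 m gear at 3000 rpm gives about 31.4 m/s, above the 25 m/s rule of
# thumb, so this would count as a high-speed application favoring helical teeth.
print(round(pitch_line_velocity(0.2, 3000), 1))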
A disadvantage of helical gears is a resultant thrust along the axis of the gear, which must be accommodated by appropriate thrust bearings. However, this issue can be circumvented by using a herringbone gear or double helical gear, which has no net axial thrust and also provides self-aligning of the gears.
A second disadvantage of helical gears is a greater degree of sliding friction between the meshing teeth, often addressed with additives in the lubricant.
For a "crossed" or "skew" configuration, the gears must have the same pressure angle and normal pitch; however, the helix angle and handedness can be different. The relationship between the two shafts is actually defined by the helix angle(s) of the two shafts and the handedness, as defined:
for gears of the same handedness,
for gears of opposite handedness,
where is the helix angle for the gear. The crossed configuration is less mechanically sound because there is only a point contact between the gears, whereas in the parallel configuration there is a line contact.
Quite commonly, helical gears are used with the helix angle of one having the negative of the helix angle of the other; such a pair might also be referred to as having a right-handed helix and a left-handed helix of equal angles. The two equal but opposite angles add to zero: the angle between shafts is zero—that is, the shafts are parallel. Where the sum or the difference (as described in the equations above) is not zero, the shafts are crossed. For shafts crossed at right angles, the helix angles are of the same hand because they must add to 90 degrees. (This is the case with the gears in the illustration above: they mesh correctly in the crossed configuration: for the parallel configuration, one of the helix angles should be reversed. The gears illustrated cannot mesh with the shafts parallel.)
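A small sketch of the shaft-angle relationship above (Python; the angles are assumed to be given in degrees):

def shaft_angle(beta1_deg, beta2_deg, same_handedness=True):
    # E = beta1 + beta2 for gears of the same handedness,
    # E = beta1 - beta2 for gears of opposite handedness;
    # E = 0 corresponds to the parallel configuration.
    return beta1_deg + beta2_deg if same_handedness else beta1_deg - beta2_deg

print(shaft_angle(45, 45))                         # 90: shafts crossed at right angles
print(shaft_angle(30, 30, same_handedness=False))  # 0: equal and opposite, parallel shafts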
3D animation of helical gears (parallel axis)
3D animation of helical gears (crossed axis)
Double helical
Double helical gears overcome the problem of axial thrust presented by single helical gears by using a double set of teeth, slanted in opposite directions. A double helical gear can be thought of as two mirrored helical gears mounted closely together on a common axle. This arrangement cancels out the net axial thrust, since each half of the gear thrusts in the opposite direction, resulting in a net axial force of zero. This arrangement can also remove the need for thrust bearings. However, double helical gears are more difficult to manufacture due to their more complicated shape.
Herringbone gears are a special type of helical gears. They do not have a groove in the middle like some other double helical gears do; the two mirrored helical gears are joined so that their teeth form a V shape. This can also be applied to bevel gears, as in the final drive of the Citroën Type A. Another type of double helical gear is a Wüst gear.
For both possible rotational directions, there exist two possible arrangements for the oppositely-oriented helical gears or gear faces. One arrangement is called stable, and the other unstable. In a stable arrangement, the helical gear faces are oriented so that each axial force is directed toward the center of the gear. In an unstable arrangement, both axial forces are directed away from the center of the gear. In either arrangement, the total (or net) axial force on each gear is zero when the gears are aligned correctly. If the gears become misaligned in the axial direction, the unstable arrangement generates a net force that may lead to disassembly of the gear train, while the stable arrangement generates a net corrective force. If the direction of rotation is reversed, the direction of the axial thrusts is also reversed, so a stable configuration becomes unstable, and vice versa.
Stable double helical gears can be directly interchanged with spur gears without any need for different bearings.
Worm
Worms resemble screws. A worm is meshed with a worm wheel, which looks similar to a spur gear.
Worm-and-gear sets are a simple and compact way to achieve a high torque, low speed gear ratio. For example, helical gears are normally limited to gear ratios of less than 10:1 while worm-and-gear sets vary from 10:1 to 500:1. A disadvantage is the potential for considerable sliding action, leading to low efficiency.
A worm gear is a species of helical gear, but its helix angle is usually somewhat large (close to 90 degrees) and its body is usually fairly long in the axial direction. These attributes give it screw-like qualities. The distinction between a worm and a helical gear is that at least one tooth persists for a full rotation around the helix. If this occurs, it is a 'worm'; if not, it is a 'helical gear'. A worm may have as few as one tooth. If that tooth persists for several turns around the helix, the worm appears, superficially, to have more than one tooth, but what one in fact sees is the same tooth reappearing at intervals along the length of the worm. The usual screw nomenclature applies: a one-toothed worm is called single thread or single start; a worm with more than one tooth is called multiple thread or multiple start. The helix angle of a worm is not usually specified. Instead, the lead angle, which is equal to 90 degrees minus the helix angle, is given.
In a worm-and-gear set, the worm can always drive the gear. However, if the gear attempts to drive the worm, it may or may not succeed. Particularly if the lead angle is small, the gear's teeth may simply lock against the worm's teeth, because the force component circumferential to the worm is not sufficient to overcome friction. In traditional music boxes, however, the gear drives the worm, which has a large helix angle. This mesh drives the speed-limiter vanes which are mounted on the worm shaft.
Worm-and-gear sets that do lock are called self-locking, which can be used to advantage, as when it is desired to set the position of a mechanism by turning the worm and then have the mechanism hold that position. An example is the machine head found on some types of stringed instruments.
If the gear in a worm-and-gear set is an ordinary helical gear only a single point of contact is achieved. If medium to high power transmission is desired, the tooth shape of the gear is modified to achieve more intimate contact by making both gears partially envelop each other. This is done by making both concave and joining them at a saddle point; this is called a cone-drive or "Double enveloping".
Worm gears can be right or left-handed, following the long-established practice for screw threads.
Tooth profile
Another criterion to classify gears is the tooth profile, the shape of the cross-section of a tooth face by an imaginary cut perpendicular to the pitch surface, such as the transverse, normal, or axial plane.
The tooth profile is crucial for the smoothness and uniformity of the movement of matching gears, as well as for the friction and wear.
Artisanal
The teeth of antique or artisanal gears that were cut by hand from sheet material, like those in the Antikythera mechanism, generally had simple profiles, such as triangles.
The teeth of larger gears — such as used in windmills — were usually pegs with simple shapes like cylinders, parallelepipeds, or triangular prisms inserted into a smooth wooden or metal wheel; or were holes with equally simple shapes cut into such a wheel.
Because of their sub-optimal profile, the effective gear ratio of such artisanal matching gears was not constant, but fluctuated over each tooth cycle, resulting in vibrations, noise, and accelerated wear.
Cage
A cage gear, also called a lantern gear or lantern pinion, is one of these artisanal designs: it has cylindrical rods for teeth, parallel to the axle and arranged in a circle around it, much as the bars on a round bird cage or lantern. The assembly is held together by disks at each end, into which the tooth rods and axle are set. Cage gears are more efficient than solid pinions, and dirt can fall through the rods rather than becoming trapped and increasing wear. They can be constructed with very simple tools as the teeth are not formed by cutting or milling, but rather by drilling holes and inserting rods.
Sometimes used in clocks, a cage gear should always be driven by a gearwheel, not used as the driver. The cage gear was not initially favoured by conservative clock makers. It became popular in turret clocks where dirty working conditions were most commonplace. Domestic American clock movements often used them.
Mathematical
In most modern gears, the tooth profile is usually not straight or circular, but of special form designed to achieve a constant angular velocity ratio.
There is an infinite variety of tooth profiles that will achieve this goal. In fact, given a fairly arbitrary tooth shape, it is possible to develop a tooth profile for the mating gear that will do it.
Parallel and crossed axes
However, two constant velocity tooth profiles are the most commonly used in modern times for gears with parallel or crossed axes, based on the cycloid and involute curves.
Cycloidal gears were more common until the late 1800s. Since then, the involute has largely superseded it, particularly in drive train applications. The cycloid is in some ways the more interesting and flexible shape; however the involute has two advantages: it is easier to manufacture, and it permits the center-to-center spacing of the gears to vary over some range without ruining the constancy of the velocity ratio. Cycloidal gears only work properly if the center spacing is exactly right. Cycloidal gears are still commonly used in mechanical clocks.
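For reference, the involute flank mentioned above is traced by the involute of the gear's base circle; a standard parametric form (a textbook result, not specific to this article), with r_b the base-circle radius and t the roll angle in radians, is:

\begin{aligned}
x(t) &= r_b(\cos t + t\sin t) \\
y(t) &= r_b(\sin t - t\cos t)
\end{aligned}

Because the profile depends only on the base circle, two involute gears maintain a constant velocity ratio even if their center distance deviates slightly from the nominal value.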
Skew axes
For non-parallel axes with non-straight tooth cuts, the best tooth profile is one of several spiral bevel gear shapes. These include Gleason types (circular arc with non-constant tooth depth), Oerlikon and Curvex types (circular arc with constant tooth depth), Klingelnberg Cyclo-Palloid (Epicycloid with constant tooth depth) or Klingelnberg Palloid.
The tooth faces in these gear types are not involute cylinders or cones but patches of octoidal surfaces. Manufacturing such tooth faces may require a 5-axis milling machine.
Spiral bevel gears have the same advantages and disadvantages relative to their straight-cut cousins as helical gears do to spur gears, such as lower noise and vibration. Bevel gears calculated in a simplified way, on the basis of an equivalent cylindrical gear in normal section with an involute tooth form, show a deviant tooth form with tooth strength reduced by 10-28% without offset and by 45% with offset.
Special gear trains
Rack and pinion
A rack is a toothed bar or rod that can be thought of as a sector gear with an infinitely large radius of curvature. Torque can be converted to linear force by meshing a rack with a round gear called a pinion: the pinion turns, while the rack moves in a straight line. Such a mechanism is used in the steering of automobiles to convert the rotation of the steering wheel into the left-to-right motion of the tie rod(s) that are attached to the front wheels.
Racks also feature in the theory of gear geometry, where, for instance, the tooth shape of an interchangeable set of gears may be specified for the rack (infinite radius), and the tooth shapes for gears of particular actual radii are then derived from that. The rack and pinion gear type is also used in a rack railway.
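A back-of-the-envelope sketch of the conversion (Python; the pinion dimensions are hypothetical): one turn of the pinion advances the rack by the pitch circumference π d.

import math

def rack_travel(pinion_teeth, module_mm, pinion_turns):
    # Pitch diameter d = module * teeth; one turn advances the rack by pi * d.
    pitch_diameter = module_mm * pinion_teeth
    return math.pi * pitch_diameter * pinion_turns  # in mm

# A 20-tooth, module-2 steering pinion: one full turn moves the rack about 125.7 mm.
print(round(rack_travel(20, 2.0, 1.0), 1))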
Epicyclic gear train
In epicyclic gearing, one or more of the gear axes moves. Examples are sun and planet gearing (see below), cycloidal drive, automatic transmissions, and mechanical differentials.
Sun and planet
Sun and planet gearing is a method of converting reciprocating motion into rotary motion that was used in steam engines. James Watt used it on his early steam engines to get around the patent on the crank, but it also provided the advantage of increasing the flywheel speed so Watt could use a lighter flywheel.
In the illustration, the sun is yellow, the planet red, the reciprocating arm is blue, the flywheel is green and the driveshaft is gray.
Non-circular gears
Non-circular gears are designed for special purposes. While a regular gear is optimized to transmit torque to another engaged member with minimum noise and wear and maximum efficiency, a non-circular gear's main objective might be ratio variations, axle displacement oscillations and more. Common applications include textile machines, potentiometers and continuously variable transmissions.
Non-rigid gears
Most gears are ideally rigid bodies which transmit torque and movement through the lever principle and contact forces between the teeth. Namely, the torque applied to one gear causes it to rotate as a rigid body, so that its teeth push against those of the matched gear, which in turn rotates as a rigid body transmitting the torque to its axle. Some specialized gears escape this pattern, however.
Harmonic gear
A harmonic gear or strain wave gear is a specialized gearing mechanism often used in industrial motion control, robotics and aerospace for its advantages over traditional gearing systems, including lack of backlash, compactness and high gear ratios.
It is conventionally built as a "timing gear", with far more teeth than a traditional gear, to ensure a higher degree of precision.
Magnetic gear
In a magnetic gear pair there is no contact between the two members; the torque is instead transmitted through magnetic fields. The cogs of each gear are permanent magnets with periodic alternation of opposite magnetic poles on mating surfaces. Gear components are mounted with a backlash capability similar to other mechanical gearings. Although they cannot exert as much force as a traditional gear due to limits on magnetic field strength, such gears work without touching and so are immune to wear, have very low noise, minimal power losses from friction, and can slip without damage, making them very reliable. They can be used in configurations that are not possible for gears that must be physically touching and can operate with a non-metallic barrier completely separating the driving force from the load. The magnetic coupling can transmit force into a hermetically sealed enclosure without using a radial shaft seal, which may leak. Magnetic gears are also used in brushless motors along with electromagnets to make the motor spin.
Nomenclature
General
Rotational frequency, n Measured in rotations over time, such as revolutions per minute (RPM or rpm).
Angular frequency, ω Measured in radians per second. 1 RPM = 2π rad/minute = π/30 rad/second.
Number of teeth, N How many teeth a gear has, an integer. In the case of worms, it is the number of thread starts that the worm has.
Gear, wheel The larger of two interacting gears or a gear on its own.
Pinion The smaller of two interacting gears.
Path of contact Path followed by the point of contact between two meshing gear teeth.
Line of action, pressure line Line along which the force between two meshing gear teeth is directed. It has the same direction as the force vector. In general, the line of action changes from moment to moment during the period of engagement of a pair of teeth. For involute gears, however, the tooth-to-tooth force is always directed along the same line—that is, the line of action is constant. This implies that for involute gears the path of contact is also a straight line, coincident with the line of action—as is indeed the case.
Axis Axis of revolution of the gear; center line of the shaft.
Pitch point Point where the line of action crosses a line joining the two gear axes.
Pitch circle, pitch line Circle centered on and perpendicular to the axis, and passing through the pitch point. A predefined diametral position on the gear where the circular tooth thickness, pressure angle and helix angles are defined.
Pitch diameter, d A predefined diametral position on the gear where the circular tooth thickness, pressure angle and helix angles are defined. The standard pitch diameter is a design dimension and cannot be measured, but is a location where other measurements are made. Its value is based on the number of teeth (N), the normal module (mn; or normal diametral pitch, Pd), and the helix angle (ψ):
d = N mn / cos ψ in metric units, or d = N / (Pd cos ψ) in imperial units.
Module or modulus, m Since it is impractical to calculate circular pitch with irrational numbers, mechanical engineers usually use a scaling factor that replaces it with a regular value instead. This is known as the module or modulus of the wheel and is simply defined as:
m = p / π, where m is the module and p the circular pitch. The units of module are customarily millimeters; an English Module is sometimes used with the units of inches. When the diametral pitch, DP, is in English units,
m = 25.4 / DP in conventional metric units.
The distance between the two axes becomes:
a = m (z1 + z2) / 2
where a is the axis distance and z1 and z2 are the numbers of cogs (teeth) of the two wheels (gears). These numbers (or at least one of them) are often chosen among primes to create an even contact between every cog of both wheels, and thereby avoid unnecessary wear and damage. Even, uniform gear wear is achieved by ensuring the tooth counts of the two gears meshing together are relatively prime to each other; this occurs when the greatest common divisor (GCD) of the two tooth counts equals 1, e.g. GCD(16,25) = 1. If a 1:1 gear ratio is desired, a gear relatively prime to the two wheels may be inserted in between them; this maintains the 1:1 ratio but reverses the gear direction. A second relatively prime gear could also be inserted to restore the original rotational direction while maintaining uniform wear with all four gears in this case. Mechanical engineers, at least in continental Europe, usually use the module instead of circular pitch. The module, just like the circular pitch, can be used for all types of cogs, not just involute-based straight cogs. The center-distance formula and the relatively-prime rule are sketched below.
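A minimal sketch (Python; the 16- and 25-tooth example is the one above):

import math

def center_distance(module_mm, z1, z2):
    # Axis distance of two meshing gears: a = m * (z1 + z2) / 2.
    return module_mm * (z1 + z2) / 2

def wear_is_uniform(z1, z2):
    # Relatively prime tooth counts spread wear evenly over every tooth pairing.
    return math.gcd(z1, z2) == 1

print(center_distance(2.0, 16, 25))  # 41.0 mm for module 2
print(wear_is_uniform(16, 25))       # True: GCD(16, 25) = 1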
Operating pitch diameters Diameters determined from the number of teeth and the center distance at which gears operate. Example for pinion: dw1 = 2a / (u + 1), where u = z2 / z1 is the tooth ratio.
Pitch surface In cylindrical gears, cylinder formed by projecting a pitch circle in the axial direction. More generally, the surface formed by the sum of all the pitch circles as one moves along the axis. For bevel gears it is a cone.
Angle of action Angle with vertex at the gear center, one leg on the point where mating teeth first make contact, the other leg on the point where they disengage.
Arc of action Segment of a pitch circle subtended by the angle of action.
Pressure angle, θ The complement of the angle between the direction that the teeth exert force on each other, and the line joining the centers of the two gears. For involute gears, the teeth always exert force along the line of action, which, for involute gears, is a straight line; and thus, for involute gears, the pressure angle is constant.
Outside diameter, Do Diameter of the gear, measured from the tops of the teeth.
Root diameter Diameter of the gear, measured at the base of the tooth.
Addendum, a Radial distance from the pitch surface to the outermost point of the tooth.
Dedendum, b Radial distance from the depth of the tooth trough to the pitch surface.
Whole depth, ht The distance from the top of the tooth to the root; it is equal to addendum plus dedendum or to working depth plus clearance.
Clearance Distance between the root circle of a gear and the addendum circle of its mate.
Working depth Depth of engagement of two gears, that is, the sum of their operating addendums.
Circular pitch, p Distance from one face of a tooth to the corresponding face of an adjacent tooth on the same gear, measured along the pitch circle.
Diametral pitch, DP
Ratio of the number of teeth to the pitch diameter. Could be measured in teeth per inch or teeth per centimeter, but conventionally has units of per inch of diameter. DP = N / d; where the module, m, is in metric units,
DP = 25.4 / m in imperial units.
Base circle In involute gears, the tooth profile is generated by the involute of the base circle. The radius of the base circle is somewhat smaller than that of the pitch circle.
Base pitch, normal pitch, pb In involute gears, distance from one face of a tooth to the corresponding face of an adjacent tooth on the same gear, measured along the base circle.
Interference Contact between teeth other than at the intended parts of their surfaces.
Interchangeable set A set of gears, any of which mates properly with any other.
Helical gear
Helix angle, ψ Angle between a tangent to the helix and the gear axis. It is zero in the limiting case of a spur gear, though it can also be regarded as the hypotenuse angle.
Normal circular pitch, pn Circular pitch in the plane normal to the teeth.
Transverse circular pitch, p Circular pitch in the plane of rotation of the gear. Sometimes just called "circular pitch".
Several other helix parameters can be viewed either in the normal or transverse planes. The subscript n usually indicates the normal.
Worm gear
Lead Distance from any point on a thread to the corresponding point on the next turn of the same thread, measured parallel to the axis.
Linear pitch, p Distance from any point on a thread to the corresponding point on the adjacent thread, measured parallel to the axis. For a single-thread worm, lead and linear pitch are the same.
Lead angle, λ Angle between a tangent to the helix and a plane perpendicular to the axis. Note that the complement of the helix angle is usually given for helical gears.
Pitch diameter, dw Same as described earlier in this list. Note that for a worm it is still measured in a plane perpendicular to the gear axis, not a tilted plane.
Subscript w denotes the worm, subscript g denotes the gear.
Tooth contact
Point of contact Any point at which two tooth profiles touch each other.
Line of contact A line or curve along which two tooth surfaces are tangent to each other.
Path of action The locus of successive contact points between a pair of gear teeth, during the phase of engagement. For conjugate gear teeth, the path of action passes through the pitch point. It is the trace of the surface of action in the plane of rotation.
Line of action The path of action for involute gears. It is the straight line passing through the pitch point and tangent to both base circles.
Surface of action The imaginary surface in which contact occurs between two engaging tooth surfaces. It is the summation of the paths of action in all sections of the engaging teeth.
Plane of action The surface of action for involute, parallel axis gears with either spur or helical teeth. It is tangent to the base cylinders.
Zone of action (contact zone) For involute, parallel-axis gears with either spur or helical teeth, the rectangular area in the plane of action bounded by the length of action and the effective face width.
Path of contact The curve on either tooth surface along which theoretical single point contact occurs during the engagement of gears with crowned tooth surfaces or gears that normally engage with only single point contact.
Length of action The distance on the line of action through which the point of contact moves during the action of the tooth profile.
Arc of action, Qt The arc of the pitch circle through which a tooth profile moves from the beginning to the end of contact with a mating profile.
Arc of approach, Qa The arc of the pitch circle through which a tooth profile moves from its beginning of contact until the point of contact arrives at the pitch point.
Arc of recess, Qr The arc of the pitch circle through which a tooth profile moves from contact at the pitch point until contact ends.
Contact ratio, mc or ε The number of angular pitches through which a tooth surface rotates from the beginning to the end of contact. In a simple way, it can be defined as a measure of the average number of teeth in contact during the period in which a tooth comes into and goes out of contact with the mating gear.
Transverse contact ratio, mp or εα The contact ratio in a transverse plane. It is the ratio of the angle of action to the angular pitch. For involute gears it is most directly obtained as the ratio of the length of action to the base pitch.
Face contact ratio, mF or εβ The contact ratio in an axial plane, or the ratio of the face width to the axial pitch. For bevel and hypoid gears it is the ratio of face advance to circular pitch.
Total contact ratio, mt or εγ The sum of the transverse contact ratio and the face contact ratio.
Modified contact ratio, mo For bevel gears, the square root of the sum of the squares of the transverse and face contact ratios.
Limit diameter Diameter on a gear at which the line of action intersects the maximum (or minimum for internal pinion) addendum circle of the mating gear. This is also referred to as the start of active profile, the start of contact, the end of contact, or the end of active profile.
Start of active profile (SAP) Intersection of the limit diameter and the involute profile.
Face advance Distance on a pitch circle through which a helical or spiral tooth moves from the position at which contact begins at one end of the tooth trace on the pitch surface to the position where contact ceases at the other end.
Tooth thickness
Circular thickness Length of arc between the two sides of a gear tooth, on the specified datum circle.
Transverse circular thickness Circular thickness in the transverse plane.
Normal circular thickness Circular thickness in the normal plane. In a helical gear it may be considered as the length of arc along a normal helix.
Axial thickness In helical gears and worms, tooth thickness in an axial cross section at the standard pitch diameter.
Base circular thickness In involute teeth, length of arc on the base circle between the two involute curves forming the profile of a tooth.
Normal chordal thickness Length of the chord that subtends a circular thickness arc in the plane normal to the pitch helix. Any convenient measuring diameter may be selected, not necessarily the standard pitch diameter.
Chordal addendum (chordal height) Height from the top of the tooth to the chord subtending the circular thickness arc. Any convenient measuring diameter may be selected, not necessarily the standard pitch diameter.
Profile shift Displacement of the basic rack datum line from the reference cylinder, made non-dimensional by dividing by the normal module. It is used to specify the tooth thickness, often for zero backlash.
Rack shift Displacement of the tool datum line from the reference cylinder, made non-dimensional by dividing by the normal module. It is used to specify the tooth thickness.
Measurement over pins Measurement of the distance taken over a pin positioned in a tooth space and a reference surface. The reference surface may be the reference axis of the gear, a datum surface or either one or two pins positioned in the tooth space or spaces opposite the first. This measurement is used to determine tooth thickness.
Span measurement Measurement of the distance across several teeth in a normal plane. As long as the measuring device has parallel measuring surfaces that contact on an unmodified portion of the involute, the measurement will be along a line tangent to the base cylinder. It is used to determine tooth thickness.
Modified addendum teeth Teeth of engaging gears, one or both of which have non-standard addendum.
Full-depth teeth Teeth in which the working depth equals 2.000 divided by the normal diametral pitch.
Stub teeth Teeth in which the working depth is less than 2.000 divided by the normal diametral pitch.
Equal addendum teeth Teeth in which two engaging gears have equal addendums.
Long and short-addendum teeth Teeth in which the addendums of two engaging gears are unequal.
Undercut An undercut is a condition in generated gear teeth when any part of the fillet curve lies inside of a line drawn tangent to the working profile at its point of juncture with the fillet. Undercut may be deliberately introduced to facilitate finishing operations. With undercut the fillet curve intersects the working profile. Without undercut the fillet curve and the working profile have a common tangent.
Root fillet, fillet curve The concave portion of the tooth profile where it joins the bottom of the tooth space.
Pitch
Pitch is the distance between a point on one tooth and the corresponding point on an adjacent tooth. It is a dimension measured along a line or curve in the transverse, normal, or axial directions. The use of the single word pitch without qualification may be ambiguous, and for this reason it is preferable to use specific designations such as transverse circular pitch, normal base pitch, axial pitch.
Circular pitch, p Arc distance along a specified pitch circle or pitch line between corresponding profiles of adjacent teeth.
Transverse circular pitch, pt Circular pitch in the transverse plane.
Normal circular pitch, pn, pe Circular pitch in the normal plane, and also the length of the arc along the normal pitch helix between helical teeth or threads.
Axial pitch, px Linear pitch in an axial plane and in a pitch surface. In helical gears and worms, axial pitch has the same value at all diameters. In gearing of other types, axial pitch may be confined to the pitch surface and may be a circular measurement. The term axial pitch is preferred to the term linear pitch. The axial pitch of a helical worm and the circular pitch of its worm gear are the same.
Normal base pitch, pN, pbn In an involute helical gear, the base pitch in the normal plane. It is the normal distance between parallel helical involute surfaces on the plane of action in the normal plane, or is the length of arc on the normal base helix. It is a constant distance in any helical involute gear.
Transverse base pitch, pb, pbt In an involute gear, the pitch on the base circle or along the line of action. Corresponding sides of involute gear teeth are parallel curves, and the base pitch is the constant and fundamental distance between them along a common normal in a transverse plane.
Diametral pitch (transverse), Pd Ratio of the number of teeth to the standard pitch diameter in inches.
Normal diametral pitch, Pnd Value of diametral pitch in a normal plane of a helical gear or worm.
Angular pitch, θN, τ Angle subtended by the circular pitch, usually expressed in radians.
θN = 360 / N degrees, or τ = 2π / N radians.
Backlash
Backlash is the error in motion that occurs when gears change direction. It exists because there is always some gap between the trailing face of the driving tooth and the leading face of the tooth behind it on the driven gear, and that gap must be closed before force can be transferred in the new direction. The term "backlash" can also be used to refer to the size of the gap, not just the phenomenon it causes; thus, one could speak of a pair of gears as having, for example, "0.1 mm of backlash." A pair of gears could be designed to have zero backlash, but this would presuppose perfection in manufacturing, uniform thermal expansion characteristics throughout the system, and no lubricant. Therefore, gear pairs are designed to have some backlash. It is usually provided by reducing the tooth thickness of each gear by half the desired gap distance. In the case of a large gear and a small pinion, however, the backlash is usually taken entirely off the gear and the pinion is given full sized teeth. Backlash can also be provided by moving the gears further apart. The backlash of a gear train equals the sum of the backlash of each pair of gears, so in long trains backlash can become a problem.
For situations that require precision, such as instrumentation and control, backlash can be minimized through one of several techniques. For instance, the gear can be split along a plane perpendicular to the axis, one half fixed to the shaft in the usual manner, the other half placed alongside it, free to rotate about the shaft, but with springs between the two halves providing relative torque between them, so that one achieves, in effect, a single gear with expanding teeth. Another method involves tapering the teeth in the axial direction and letting the gear slide in the axial direction to take up slack.
Standard pitches and the module system
Although gears can be made with any pitch, for convenience and interchangeability standard pitches are frequently used. Pitch is a property associated with linear dimensions and so differs depending on whether the standard values are in the imperial (inch) or metric system. Using inch measurements, standard diametral pitch values with units of "per inch" are chosen; the diametral pitch is the number of teeth on a gear of one inch pitch diameter. Common standard values for spur gears are 3, 4, 5, 6, 8, 10, 12, 16, 20, 24, 32, 48, 64, 72, 80, 96, 100, 120, and 200. Certain standard pitches such as and in inch measurements, which mesh with linear rack, are actually (linear) circular pitch values with units of "inches".
When gear dimensions are in the metric system the pitch specification is generally in terms of module or modulus, which is effectively a length measurement across the pitch diameter. The term module is understood to mean the pitch diameter in millimetres divided by the number of teeth. When the module is based upon inch measurements, it is known as the English module to avoid confusion with the metric module. Module is a direct dimension ("millimeters per tooth"), unlike diametral pitch, which is an inverse dimension ("teeth per inch"). Thus, if the pitch diameter of a gear is 40 mm and the number of teeth 20, the module is 2, which means that there are 2 mm of pitch diameter for each tooth. The preferred standard module values are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 1.25, 1.5, 2.0, 2.5, 3, 4, 5, 6, 8, 10, 12, 16, 20, 25, 32, 40 and 50.
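As a worked example of the module arithmetic above (the conversion to diametral pitch assumes 25.4 mm to the inch; the helper names are illustrative):

```python
def module_mm(pitch_diameter_mm: float, teeth: int) -> float:
    """Module: millimetres of pitch diameter per tooth."""
    return pitch_diameter_mm / teeth

def module_to_diametral_pitch(m: float) -> float:
    """Equivalent diametral pitch (teeth per inch) for a metric module."""
    return 25.4 / m

m = module_mm(40.0, 20)               # 2.0, as in the example above
print(module_to_diametral_pitch(m))   # 12.7 teeth per inch
```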
Gear model in modern physics
Modern physics adopted the gear model in different ways. In the nineteenth century, James Clerk Maxwell developed a model of electromagnetism in which magnetic field lines were rotating tubes of incompressible fluid. Maxwell used a gear wheel and called it an "idle wheel" to explain the electric current as a rotation of particles in opposite directions to that of the rotating field lines.
More recently, quantum physics has used "quantum gears" as a model. A group of gears can serve as a model for several different systems, such as an artificially constructed nanomechanical device or a group of ring molecules.
The three wave hypothesis compares the wave–particle duality to a bevel gear.
Gear mechanism in natural world
The gear mechanism was previously considered exclusively artificial, but as early as 1957, gears had been recognized in the hind legs of various species of planthoppers, and scientists from the University of Cambridge characterized their functional significance in 2013 by making high-speed recordings of the nymphs of Issus coleoptratus. These gears are found only in the nymph forms of all planthoppers, and are lost during the final molt to the adult stage. In I. coleoptratus, each leg has a 400-micrometer strip of teeth, pitch radius 200 micrometers, with 10 to 12 fully interlocking spur-type gear teeth, including filleted curves at the base of each tooth to reduce the risk of shearing. The joint rotates like mechanical gears, and synchronizes Issus's hind legs when it jumps to within 30 microseconds, preventing yaw rotation. The gears are not connected all the time. One is located on each of the juvenile insect's hind legs, and when it prepares to jump, the two sets of teeth lock together. As a result, the legs move in almost perfect unison, giving the insect more power as the gears rotate to their stopping point and then unlock.
| Technology | Components_2 | null |
82926 | https://en.wikipedia.org/wiki/Floor | Floor | A floor is the bottom surface of a room or vehicle. Floors vary from simple dirt in a cave to many layered surfaces made with modern technology. Floors may be stone, wood, bamboo, metal or any other material that can support the expected load.
The levels of a building are often referred to as floors, although sometimes referred to as storeys.
Floors typically consist of a subfloor for support and a floor covering used to give a good walking surface. In modern buildings the subfloor often has electrical wiring, plumbing, and other services built in. As floors must meet many needs, some essential to safety, floors are built to strict building codes in some regions.
Special floor structures
Where a special floor structure like a floating floor is laid upon another floor, both may be called subfloors.
Special floor structures are used for a number of purposes:
Balcony, a platform projecting from a wall
Floating floor, normally for noise or vibration reduction
Glass floor, as in glass bottomed elevators
Nightingale floor makes a noise when an intruder walks on it
Raised floor, utilities underneath can be accessed easily
Sprung floor, improves the performance and safety of athletes and dancers
Raked floor, improves the view of performers on a stage for an audience
Floor covering
Floor covering is a term used generically to describe any material applied over a floor structure to provide a walking surface. Flooring is the general term for a permanent or temporary covering of a floor, or for the work of installing such a floor covering. The two terms are used interchangeably, but floor covering refers more to loose-laid materials.
Materials almost always classified as floor covering include carpet, area rugs, and resilient flooring such as linoleum or vinyl flooring. Materials commonly called flooring include wood flooring, laminated wood, ceramic tile, stone, terrazzo, and various seamless chemical floor coatings.
The choice of material for floor covering is affected by factors such as cost, endurance, noise insulation, comfort and cleaning effort, and sometimes concern about allergens. Some types of flooring must not be installed below grade (lower than ground level), and laminate or hardwood should be avoided where there may be moisture or condensation.
The subfloor may be finished in a way that makes it usable without any extra work. See:
Earthen floor, made of adobe or clay
Solid ground floor, made of cement screed or granolithic
A number of special features may be used to ornament a floor or perform a useful service. Examples include floor medallions, which provide a decorative centerpiece of a floor design, or gratings used to drain water or to rub dirt off shoes.
Subfloor construction
Floors may be built on beams or joists or use structures like prefabricated hollow core slabs. The subfloor builds on those and attaches by various means particular to the support structure, but the support and subfloor together always provide the strength of a floor one can sense underfoot. Nowadays, subfloors in the United States and Canada are generally made from at least two layers of moisture-resistant ("AC" grade, one side finished and sanded flat) plywood or composite sheeting, jointly termed underlayment, laid on floor joists of 2x8, 2x10, or 2x12 dimensional lumber spaced generally on centers. Some flooring components used solely on concrete slabs consist of a dimpled rubberized or plastic layer, much like bubble wrap, that provides tiny pillars for the sheet material above. These are manufactured in squares, and the edges fit together like a mortise and tenon joint. As with a floor on joists rather than concrete, a second sheeting underlayment layer is added with staggered joints to disperse forces that would otherwise open a joint under the stress of live loads such as a person walking.
Three layers are common only in the highest-quality construction. The two layers in high-quality construction will both be thick sheets (as will the third when present); in cheaper construction they may have a combined thickness of only half that, a composite panel overlaid by plywood subflooring. At the highest end, or in select rooms of the building, there might be three sheeting layers, and such stiff subflooring is necessary to prevent the cracking of large floor tiles of or more on a side. The structure under such a floor will frequently also have extra "bracing" and "blocking" joist-to-joist, intended to spread the weight so that any one joist sags as little as possible under a live load on the floor above.
In Europe and North America only a few rare floors have no separate floor covering on top, and those are normally because of a temporary condition pending sale or occupancy; in semi-custom new construction and some rental markets, such floors are provided so the new home buyer or renter can select their preferred floor covering, usually a wall-to-wall carpet or one-piece vinyl floor covering. Wood-clad (hardwood) and tile-covered finished floors generally require a stiffer, higher-quality subfloor, especially the latter. Since the wall base trim and the flooring interact to form a joint, such later-added semi-custom floors will generally not be hardwood, because that joint would be built in the wrong order unless the wall base trim were also delayed until the flooring is chosen.
The subfloor may also provide underfloor heating; if radiant floor heating is not used, it will certainly be punctured by openings for forced-air ducts for both heating and air conditioning, or by holes for the piping that conveys forced hot water or steam from the furnace to each room's heat exchangers (radiators).
Some subfloors are inset below the top surface level of the surrounding flooring's joists, and such subfloors and a normal-height joist are joined to make a plywood box both molding and containing at least of concrete (a "mud floor" in builders' parlance). Alternatively, only a slightly inset floor topped by a fibrous mesh and concrete building composite floor cladding is used for smaller high-quality tile floors; these "concrete" subfloors have a good thermal match with ceramic tiles and so are popular with builders constructing kitchens, laundries, and especially both common and high-end bathrooms, as well as any other room where large expanses of well-supported ceramic tile will be used as a finished floor. Floors using small ( and smaller) ceramic tiles generally use only an additional layer of plywood (if that) and substitute adhesive and substrate materials, making do with flexible joints and semi-flexible mounting compounds, and so are designed to withstand the greater flexing that large tiles cannot tolerate without breaking.
Ground floor construction
A ground-level floor can be an earthen floor made of soil, or be solid ground floors made of concrete slab.
Ground-level slab floors are uncommon in northern latitudes, where freezing poses significant structural problems, except in heated interior spaces such as basements or in unheated outdoor structures such as a gazebo or shed where freezing temperatures do not create pockets of troublesome meltwater. Ground-level slab floors are prepared for pouring by grading the site, which usually also involves removing topsoil and other organic materials well away from the slab site. Once the site has been cut down to a suitably firm inorganic base material, it is graded further so that it is flat and level, then topped by spreading a layer-cake of force-dispersing sand and gravel. Deeper channels may be dug, especially at the slab ends and across the slab width at regular intervals, in which a continuous run of rebar is bent and wired to sit at two heights, forming a sub-slab "concrete girder". Above the targeted bottom height (coplanar with the compacted sand and gravel topping) a separate grid of rebar or welded wire mesh is usually added to reinforce the concrete, and will be tied to the under-slab "girder" rebar at intervals. The under-slab cast girders are used especially if the slab is to be used structurally, i.e., to support part of the building.
Upper floor construction
Floors in wood-frame homes are usually constructed with joists centered no more than apart, according to most building codes. Heavy floors, such as those made of stone, require more closely spaced joists. If the span between load-bearing walls is too long for joists to safely support, then a heavy crossbeam (thick or laminated wood, or a metal I-beam or H-beam) may be used. A "subfloor" of plywood or waferboard is then laid over the joists.
Utilities
In modern buildings, there are numerous services provided via ducts or wires underneath the floor or above the ceiling. The floor of one level typically also holds the ceiling of the level below (if any).
Services provided by subfloors include:
Air conditioning
Telecommunications
Electrical wiring
Fire protection
Thermal insulation
Plumbing
Sewerage
Soundproofing
Underfloor heating
In floors supported by joists, utilities are run through the floor by drilling small holes through the joists to serve as conduits. Where the floor is over the basement or crawlspace, utilities may instead be run under the joists, making the installation less expensive. Also, ducts for air conditioning (central heating and cooling) are large and cannot cross through joists or beams; thus, ducts are typically at or near the plenum, or come directly from underneath (or from an attic).
Pipes for plumbing, sewerage, underfloor heating, and other utilities may be laid directly in slab floors, typically via cellular floor raceways. However, later maintenance of these systems can be expensive, requiring the opening of concrete or other fixed structures. Electrically heated floors are available, and both kinds of systems can also be used in wood floors.
Problems with floors
Wood floors, particularly older ones, will tend to 'squeak' in certain places. This is caused by the wood rubbing against other wood, usually at a joint of the subfloor. Firmly securing the pieces to each other with screws or nails may reduce this problem.
Floor vibration is a problem with floors. Wood floors tend to pass sound, particularly heavy footsteps and low bass frequencies. Floating floors can reduce this problem. Concrete floors are usually so massive they do not have this problem, but they are also much more expensive to construct and must meet more stringent building requirements due to their weight.
Floors with a chemical sealer, like stained concrete or epoxy finishes, usually have a slick finish presenting a potential slip-and-fall hazard; however, there are anti-skid additives and coatings which can help mitigate this and provide increased traction. Reliable, science-backed floor slip resistance testing can help floor owners and designers determine if their floor is too slippery, or allow them to choose an appropriate flooring for the intended purpose before installation.
The flooring may need protection sometimes. A gym floor cover can be used to reduce the need to satisfy incompatible requirements.
Floor cleaning
Floor cleaning is a major occupation throughout the world and has been since ancient times. Cleaning is essential for hygiene, to prevent injuries due to slips, and to remove dirt. Floors are also treated to protect or beautify the surface. The correct method to clean one type of floor can often damage another, so it is important to use the correct treatment.
| Technology | Architectural elements | null |
82933 | https://en.wikipedia.org/wiki/Chloroform | Chloroform | Chloroform, or trichloromethane (often abbreviated as TCM), is an organochloride with the formula CHCl3 and a common solvent. It is a volatile, colorless, sweet-smelling, dense liquid produced on a large scale as a precursor to refrigerants and PTFE. Chloroform was once used as an inhalational anesthetic between the 19th century and the first half of the 20th century. It is miscible with many solvents but only very slightly soluble in water (only 8 g/L at 20 °C).
Structure and name
The molecule adopts a tetrahedral molecular geometry with C3v symmetry. The chloroform molecule can be viewed as a methane molecule with three hydrogen atoms replaced with three chlorine atoms, leaving a single hydrogen atom.
The name "chloroform" is a portmanteau of terchloride (tertiary chloride, a trichloride) and formyle, an obsolete name for the methylylidene radical (CH) derived from formic acid.
Natural occurrence
Many kinds of seaweed produce chloroform, and fungi are believed to produce chloroform in soil. Abiotic processes are also believed to contribute to natural chloroform productions in soils, although the mechanism is still unclear.
Chloroform is a volatile organic compound.
History
Chloroform was synthesized independently by several investigators :
Moldenhawer, a German pharmacist from Frankfurt an der Oder, appears to have produced chloroform in 1830 by mixing chlorinated lime with ethanol; however, he mistook it for Chloräther (chloric ether, 1,2-dichloroethane).
Samuel Guthrie, a U.S. physician from Sackets Harbor, New York, also appears to have produced chloroform in 1831 by reacting chlorinated lime with ethanol, and noted its anaesthetic properties; however, he also believed that he had prepared chloric ether.
Justus von Liebig carried out the alkaline cleavage of chloral. Liebig incorrectly stated that the empirical formula of chloroform was and named it "Chlorkohlenstoff" ("carbon chloride").
Eugène Soubeiran obtained the compound by the action of chlorine bleach on both ethanol and acetone.
In 1834, French chemist Jean-Baptiste Dumas determined chloroform's empirical formula and named it: "Es scheint mir also erweisen, dass die von mir analysirte Substanz, … zur Formel hat: C2H2Cl6." (Thus it seems to me to show that the substance I analyzed … has as [its empirical] formula: C2H2Cl6.). [Note: The coefficients of his empirical formula should be halved.] ... "Diess hat mich veranlasst diese Substanz mit dem Namen 'Chloroform' zu belegen." (This had caused me to impose the name "chloroform" upon this substance [i.e., formyl chloride or chloride of formic acid].)
In 1835, Dumas prepared the substance by alkaline cleavage of trichloroacetic acid.
In 1842, Robert Mortimer Glover in London discovered the anaesthetic qualities of chloroform on laboratory animals.
In 1847, Scottish obstetrician James Y. Simpson was the first to demonstrate the anaesthetic properties of chloroform, provided by local pharmacist William Flockhart of Duncan, Flockhart and company, in humans, and helped to popularize the drug for use in medicine.
By the 1850s, chloroform was being produced on a commercial basis. In Britain, about 750,000 doses a week were being produced by 1895, using the Liebig procedure, which retained its importance until the 1960s. Today, chloroform – along with dichloromethane – is prepared exclusively and on a massive scale by the chlorination of methane and chloromethane.
Production
Industrially, chloroform is produced by heating a mixture of chlorine and either methyl chloride (CH3Cl) or methane (CH4). At 400–500 °C, free radical halogenation occurs, converting these precursors to progressively more chlorinated compounds:
CH4 + Cl2 → CH3Cl + HCl
CH3Cl + Cl2 → CH2Cl2 + HCl
CH2Cl2 + Cl2 → CHCl3 + HCl
Chloroform undergoes further chlorination to yield carbon tetrachloride (CCl4):
CHCl3 + Cl2 → CCl4 + HCl
The output of this process is a mixture of the four chloromethanes: chloromethane, methylene chloride (dichloromethane), trichloromethane (chloroform), and tetrachloromethane (carbon tetrachloride). These can then be separated by distillation.
Chloroform may also be produced on a small scale via the haloform reaction between acetone and sodium hypochlorite:
3 NaClO + (CH3)2CO → CHCl3 + 2 NaOH + CH3COONa
Deuterochloroform
Deuterated chloroform is an isotopologue of chloroform with a single deuterium atom. CDCl3 is a common solvent used in NMR spectroscopy. Deuterochloroform is produced by the reaction of hexachloroacetone with heavy water. The haloform process is now obsolete for the production of ordinary chloroform. Deuterochloroform can also be prepared by reacting sodium deuteroxide with chloral hydrate.
Inadvertent formation of chloroform
The haloform reaction can also occur inadvertently in domestic settings. Sodium hypochlorite solution (chlorine bleach) mixed with common household liquids such as acetone, methyl ethyl ketone, ethanol, or isopropyl alcohol can produce some chloroform, in addition to other compounds, such as chloroacetone or dichloroacetone.
Uses
In terms of scale, the most important reaction of chloroform is with hydrogen fluoride to give monochlorodifluoromethane (HCFC-22), a precursor in the production of polytetrafluoroethylene (Teflon) and other fluoropolymers:
CHCl3 + 2 HF → CHClF2 + 2 HCl
The reaction is conducted in the presence of a catalytic amount of mixed antimony halides. Chlorodifluoromethane is then converted to tetrafluoroethylene, the main precursor of Teflon.
Solvent
The hydrogen attached to carbon in chloroform participates in hydrogen bonding, making it a good solvent for many materials.
Worldwide, chloroform is also used in pesticide formulations, as a solvent for lipids, rubber, alkaloids, waxes, gutta-percha, and resins, as a cleaning agent, as a grain fumigant, in fire extinguishers, and in the rubber industry. Deuterated chloroform (CDCl3) is a common solvent used in NMR spectroscopy.
Refrigerant
Chloroform is used as a precursor to make R-22 (chlorodifluoromethane). This is done by reacting it with a solution of hydrofluoric acid (HF) which fluorinates the molecule and releases hydrochloric acid as a byproduct. Before the Montreal Protocol was enforced, most of the chloroform produced in the United States was used in the production of chlorodifluoromethane. However, its production remains high, as it is a key precursor of PTFE.
Although chloroform has properties such as a low boiling point, and a low global warming potential of only 31 (compared to the 1760 of R-22), which are appealing properties for a refrigerant, there is little information to suggest that it has seen widespread use as a refrigerant in any consumer products.
Lewis acid
In solvents such as CCl4 and alkanes, chloroform hydrogen bonds to a variety of Lewis bases. CHCl3 is classified as a hard acid, and the ECW model lists its acid parameters as EA = 1.56 and CA = 0.44.
Reagent
As a reagent, chloroform serves as a source of the dichlorocarbene intermediate . It reacts with aqueous sodium hydroxide, usually in the presence of a phase transfer catalyst, to produce dichlorocarbene, . This reagent effects ortho-formylation of activated aromatic rings, such as phenols, producing aryl aldehydes in a reaction known as the Reimer–Tiemann reaction. Alternatively, the carbene can be trapped by an alkene to form a cyclopropane derivative. In the Kharasch addition, chloroform forms the free radical which adds to alkenes.
Anaesthetic
Chloroform is a powerful general anesthetic, euphoriant, anxiolytic, and sedative when inhaled or ingested. The anaesthetic qualities of chloroform were first described in 1842 in a thesis by Robert Mortimer Glover, which won the Gold Medal of the Harveian Society for that year. Glover also undertook practical experiments on dogs to prove his theories, refined his theories, and presented them in his doctoral thesis at the University of Edinburgh in the summer of 1847, identifying anaesthetizing halogenous compounds as a "new order of poisonous substances".
The Scottish obstetrician James Young Simpson was one of those examiners required to read the thesis, but later claimed to have never read it and to have come to his own conclusions independently. Perkins-McVey, among others, have raised doubts about the credibility of Simpson's claim, noting that Simpson's publications on the subject in 1847 explicitly echo Glover's and, being one of the thesis examiners, Simpson was likely aware of the content of Glover's study, even if he skirted his duties as an examiner. In 1847 and 1848, Glover would pen a series of heated letters accusing Simpson of stealing his discovery, which had already earned Simpson considerable notoriety. Whatever the source of his inspiration, on 4 November 1847, Simpson argued that he had discovered the anaesthetic qualities of chloroform in humans. He and two colleagues entertained themselves by trying the effects of various substances, and thus revealed the potential for chloroform in medical procedures.
A few days later, during the course of a dental procedure in Edinburgh, Francis Brodie Imlach became the first person to use chloroform on a patient in a clinical context.
In May 1848, Robert Halliday Gunning made a presentation to the Medico-Chirurgical Society of Edinburgh following a series of laboratory experiments on rabbits that confirmed Glover's findings and also refuted Simpson's claims of originality. The laboratory experiments that proved the dangers of chloroform were largely ignored.
The use of chloroform during surgery expanded rapidly in Europe; for instance in the 1850s chloroform was used by the physician John Snow during the births of Queen Victoria's last two children Leopold and Beatrice. In the United States, chloroform began to replace ether as an anesthetic at the beginning of the 20th century; it was abandoned in favor of ether on discovery of its toxicity, especially its tendency to cause fatal cardiac arrhythmias analogous to what is now termed "sudden sniffer's death". Some people used chloroform as a recreational drug or to attempt suicide. One possible mechanism of action of chloroform is that it increases the movement of potassium ions through certain types of potassium channels in nerve cells. Chloroform could also be mixed with other anaesthetic agents such as ether to make C.E. mixture, or ether and alcohol to make A.C.E. mixture.
In 1848, Hannah Greener, a 15-year-old girl who was having an infected toenail removed, died after being given the anaesthetic. Her autopsy establishing the cause of death was undertaken by John Fife assisted by Robert Mortimer Glover. A number of physically fit patients died after inhaling it. In 1848, however, John Snow developed an inhaler that regulated the dosage and so successfully reduced the number of deaths.
The opponents and supporters of chloroform disagreed on the question of whether the medical complications were due to respiratory disturbance or whether chloroform had a specific effect on the heart. Between 1864 and 1910, numerous commissions in Britain studied chloroform but failed to come to any clear conclusions. It was only in 1911 that Levy proved in experiments with animals that chloroform can cause ventricular fibrillation. Despite this, between 1865 and 1920, chloroform was used in 80 to 95% of all narcoses performed in the UK and German-speaking countries. In Germany, comprehensive surveys of the fatality rate during anaesthesia were made by Gurlt between 1890 and 1897. At the same time in the UK the medical journal The Lancet carried out a questionnaire survey and compiled a report detailing numerous adverse reactions to anesthetics, including chloroform. In 1934, Killian gathered all the statistics compiled until then and found that the chances of suffering fatal complications under ether were between 1:14,000 and 1:28,000, whereas with chloroform the chances were between 1:3,000 and 1:6,000. The rise of gas anaesthesia using nitrous oxide, improved equipment for administering anesthetics, and the discovery of hexobarbital in 1932 led to the gradual decline of chloroform narcosis.
The latest reported anaesthetic use of chloroform in the Western world dates to 1987, when the last doctor who used it retired, about 140 years after its first use.
Criminal use
Chloroform has been used by criminals to knock out, daze, or murder victims. Joseph Harris was charged in 1894 with using chloroform to rob people. Serial killer H. H. Holmes used chloroform overdoses to kill his female victims. In September 1900, chloroform was implicated in the murder of the U.S. businessman William Marsh Rice. Chloroform was deemed a factor in the alleged murder of a woman in 1991, when she was asphyxiated while asleep. In 2002, 13-year-old Kacie Woody was sedated with chloroform when she was abducted by David Fuller and during the time that he had her, before he shot and killed her. In a 2007 plea bargain, a man confessed to using stun guns and chloroform to sexually assault minors.
The use of chloroform as an incapacitating agent has become widely recognized, bordering on cliché, through the adoption by crime fiction authors of plots involving criminals' use of chloroform-soaked rags to render victims unconscious. However, it is nearly impossible to incapacitate someone using chloroform in this way. It takes at least five minutes of inhalation of chloroform to render a person unconscious. Most criminal cases involving chloroform involve co-administration of another drug, such as alcohol or diazepam, or the victim being complicit in its administration. After a person has lost consciousness owing to chloroform inhalation, a continuous volume must be administered, and the chin must be supported to keep the tongue from obstructing the airway, a difficult procedure, typically requiring the skills of an anesthesiologist. In 1865, as a direct result of the criminal reputation chloroform had gained, the medical journal The Lancet offered a "permanent scientific reputation" to anyone who could demonstrate "instantaneous insensibility", i.e. loss of consciousness, using chloroform.
Safety
Exposure
Chloroform is formed as a by-product of water chlorination, along with a range of other disinfection by-products, and it is therefore often present in municipal tap water and swimming pools. Reported ranges vary considerably, but are generally below the current health standard for total trihalomethanes (THMs) of 100 μg/L. However, when considered in combination with other trihalomethanes often present in drinking water, the concentration of THMs often exceeds the recommended limit of exposure.
While few studies have assessed the risks posed by chloroform exposure through drinking water in isolation from other THMs, many studies have shown that exposure to the general category of THMs, including chloroform, is associated with an increased risk of cancer of the bladder or lower GI tract.
Historically, chloroform exposure may well have been higher, owing to its common use as an anesthetic, as an ingredient in cough syrups, and as a constituent of tobacco smoke, where DDT had previously been used as a fumigant.
Pharmacology
Chloroform is well absorbed, metabolized, and eliminated rapidly by mammals after oral, inhalation, or dermal exposure. Accidental splashing into the eyes has caused irritation. Prolonged dermal exposure can result in the development of sores as a result of defatting. Elimination is primarily through the lungs as chloroform and carbon dioxide; less than 1% is excreted in the urine.
Chloroform is metabolized in the liver by the cytochrome P-450 enzymes, by oxidation to trichloromethanol and by reduction to the dichloromethyl free radical. Other metabolites of chloroform include hydrochloric acid and diglutathionyl dithiocarbonate, with carbon dioxide as the predominant end-product of metabolism.
Like most other general anesthetics and sedative-hypnotic drugs, chloroform is a positive allosteric modulator at GABAA receptors. Chloroform causes depression of the central nervous system (CNS), ultimately producing deep coma and respiratory center depression. When ingested, chloroform causes symptoms similar to those seen after inhalation. Serious illness has followed ingestion of . The mean lethal oral dose in an adult is estimated at .
The anesthetic use of chloroform has been discontinued, because it caused deaths from respiratory failure and cardiac arrhythmias. Following chloroform-induced anesthesia, some patients suffered nausea, vomiting, hyperthermia, jaundice, and coma owing to hepatic dysfunction. At autopsy, liver necrosis and degeneration have been observed. The hepatotoxicity and nephrotoxicity of chloroform is thought to be due largely to phosgene, one of its metabolites.
Conversion to phosgene
Chloroform converts slowly in the presence of UV light and air to the extremely poisonous gas phosgene (COCl2), releasing HCl in the process.
To prevent accidents, commercial chloroform is stabilized with ethanol or amylene, but samples that have been recovered or dried no longer contain any stabilizer. Amylene has been found to be ineffective, and the phosgene can affect analytes in samples, lipids, and nucleic acids dissolved in or extracted with chloroform. When ethanol is used as a stabiliser for chloroform, it reacts with phosgene (which is soluble in chloroform) to form the relatively harmless diethyl carbonate ester:
2 CH3CH2OH + COCl2 → (CH3CH2O)2CO + 2 HCl
Phosgene and HCl can be removed from chloroform by washing with saturated aqueous carbonate solutions, such as sodium bicarbonate. This procedure is simple and results in harmless products. Phosgene reacts with water to form carbon dioxide and HCl, and the carbonate salt neutralizes the resulting acid.
Suspected samples can be tested for phosgene using filter paper which when treated with 5% diphenylamine, 5% dimethylaminobenzaldehyde in ethanol, and then dried, turns yellow in the presence of phosgene vapour. There are several colorimetric and fluorometric reagents for phosgene, and it can also be quantified using mass spectrometry.
Regulation
Chloroform is suspected of causing cancer (i.e. it is possibly carcinogenic, IARC Group 2B) as per the International Agency for Research on Cancer (IARC) Monographs.
It is classified as an extremely hazardous substance in the United States, as defined in Section 302 of the US Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities that produce, store, or use it in significant quantities.
Bioremediation of chloroform
Some anaerobic bacteria use chloroform for respiration, termed organohalide respiration, converting it to dichloromethane.
Gallery
| Physical sciences | Halocarbons | Chemistry |
82940 | https://en.wikipedia.org/wiki/Ossicles | Ossicles | The ossicles (also called auditory ossicles) are three irregular bones in the middle ear of humans and other mammals, and are among the smallest bones in the human body. Although the term "ossicle" literally means "tiny bone" (from Latin ossiculum) and may refer to any small bone throughout the body, it typically refers specifically to the malleus, incus and stapes ("hammer, anvil, and stirrup") of the middle ear.
The auditory ossicles serve as a kinematic chain to transmit and amplify (intensify) sound vibrations collected from the air by the ear drum to the fluid-filled labyrinth (cochlea). The absence or pathology of the auditory ossicles would constitute a moderate-to-severe conductive hearing loss.
Structure
The ossicles are, in order from the eardrum to the inner ear (from superficial to deep): the malleus, incus, and stapes, terms that in Latin are translated as "the hammer, anvil, and stirrup".
The malleus () articulates with the incus through the incudomalleolar joint and is attached to the tympanic membrane (eardrum), from which vibrational sound pressure motion is passed.
The incus () is connected to both the other bones.
The stapes () articulates with the incus through the incudostapedial joint and is attached to the membrane of the fenestra ovalis, the elliptical or oval window or opening between the middle ear and the vestibule of the inner ear. It is the smallest bone in the body.
Development
Studies have shown that ear bones in mammal embryos are attached to the dentary, which is part of the lower jaw. These are ossified portions of cartilage—called Meckel's cartilage—that are attached to the jaw. As the embryo develops, the cartilage hardens to form bone. Later in development, the bone structure breaks loose from the jaw and migrates to the inner ear area. The structure is known as the middle ear, and is made up of the stapes, incus, malleus, and tympanic membrane. These correspond to the columella, quadrate, articular, and angular structures in the amphibian, bird or reptile jaw.
Evolution
Function
As sound waves vibrate the tympanic membrane (eardrum), it in turn moves the nearest ossicle, the malleus, to which it is attached. The malleus then transmits the vibrations, via the incus, to the stapes, and so ultimately to the membrane of the fenestra ovalis (oval window), the opening to the vestibule of the inner ear.
Sound traveling through the air is mostly reflected when it comes into contact with a liquid medium; only about 1/30 of the sound energy moving through the air would be transferred into the liquid. This is observed from the abrupt cessation of sound that occurs when the head is submerged underwater. This is because the relative incompressibility of a liquid presents resistance to the force of the sound waves traveling through the air. The ossicles give the eardrum a mechanical advantage via lever action and a reduction in the area of force distribution; the resulting vibrations are stronger but don't move as far. This allows more efficient coupling than if the sound waves were transmitted directly from the outer ear to the oval window. This reduction in the area of force application allows a large enough increase in pressure to transfer most of the sound energy into the liquid. The increased pressure will compress the fluid found in the cochlea and transmit the stimulus. Thus, the lever action of the ossicles changes the vibrations so as to improve the transfer and reception of sound, and is a form of impedance matching.
However, the extent of the movements of the ossicles is controlled (and constricted) by two muscles attached to them (the tensor tympani and the stapedius). It is believed that these muscles can contract to dampen the vibration of the ossicles, in order to protect the inner ear from excessively loud noise (theory 1) and that they give better frequency resolution at higher frequencies by reducing the transmission of low frequencies (theory 2) (see acoustic reflex). These muscles are more highly developed in bats and serve to block outgoing cries of the bats during echolocation (SONAR).
Clinical relevance
Occasionally the joints between the ossicles become rigid. One condition, otosclerosis, results in the fusing of the stapes to the oval window. This reduces hearing and may be treated surgically using a passive middle ear implant.
History
There is some doubt as to the discoverers of the auditory ossicles and several anatomists from the early 16th century have the discovery attributed to them with the two earliest being Alessandro Achillini and Jacopo Berengario da Carpi. Several sources, including Eustachi and Casseri, attribute the discovery of the malleus and incus to the anatomist and philosopher Achillini. The first written description of the malleus and incus was by Berengario da Carpi in his Commentaria super anatomia Mundini (1521), although he only briefly described two bones and noted their theoretical association with the transmission of sound. Niccolo Massa's Liber introductorius anatomiae described the same bones in slightly more detail and likened them both to little hammers. A much more detailed description of the first two ossicles followed in Andreas Vesalius' De humani corporis fabrica in which he devoted a chapter to them. Vesalius was the first to compare the second element of the ossicles to an anvil although he offered the molar as an alternative comparison for its shape. The first published description of the stapes came in Pedro Jimeno's Dialogus de re medica (1549) although it had been previously described in public lectures by Giovanni Filippo Ingrassia at the University of Naples as early as 1546.
The term ossicle derives from Latin ossiculum, a diminutive of "bone" (os; genitive ossis). The malleus gets its name from Latin malleus, meaning "hammer"; the incus gets its name from Latin incus, meaning "anvil", from incudere, "to forge with a hammer"; and the stapes gets its name from Modern Latin "stirrup", probably an alteration of Late Latin stapia, related to stare, "to stand", and pedem, an accusative of pes, "foot", so called because the bone is shaped like a stirrup. This was an invented Modern Latin word for "stirrup", for which there was no classical Latin word, as the ancients did not use stirrups.
| Biology and health sciences | Sensory nervous system | Biology |
82961 | https://en.wikipedia.org/wiki/Radio%20astronomy | Radio astronomy | Radio astronomy is a subfield of astronomy that studies celestial objects at radio frequencies. The first detection of radio waves from an astronomical object was in 1933, when Karl Jansky at Bell Telephone Laboratories reported radiation coming from the Milky Way. Subsequent observations have identified a number of different sources of radio emission. These include stars and galaxies, as well as entirely new classes of objects, such as radio galaxies, quasars, pulsars, and masers. The discovery of the cosmic microwave background radiation, regarded as evidence for the Big Bang theory, was made through radio astronomy.
Radio astronomy is conducted using large radio antennas referred to as radio telescopes, that are either used singularly, or with multiple linked telescopes utilizing the techniques of radio interferometry and aperture synthesis. The use of interferometry allows radio astronomy to achieve high angular resolution, as the resolving power of an interferometer is set by the distance between its components, rather than the size of its components.
Radio astronomy differs from radar astronomy in that the former is a passive observation (i.e., receiving only) and the latter an active one (transmitting and receiving).
History
Before Jansky observed the Milky Way in the 1930s, physicists speculated that radio waves could be observed from astronomical sources. In the 1860s, James Clerk Maxwell's equations had shown that electromagnetic radiation is associated with electricity and magnetism, and could exist at any wavelength. Several attempts were made to detect radio emission from the Sun, including an experiment by German astrophysicists Johannes Wilsing and Julius Scheiner in 1896 and a centimeter wave radiation apparatus set up by Oliver Lodge between 1897 and 1900. These attempts were unable to detect any emission due to technical limitations of the instruments. The discovery of the radio-reflecting ionosphere in 1902 led physicists to conclude that the layer would bounce any astronomical radio transmission back into space, making it undetectable.
Karl Jansky made the discovery of the first astronomical radio source serendipitously in the early 1930s. As a newly hired radio engineer with Bell Telephone Laboratories, he was assigned the task of investigating static that might interfere with short wave transatlantic voice transmissions. Using a large directional antenna, Jansky noticed that his analog pen-and-paper recording system kept recording a persistent repeating signal or "hiss" of unknown origin. Since the signal peaked about every 24 hours, Jansky first suspected the source of the interference was the Sun crossing the view of his directional antenna. Continued analysis, however, showed that the source was not following the 24-hour daily cycle of the Sun exactly, but instead repeating on a cycle of 23 hours and 56 minutes. Jansky discussed the puzzling phenomenon with his friend, astrophysicist Albert Melvin Skellett, who pointed out that the observed time between the signal peaks was the exact length of a sidereal day; the time it took for "fixed" astronomical objects, such as a star, to pass in front of the antenna every time the Earth rotated. By comparing his observations with optical astronomical maps, Jansky eventually concluded that the radiation source peaked when his antenna was aimed at the densest part of the Milky Way in the constellation of Sagittarius.
Jansky announced his discovery at a meeting in Washington, D.C., in April 1933 and the field of radio astronomy was born. In October 1933, his discovery was published in a journal article entitled "Electrical disturbances apparently of extraterrestrial origin" in the Proceedings of the Institute of Radio Engineers. Jansky concluded that since the Sun (and therefore other stars) were not large emitters of radio noise, the strange radio interference may be generated by interstellar gas and dust in the galaxy, in particular, by "thermal agitation of charged particles." (Jansky's peak radio source, one of the brightest in the sky, was designated Sagittarius A in the 1950s and was later hypothesized to be emitted by electrons in a strong magnetic field. Current thinking is that these are ions in orbit around a massive black hole at the center of the galaxy at a point now designated as Sagittarius A*. The asterisk indicates that the particles at Sagittarius A are ionized.)
After 1935, Jansky wanted to investigate the radio waves from the Milky Way in further detail, but Bell Labs reassigned him to another project, so he did no further work in the field of astronomy. His pioneering efforts in the field of radio astronomy have been recognized by the naming of the fundamental unit of flux density, the jansky (Jy), after him.
Grote Reber was inspired by Jansky's work, and built a parabolic radio telescope 9 m in diameter in his backyard in 1937. He began by repeating Jansky's observations, and then conducted the first sky survey in the radio frequencies. On February 27, 1942, James Stanley Hey, a British Army research officer, made the first detection of radio waves emitted by the Sun. Later that year George Clark Southworth, at Bell Labs like Jansky, also detected radio waves from the Sun. Both researchers were bound by wartime security surrounding radar, so Reber, who was not, published his 1944 findings first. Several other people independently discovered solar radio waves, including E. Schott in Denmark and Elizabeth Alexander working on Norfolk Island.
At Cambridge University, where ionospheric research had taken place during World War II, J. A. Ratcliffe along with other members of the Telecommunications Research Establishment that had carried out wartime research into radar, created a radiophysics group at the university where radio wave emissions from the Sun were observed and studied.
This early research soon branched out into the observation of other celestial radio sources and interferometry techniques were pioneered to isolate the angular source of the detected emissions. Martin Ryle and Antony Hewish at the Cavendish Astrophysics Group developed the technique of Earth-rotation aperture synthesis. The radio astronomy group in Cambridge went on to found the Mullard Radio Astronomy Observatory near Cambridge in the 1950s. During the late 1960s and early 1970s, as computers (such as the Titan) became capable of handling the computationally intensive Fourier transform inversions required, they used aperture synthesis to create a 'One-Mile' and later a '5 km' effective aperture using the One-Mile and Ryle telescopes, respectively. They used the Cambridge Interferometer to map the radio sky, producing the Second (2C) and Third (3C) Cambridge Catalogues of Radio Sources.
Techniques
Radio astronomers use different techniques to observe objects in the radio spectrum. Instruments may simply be pointed at an energetic radio source to analyze its emission. To "image" a region of the sky in more detail, multiple overlapping scans can be recorded and pieced together in a mosaic image. The type of instrument used depends on the strength of the signal and the amount of detail needed.
Observations from the Earth's surface are limited to wavelengths that can pass through the atmosphere. At low frequencies or long wavelengths, transmission is limited by the ionosphere, which reflects waves with frequencies less than its characteristic plasma frequency. Water vapor interferes with radio astronomy at higher frequencies, which has led to building radio observatories that conduct observations at millimeter wavelengths at very high and dry sites, in order to minimize the water vapor content in the line of sight. Finally, transmitting devices on Earth may cause radio-frequency interference. Because of this, many radio observatories are built at remote places.
Radio telescopes
Radio telescopes may need to be extremely large in order to receive signals with low signal-to-noise ratio. Also since angular resolution is a function of the diameter of the "objective" in proportion to the wavelength of the electromagnetic radiation being observed, radio telescopes have to be much larger in comparison to their optical counterparts. For example, a 1-meter diameter optical telescope is two million times bigger than the wavelength of light observed giving it a resolution of roughly 0.3 arc seconds, whereas a radio telescope "dish" many times that size may, depending on the wavelength observed, only be able to resolve an object the size of the full moon (30 minutes of arc).
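This comparison can be made concrete with the standard diffraction limit θ ≈ 1.22 λ/D; the dish size and wavelengths below are illustrative choices, not figures from this article:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

def resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

print(resolution_arcsec(500e-9, 1.0))   # 1 m optical mirror: ~0.13 arcsec
print(resolution_arcsec(0.21, 25.0))    # 25 m dish at the 21 cm line:
                                        # ~2100 arcsec, about 35 arcmin,
                                        # roughly the size of the full Moon
```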
Radio interferometry
The difficulty in achieving high resolutions with single radio telescopes led to radio interferometry, developed by British radio astronomer Martin Ryle and Australian engineer, radiophysicist, and radio astronomer Joseph Lade Pawsey and Ruby Payne-Scott in 1946. The first use of a radio interferometer for an astronomical observation was carried out by Payne-Scott, Pawsey and Lindsay McCready on 26 January 1946 using a single converted radar antenna (broadside array) at 200 MHz near Sydney, Australia. This group used the principle of a sea-cliff interferometer in which the antenna (formerly a World War II radar) observed the Sun at sunrise with interference arising from the direct radiation from the Sun and the reflected radiation from the sea. With this baseline of almost 200 meters, the authors determined that the solar radiation during the burst phase was much smaller than the solar disk and arose from a region associated with a large sunspot group. The Australia group laid out the principles of aperture synthesis in a ground-breaking paper published in 1947. The use of a sea-cliff interferometer had been demonstrated by numerous groups in Australia, Iran and the UK during World War II, who had observed interference fringes (the direct radar return radiation and the reflected signal from the sea) from incoming aircraft.
The Cambridge group of Ryle and Vonberg observed the Sun at 175 MHz for the first time in mid July 1946 with a Michelson interferometer consisting of two radio antennas with spacings of some tens of meters up to 240 meters. They showed that the radio radiation was smaller than 10 arc minutes in size and also detected circular polarization in the Type I bursts. Two other groups had also detected circular polarization at about the same time (David Martyn in Australia and Edward Appleton with James Stanley Hey in the UK).
Modern radio interferometers consist of widely separated radio telescopes observing the same object that are connected together using coaxial cable, waveguide, optical fiber, or other type of transmission line. This not only increases the total signal collected, it can also be used in a process called aperture synthesis to vastly increase resolution. This technique works by superposing ("interfering") the signal waves from the different telescopes on the principle that waves that coincide with the same phase will add to each other while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is the size of the antennas furthest apart in the array. In order to produce a high quality image, a large number of different separations between different telescopes are required (the projected separation between any two telescopes as seen from the radio source is called a "baseline") – as many different baselines as possible are required in order to get a good quality image. For example, the Very Large Array has 27 telescopes giving 351 independent baselines at once.
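The 351 baselines quoted for the Very Large Array follow from simple pair counting: N antennas form N(N-1)/2 distinct pairs.

```python
def baseline_count(n_antennas: int) -> int:
    """Number of distinct antenna pairs (instantaneous baselines)."""
    return n_antennas * (n_antennas - 1) // 2

print(baseline_count(27))  # 351, as quoted for the Very Large Array
```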
Very-long-baseline interferometry
Beginning in the 1970s, improvements in the stability of radio telescope receivers permitted telescopes from all over the world (and even in Earth orbit) to be combined to perform very-long-baseline interferometry. Instead of physically connecting the antennas, data received at each antenna is paired with timing information, usually from a local atomic clock, and then stored for later analysis on magnetic tape or hard disk. At that later time, the data is correlated with data from other antennas similarly recorded, to produce the resulting image. Using this method it is possible to synthesise an antenna that is effectively the size of the Earth. The large distances between the telescopes enable very high angular resolutions to be achieved, much greater in fact than in any other field of astronomy. At the highest frequencies, synthesised beams less than 1 milliarcsecond are possible.
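Plugging an Earth-sized baseline into the same θ ≈ λ/B estimate shows why milliarcsecond and finer beams are achievable; the observing wavelength below is an illustrative assumption:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

baseline_m = 12.742e6    # roughly the diameter of the Earth
wavelength_m = 1.3e-3    # 1.3 mm, a typical high-frequency VLBI band

theta_arcsec = wavelength_m / baseline_m * ARCSEC_PER_RAD
print(f"~{theta_arcsec * 1e6:.0f} microarcseconds")  # ~21 microarcseconds
```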
The pre-eminent VLBI arrays operating today are the Very Long Baseline Array (with telescopes located across North America) and the European VLBI Network (telescopes in Europe, China, South Africa and Puerto Rico). Each array usually operates separately, but occasional projects are observed together producing increased sensitivity. This is referred to as Global VLBI. There is also a VLBI network operating in Australia and New Zealand called the LBA (Long Baseline Array), and arrays in Japan, China and South Korea which observe together to form the East-Asian VLBI Network (EAVN).
Since its inception, recording data onto hard media was the only way to bring the data recorded at each telescope together for later correlation. However, the availability today of worldwide, high-bandwidth networks makes it possible to do VLBI in real time. This technique (referred to as e-VLBI) was originally pioneered in Japan, and more recently adopted in Australia and in Europe by the EVN (European VLBI Network) who perform an increasing number of scientific e-VLBI projects per year.
Astronomical sources
Radio astronomy has led to substantial increases in astronomical knowledge, particularly with the discovery of several classes of new objects, including pulsars, quasars and radio galaxies. This is because radio astronomy allows us to see things that are not detectable in optical astronomy. Such objects represent some of the most extreme and energetic physical processes in the universe.
The cosmic microwave background radiation was also first detected using radio telescopes. However, radio telescopes have also been used to investigate objects much closer to home, including observations of the Sun and solar activity, and radar mapping of the planets.
Other sources include:
Sun
Jupiter
Sagittarius A, the Galactic Center of the Milky Way, with one portion Sagittarius A* thought to be a radio wave emitting supermassive black hole
Active galactic nuclei and pulsars have jets of charged particles which emit synchrotron radiation
Merging galaxy clusters often show diffuse radio emission
Supernova remnants can also show diffuse radio emission; pulsars are a type of supernova remnant that shows highly synchronous emission.
The cosmic microwave background is blackbody radio/microwave emission
Earth's radio signal is mostly natural and stronger than, for example, Jupiter's; it is produced by Earth's auroras and bounces off the ionosphere back into space.
International regulation
Radio astronomy service (also: radio astronomy radiocommunication service) is, according to Article 1.58 of the International Telecommunication Union's (ITU) Radio Regulations (RR), defined as "A radiocommunication service involving the use of radio astronomy". Subject of this radiocommunication service is to receive radio waves transmitted by astronomical or celestial objects.
Frequency allocation
The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012).
In order to improve harmonisation in spectrum utilisation, the majority of service allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations, which are within the responsibility of the appropriate national administration. The allocation might be primary, secondary, exclusive, or shared.
primary allocation: indicated by writing in capital letters (e.g., RADIO ASTRONOMY)
secondary allocation: indicated by small letters (e.g., Radio astronomy)
exclusive or shared utilization: within the responsibility of administrations
In line with the appropriate ITU Region, the frequency bands are allocated (on a primary or secondary basis) to the radio astronomy service as follows.
| Physical sciences | Radio astronomy | Astronomy |
82974 | https://en.wikipedia.org/wiki/Post-traumatic%20stress%20disorder | Post-traumatic stress disorder | Post-traumatic stress disorder (PTSD) is a mental and behavioral disorder that develops from experiencing a traumatic event, such as sexual assault, domestic violence, child abuse, warfare and its associated traumas, natural disaster, traffic collision, or other threats on a person's life or well-being. Symptoms may include disturbing thoughts, feelings, or dreams related to the events, mental or physical distress to trauma-related cues, attempts to avoid trauma-related cues, alterations in the way a person thinks and feels, and an increase in the fight-or-flight response. These symptoms last for more than a month after the event and can include triggers such as misophonia. Young children are less likely to show distress, but instead may express their memories through play.
A person with PTSD is at a higher risk of suicide and intentional self-harm. PTSD may provoke violent behavior by the sufferer.
Most people who experience traumatic events do not develop PTSD. People who experience interpersonal violence such as rape, other sexual assaults, being kidnapped, stalking, physical abuse by an intimate partner, and childhood abuse are more likely to develop PTSD than those who experience non-assault based trauma, such as accidents and natural disasters. Those who experience prolonged trauma, such as slavery, concentration camps, or chronic domestic abuse, may develop complex post-traumatic stress disorder (C-PTSD). C-PTSD is similar to PTSD, but has a distinct effect on a person's emotional regulation and core identity.
Prevention may be possible when counselling is targeted at those with early symptoms, but is not effective when provided to all trauma-exposed individuals regardless of whether symptoms are present. The main treatments for people with PTSD are counselling (psychotherapy) and medication. Antidepressants of the SSRI or SNRI type are the first-line medications used for PTSD and are moderately beneficial for about half of people. Benefits from medication are less than those seen with counselling. It is not known whether using medications and counselling together has greater benefit than either method separately. Medications, other than some SSRIs or SNRIs, do not have enough evidence to support their use and, in the case of benzodiazepines, may worsen outcomes.
In the United States, about 3.5% of adults have PTSD in a given year, and 9% of people develop it at some point in their life. In much of the rest of the world, rates during a given year are between 0.5% and 1%. Higher rates may occur in regions of armed conflict. It is more common in women than men.
Symptoms of trauma-related mental disorders have been documented since at least the time of the ancient Greeks. Evidence of post-traumatic illness has been argued to exist from the seventeenth and eighteenth centuries, such as the diary of Samuel Pepys, who described intrusive and distressing symptoms following the Great Fire of London in 1666. During the world wars, the condition was known under various terms, including 'shell shock', 'war nerves', neurasthenia and 'combat neurosis'. The term "post-traumatic stress disorder" came into use in the 1970s, in large part due to the diagnoses of U.S. military veterans of the Vietnam War. It was officially recognized by the American Psychiatric Association in 1980 in the third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III).
Signs and symptoms
Symptoms of PTSD generally begin within the first three months after the inciting traumatic event, but may not begin until years later. In the typical case, the individual with PTSD persistently avoids either trauma-related thoughts and emotions or discussion of the traumatic event and may even have amnesia of the event (dissociative amnesia). However, the event is commonly relived by the individual through intrusive, recurrent recollections, dissociative episodes of reliving the trauma ("flashbacks"), and nightmares (50 to 70%). While it is common to have symptoms after any traumatic event, these must persist to a sufficient degree (i.e., causing dysfunction in life or clinical levels of distress) for longer than one month after the trauma to be classified as PTSD (clinically significant dysfunction or distress for less than one month after the trauma may be acute stress disorder). Some following a traumatic event experience post-traumatic growth.
Associated medical conditions
Trauma survivors often develop depression, anxiety disorders, and mood disorders in addition to PTSD. More than 50% of those with PTSD have co-morbid anxiety, mood, or substance use disorders.
Substance use disorders, such as alcohol use disorder, commonly co-occur with PTSD. Recovery from post-traumatic stress disorder or other anxiety disorders may be hindered, or the condition worsened, when substance use disorders are comorbid with PTSD. Resolving these problems can bring about improvement in an individual's mental health status and anxiety levels.
PTSD has a strong association with tinnitus, and it has been speculated that PTSD may even cause some of the tinnitus seen in association with the condition.
In children and adolescents, there is a strong association between emotional regulation difficulties (e.g., mood swings, anger outbursts, temper tantrums) and post-traumatic stress symptoms, independent of age, gender, or type of trauma.
Moral injury, the feeling of moral distress such as shame or guilt following a moral transgression, is associated with PTSD but is distinguished from it. Moral injury is associated with shame and guilt, while PTSD is associated with anxiety and fear.
In a population-based study examining veterans of the Vietnam War, the presence of PTSD and exposure to high-level stressors on the battlefield were associated with a two-fold increased risk of death, with the leading causes of death being ischemic heart disease or cancers of the respiratory tract, including lung cancer.
Risk factors
Persons considered at risk for developing PTSD include combat military personnel, survivors of natural disasters, concentration camp survivors, and survivors of violent crime. Persons employed in occupations that expose them to violence (such as soldiers) or disasters (such as emergency service workers) are also at risk. Other occupations at an increased risk include police officers, firefighters, first responders, ambulance personnel, health care professionals, train drivers, divers, journalists, and sailors, as well as people who work at banks, post offices, or in stores. The intensity of the traumatic event is also associated with a subsequent risk of developing PTSD, with experiences of witnessed death, witnessed or experienced torture, injury, bodily disfigurement, and traumatic brain injury being highly associated with the development of PTSD. Similarly, experiences that are unexpected or in which the victim cannot escape are also associated with a high risk of developing PTSD.
Trauma
PTSD has been associated with a wide range of traumatic events. The risk of developing PTSD after a traumatic event varies by trauma type and is the highest following exposure to sexual violence (11.4%), particularly rape (19.0%). Men are more likely to experience a traumatic event (of any type), but women are more likely to experience the kind of high-impact traumatic event that can lead to PTSD, such as interpersonal violence and sexual assault.
Motor vehicle collision survivors, both children and adults, are at an increased risk of PTSD. Globally, about 2.6% of adults are diagnosed with PTSD following a non-life-threatening traffic accident, and a similar proportion of children develop PTSD. Risk of PTSD almost doubles to 4.6% for life-threatening auto accidents. Females were more likely to be diagnosed with PTSD following a road traffic accident, whether the accident occurred during childhood or adulthood.
Post-traumatic stress reactions have been studied in children and adolescents. The rate of PTSD might be lower in children than adults, but in the absence of therapy, symptoms may continue for decades. One estimate suggests that the proportion of children and adolescents having PTSD in a non-wartorn population in a developed country may be 1% compared to 1.5% to 3% of adults. On average, 16% of children exposed to a traumatic event develop PTSD, with the incidence varying according to type of exposure and gender. Similar to the adult population, risk factors for PTSD in children include: female gender, exposure to disasters (natural or man-made), negative coping behaviors, and/or lacking proper social support systems.
Predictor models have consistently found that childhood trauma, chronic adversity, neurobiological differences, and familial stressors are associated with risk for PTSD after a traumatic event in adulthood. It has been difficult to identify consistently which aspects of events predict PTSD, but peritraumatic dissociation has been a fairly consistent predictive indicator of its development. Proximity to, duration of, and severity of the trauma all have an impact. It has been speculated that interpersonal traumas cause more problems than impersonal ones, but this is controversial. The risk of developing PTSD is increased in individuals who are exposed to physical abuse, physical assault, or kidnapping. Women who experience physical violence are more likely to develop PTSD than men.
Intimate partner violence
An individual who has been exposed to domestic violence is predisposed to the development of PTSD, and there is a strong association between PTSD in mothers and their experience of domestic violence during the perinatal period of pregnancy.
Those who have experienced sexual assault or rape may develop symptoms of PTSD. The likelihood of sustained symptoms of PTSD is higher if the rapist confined or restrained the person, if the person being raped believed the rapist would kill them, the person who was raped was very young or very old, and if the rapist was someone they knew. The likelihood of sustained severe symptoms is also higher if people around the survivor ignore (or are ignorant of) the rape or blame the rape survivor.
War-related trauma, refugees
Military service in combat is a risk factor for developing PTSD. Around 22% of people exposed to combat develop PTSD; in about 25% of military personnel who develop PTSD, its appearance is delayed.
Refugees are also at an increased risk for PTSD due to their exposure to war, hardships, and traumatic events. The rates for PTSD within refugee populations range from 4% to 86%. While the stresses of war affect everyone involved, displaced persons have been shown to be more affected than others.
Challenges related to the overall psychosocial well-being of refugees are complex and individually nuanced. Refugees have reduced levels of well-being and a high rate of mental distress due to past and ongoing trauma. Groups that are particularly affected and whose needs often remain unmet are women, older people, and unaccompanied minors. Post-traumatic stress and depression in refugee populations also tend to affect their educational success.
Unexpected death of a loved one
Sudden, unexpected death of a loved one is the most common traumatic event type reported in cross-national studies. However, the majority of people who experience this type of event will not develop PTSD. An analysis from the WHO World Mental Health Surveys found a 5.2% risk of developing PTSD after learning of the unexpected death of a loved one. Because of the high prevalence of this type of traumatic event, unexpected death of a loved one accounts for approximately 20% of PTSD cases worldwide.
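The arithmetic behind that apparent paradox is simple: a modest conditional risk applied to a very common exposure can generate more cases than a high conditional risk applied to a rare one. The toy calculation below uses the conditional risks quoted in this article but entirely hypothetical exposure rates, and considers only two event types, so the resulting shares are illustrative and are not the surveys' actual figures.

    # Hypothetical exposure rates; conditional risks follow the text
    # (5.2% after unexpected death of a loved one, 19.0% after rape).
    exposures = {
        "unexpected death of a loved one": (0.50, 0.052),  # (exposure rate, risk)
        "rape": (0.05, 0.190),
    }
    cases = {event: rate * risk for event, (rate, risk) in exposures.items()}
    total = sum(cases.values())
    for event, share in cases.items():
        print(f"{event}: {100 * share / total:.0f}% of cases in this toy model")

Even with a conditional risk nearly four times lower, the far more common event accounts for the majority of cases in this two-event model.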
Life-threatening illness
Medical conditions associated with an increased risk of PTSD include cancer, heart attack, and stroke. 22% of cancer survivors present with lifelong PTSD-like symptoms. Intensive-care unit (ICU) hospitalization is also a risk factor for PTSD. Some women experience PTSD from their experiences related to breast cancer and mastectomy. Loved ones of those who experience life-threatening illnesses are also at risk for developing PTSD, such as parents of a child with chronic illnesses.
Research demonstrates that survivors of psychotic episodes, which occur in diseases such as schizophrenia, schizoaffective disorder, bipolar I disorder, and others, are at greater risk for PTSD due to the experiences one may have during and after psychosis. Such traumatic experiences include, but are not limited to, the treatment patients experience in psychiatric hospitals, police interactions due to psychotic behavior, suicidal behavior and attempts, social stigma and embarrassment due to behavior while in psychosis, frequent terrifying experiences due to psychosis, and the fear of losing control or actual loss of control. The incidence of PTSD in survivors of psychosis may be as low as 11% and as high as 67%.
Cancer
Prevalence estimates of cancer‐related PTSD range between 7% and 14%, with an additional 10% to 20% of patients experiencing subsyndromal post-traumatic stress symptoms (PTSS). Both PTSD and PTSS have been associated with increased distress and impaired quality of life, and have been reported in newly diagnosed patients as well as in long‐term survivors.
The PTSD Field Trials for the Diagnostic and Statistical Manual, Fourth Edition (DSM-IV), revealed that 22% of cancer survivors present with lifetime cancer-related PTSD (CR-PTSD), endorsing cancer diagnosis and treatment as a traumatic stressor.
Therefore, as the number of people diagnosed with cancer increases and cancer survivorship improves, cancer-related PTSD becomes a more prominent issue, and thus, providing for cancer patients' physical and psychological needs becomes increasingly important.
Evidence‐based treatments such as eye movement desensitization and reprocessing (EMDR) and cognitive behavioral therapy (CBT) are available for PTSD, and indeed, there have been promising reports of their effectiveness in cancer patients.
Pregnancy-related trauma
Women who experience miscarriage are at risk of PTSD. Those who experience subsequent miscarriages have an increased risk of PTSD compared to those experiencing only one. PTSD can also occur after childbirth and the risk increases if a woman has experienced trauma prior to the pregnancy. Prevalence of PTSD following normal childbirth (that is, excluding stillbirth or major complications) is estimated to be between 2.8 and 5.6% at six weeks postpartum, with rates dropping to 1.5% at six months postpartum. Symptoms of PTSD are common following childbirth, with prevalence of 24–30.1% at six weeks, dropping to 13.6% at six months. Emergency childbirth is also associated with PTSD.
Natural disasters
Genetics
There is evidence that susceptibility to PTSD is hereditary. Approximately 30% of the variance in PTSD is attributable to genetics alone. For twin pairs exposed to combat in Vietnam, having a monozygotic (identical) twin with PTSD was associated with an increased risk of the co-twin's having PTSD compared to twins that were dizygotic (non-identical). Women with a smaller hippocampus might be more likely to develop PTSD following a traumatic event, based on preliminary findings. Research has also found that PTSD shares many genetic influences common to other psychiatric disorders. Panic and generalized anxiety disorders and PTSD share 60% of the same genetic variance. Alcohol, nicotine, and drug dependence share greater than 40% genetic similarity.
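Heritability estimates of this kind are often derived from twin data using Falconer's formula, which doubles the difference between monozygotic and dizygotic twin correlations. The sketch below applies that standard formula to hypothetical correlations (not taken from the studies cited above), chosen only so that the estimate matches the roughly 30% figure.

    def falconer_h2(r_mz, r_dz):
        # Falconer's estimate of heritability from twin correlations:
        # h^2 = 2 * (r_MZ - r_DZ)
        return 2 * (r_mz - r_dz)

    # Hypothetical twin correlations, for illustration only:
    print(f"{falconer_h2(r_mz=0.45, r_dz=0.30):.2f}")  # prints 0.30, i.e. ~30% of variance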
Pathophysiology
Neuroendocrinology
PTSD symptoms may result when a traumatic event causes an over-reactive adrenaline response, which creates deep neurological patterns in the brain. These patterns can persist long after the event that triggered the fear, making an individual hyper-responsive to future fearful situations. During traumatic experiences, the high levels of stress hormones secreted suppress hypothalamic activity that may be a major factor toward the development of PTSD.
PTSD causes biochemical changes in the brain and body that differ from those of other psychiatric disorders such as major depression. Individuals diagnosed with PTSD respond more strongly to a dexamethasone suppression test than individuals diagnosed with clinical depression.
Most people with PTSD show a low secretion of cortisol and high secretion of catecholamines in urine, with a norepinephrine/cortisol ratio consequently higher than comparable non-diagnosed individuals. This is in contrast to the normative fight-or-flight response, in which both catecholamine and cortisol levels are elevated after exposure to a stressor.
Brain catecholamine levels are high, and corticotropin-releasing factor (CRF) concentrations are high. Together, these findings suggest abnormality in the hypothalamic-pituitary-adrenal (HPA) axis.
The maintenance of fear has been shown to include the HPA axis, the locus coeruleus-noradrenergic systems, and the connections between the limbic system and frontal cortex. The HPA axis that coordinates the hormonal response to stress, which activates the LC-noradrenergic system, is implicated in the over-consolidation of memories that occurs in the aftermath of trauma. This over-consolidation increases the likelihood of one's developing PTSD. The amygdala is responsible for threat detection and the conditioned and unconditioned fear responses that are carried out as a response to a threat.
The HPA axis is responsible for coordinating the hormonal response to stress. Given the strong cortisol suppression to dexamethasone in PTSD, HPA axis abnormalities are likely predicated on strong negative feedback inhibition of cortisol, itself likely due to an increased sensitivity of glucocorticoid receptors.
PTSD has been hypothesized to be a maladaptive learning pathway to fear response through a hypersensitive, hyperreactive, and hyperresponsive HPA axis.
Low cortisol levels may predispose individuals to PTSD: Swedish soldiers serving in Bosnia and Herzegovina with low pre-service salivary cortisol levels had a higher risk of reacting with PTSD symptoms following war trauma than soldiers with normal pre-service levels. Because cortisol is normally important in restoring homeostasis after the stress response, it is thought that trauma survivors with low cortisol experience a poorly contained—that is, longer and more distressing—response, setting the stage for PTSD.
It is thought that the locus coeruleus-noradrenergic system mediates the over-consolidation of fear memory. High levels of cortisol reduce noradrenergic activity, and because people with PTSD tend to have reduced levels of cortisol, it has been proposed that individuals with PTSD cannot regulate the increased noradrenergic response to traumatic stress. Intrusive memories and conditioned fear responses are thought to be a result of the response to associated triggers. Neuropeptide Y (NPY) has been reported to reduce the release of norepinephrine and has been demonstrated to have anxiolytic properties in animal models. Studies have shown people with PTSD demonstrate reduced levels of NPY, possibly indicating their increased anxiety levels.
Other studies indicate that people with PTSD have chronically low levels of serotonin, which contributes to the commonly associated behavioral symptoms such as anxiety, ruminations, irritability, aggression, suicidality, and impulsivity. Serotonin also contributes to the stabilization of glucocorticoid production.
Dopamine levels in a person with PTSD can contribute to symptoms: low levels can contribute to anhedonia, apathy, impaired attention, and motor deficits; high levels can contribute to psychosis, agitation, and restlessness.
Studies have also described elevated concentrations of the thyroid hormone triiodothyronine in PTSD. This kind of type 2 allostatic adaptation may contribute to increased sensitivity to catecholamines and other stress mediators.
Hyperresponsiveness in the norepinephrine system can also be caused by continued exposure to high stress. Overactivation of norepinephrine receptors in the prefrontal cortex can be connected to the flashbacks and nightmares frequently experienced by those with PTSD. A decrease in other norepinephrine functions (awareness of the current environment) prevents the memory mechanisms in the brain from processing the experience, and emotions the person is experiencing during a flashback are not associated with the current environment.
There is considerable controversy within the medical community regarding the neurobiology of PTSD. A 2012 review showed no clear relationship between cortisol levels and PTSD. The majority of reports indicate people with PTSD have elevated levels of corticotropin-releasing hormone, lower basal cortisol levels, and enhanced negative feedback suppression of the HPA axis by dexamethasone.
Neuroimmunology
Studies on the peripheral immune system have found dysfunction, with elevated cytokine levels and a higher risk of immune-related chronic diseases among individuals with PTSD. Neuroimmune dysfunction has also been found in PTSD, raising the possibility of a suppressed central immune response due to reduced activity of microglia in the brain in response to immune challenges. Individuals with PTSD, compared to controls, show a smaller increase in a marker of microglial activation (18-kDa translocator protein) following lipopolysaccharide administration. This neuroimmune suppression is also associated with greater severity of anhedonic symptoms. Researchers suggest that treatments aimed at restoring neuroimmune function could be beneficial for alleviating PTSD symptoms.
Neuroanatomy
A meta-analysis of structural MRI studies found an association with reduced total brain volume, intracranial volume, and volumes of the hippocampus, insula cortex, and anterior cingulate. Much of this research stems from PTSD in those exposed to the Vietnam War.
People with PTSD have decreased brain activity in the dorsal and rostral anterior cingulate cortices and the ventromedial prefrontal cortex, areas linked to the experience and regulation of emotion.
The amygdala is strongly involved in forming emotional memories, especially fear-related memories. During high stress, the hippocampus, which is associated with placing memories in the correct context of space and time and with memory recall, is suppressed. According to one theory, this suppression may be the cause of the flashbacks that can affect people with PTSD. When someone with PTSD encounters stimuli similar to the traumatic event, the body perceives the event as occurring again because the memory was never properly recorded.
The amygdalocentric model of PTSD proposes that the amygdala is very much aroused and insufficiently controlled by the medial prefrontal cortex and the hippocampus, in particular during extinction. This is consistent with an interpretation of PTSD as a syndrome of deficient extinction ability.
The basolateral nucleus (BLA) of the amygdala is responsible for the comparison and development of associations between unconditioned and conditioned responses to stimuli, which results in the fear conditioning present in PTSD. The BLA activates the central nucleus (CeA) of the amygdala, which elaborates the fear response (including behavioral response to threat and elevated startle response). Descending inhibitory inputs from the medial prefrontal cortex (mPFC) regulate the transmission from the BLA to the CeA, which is hypothesized to play a role in the extinction of conditioned fear responses.
While amygdala hyperactivity is, on the whole, reported by meta-analyses of functional neuroimaging in PTSD, there is a large degree of heterogeneity, more so than in social anxiety disorder or phobic disorder. Comparing dorsal (roughly the CeA) and ventral (roughly the BLA) clusters, hyperactivity is more robust in the ventral cluster, while hypoactivity is evident in the dorsal cluster. The distinction may explain the blunted emotions in PTSD (via desensitization in the CeA) as well as the fear-related component.
In a 2007 study, Vietnam War combat veterans with PTSD showed a 20% reduction in the volume of their hippocampus compared with veterans who did not have such symptoms. This finding was not replicated in chronic PTSD patients traumatized at an air show plane crash in 1988 (Ramstein, Germany).
Evidence suggests that endogenous cannabinoid levels are reduced in PTSD, particularly anandamide, and that cannabinoid receptors (CB1) are increased in order to compensate. There appears to be a link between increased CB1 receptor availability in the amygdala and abnormal threat processing and hyperarousal, but not dysphoria, in trauma survivors.
A 2020 study found no evidence for conclusions from prior research that suggested low IQ is a risk factor for developing PTSD.
Diagnosis
PTSD can be difficult to diagnose, because of:
the subjective nature of most of the diagnostic criteria (although this is true for many mental disorders);
the potential for over-reporting, e.g., while seeking disability benefits, or when PTSD could be a mitigating factor at criminal sentencing;
the potential for under-reporting, e.g., stigma, pride, fear that a PTSD diagnosis might preclude certain employment opportunities;
symptom overlap with other mental disorders such as obsessive compulsive disorder and generalized anxiety disorder;
association with other mental disorders such as major depressive disorder and generalized anxiety disorder;
substance use disorders, which often produce some of the same signs and symptoms as PTSD;
the fact that substance use disorders can increase vulnerability to PTSD or exacerbate PTSD symptoms, or both;
the fact that PTSD increases the risk of developing substance use disorders; and
the differential expression of symptoms across cultures (specifically with respect to avoidance and numbing symptoms, distressing dreams, and somatic symptoms).
Screening
There are a number of PTSD screening instruments for adults, such as the PTSD Checklist for DSM-5 (PCL-5) and the Primary Care PTSD Screen for DSM-5 (PC-PTSD-5). The 17-item PTSD Checklist can also be used to monitor the severity of symptoms and the response to treatment.
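Instruments of this kind are typically scored by summing Likert-type item ratings and comparing the total against a published cutoff. The sketch below follows the commonly cited scoring scheme for the PCL-5 (20 items rated 0–4, total 0–80, with a screening cutoff often placed around 31–33), but the exact cutoff varies by setting and should be checked against current guidance; the responses shown are hypothetical, and a positive screen is not a diagnosis.

    def pcl5_screen(ratings, cutoff=33):
        # Sum 20 item ratings (each 0-4) and compare to a commonly cited
        # cutoff; this is a screening aid, not a diagnostic instrument.
        assert len(ratings) == 20 and all(0 <= r <= 4 for r in ratings)
        total = sum(ratings)
        return total, total >= cutoff

    severity, positive = pcl5_screen([2] * 20)  # hypothetical responses
    print(severity, positive)  # prints: 40 True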
There are also several screening and assessment instruments for use with children and adolescents. These include the Child PTSD Symptom Scale (CPSS), Child Trauma Screening Questionnaire, and UCLA Post-traumatic Stress Disorder Reaction Index for DSM-IV.
In addition, there are also screening and assessment instruments for caregivers of very young children (six years of age and younger). These include the Young Child PTSD Screen, the Young Child PTSD Checklist, and the Diagnostic Infant and Preschool Assessment.
Assessment
Evidence-based assessment principles, including a multimethod assessment approach, form the foundation of PTSD assessment. Those who conduct assessments for PTSD may use various clinician-administered interviews and instruments to provide an official PTSD diagnosis. Some commonly used, reliable, and valid assessment instruments for PTSD diagnosis, in accordance with the DSM-5, include the Clinician-Administered PTSD Scale for the DSM-5 (CAPS-5), PTSD Symptom Scale Interview (PSS-I-5), and Structured Clinical Interview for DSM-5 – PTSD Module (SCID-5 PTSD Module).
In the DSM and ICD
PTSD was classified as an anxiety disorder in the DSM-IV, but has since been reclassified as a "trauma- and stressor-related disorder" in the DSM-5. The DSM-5 diagnostic criteria for PTSD include four symptom clusters: re-experiencing, avoidance, negative alterations in cognition/mood, and alterations in arousal and reactivity.
The International Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) classifies PTSD under "Reaction to severe stress, and adjustment disorders." The ICD-10 criteria for PTSD include re-experiencing, avoidance, and either increased reactivity or inability to recall certain details related to the event.
The ICD-11 diagnostic description for PTSD contains three components or symptom groups: (1) re-experiencing, (2) avoidance, and (3) a heightened sense of threat. ICD-11 no longer includes verbal thoughts about the traumatic event as a symptom. A lower rate of diagnosed PTSD is predicted using ICD-11 compared to ICD-10 or DSM-5. ICD-11 also proposes identifying a distinct group with complex post-traumatic stress disorder (CPTSD), who have more often experienced several or sustained traumas and have greater functional impairment than those with PTSD.
Differential diagnosis
A diagnosis of PTSD requires that the person has been exposed to an extreme stressor. Any stressor can result in a diagnosis of adjustment disorder and it is an appropriate diagnosis for a stressor and a symptom pattern that does not meet the criteria for PTSD.
The symptom pattern for acute stress disorder must occur and be resolved within four weeks of the trauma. If symptoms persist longer and the pattern fits that of PTSD, the diagnosis may be changed.
Obsessive–compulsive disorder (OCD) may be diagnosed for intrusive thoughts that are recurring but not related to a specific traumatic event.
In extreme cases of prolonged, repeated traumatization where there is no viable chance of escape, survivors may develop complex post-traumatic stress disorder. This occurs as a result of layers of trauma rather than a single traumatic event, and includes additional symptomatology, such as the loss of a coherent sense of self.
Prevention
Modest benefits have been seen from early access to cognitive behavioral therapy. Critical incident stress management has been suggested as a means of preventing PTSD, but subsequent studies suggest the likelihood of its producing negative outcomes. A 2019 Cochrane review did not find any evidence to support the use of an intervention offered to everyone, and found that "multiple session interventions may result in worse outcome than no intervention for some individuals." The World Health Organization recommends against the use of benzodiazepines and antidepressants for acute stress (symptoms lasting less than one month). Some evidence supports the use of hydrocortisone for prevention in adults, although there is limited or no evidence supporting propranolol, escitalopram, temazepam, or gabapentin.
Psychological debriefing
Trauma-exposed individuals often receive treatment called psychological debriefing in an effort to prevent PTSD, which consists of interviews meant to allow individuals to directly confront the event and share their feelings with the counselor, and to help structure their memories of the event. However, several meta-analyses have found that psychological debriefing is unhelpful and potentially harmful, and does not reduce the future risk of developing PTSD. This is true for both single-session debriefing and multiple-session interventions. As of 2017, the American Psychological Association assessed psychological debriefing as "No Research Support/Treatment is Potentially Harmful".
Early intervention
Trauma focused intervention delivered within days or weeks of the potentially traumatic event has been found to decrease PTSD symptoms. Similar to psychological debriefing, the goal of early intervention is to lessen the intensity and frequency of stress symptoms, with the aim of preventing new-onset or relapsed mental disorders and further distress later in the healing process.
Risk-targeted interventions
Risk-targeted interventions are those that attempt to mitigate specific formative information or events. They can involve modeling normal behaviors, instruction on a task, or giving information about the event.
Management
Reviews of studies have found that combination therapy (psychological and pharmacotherapy) is no more effective than psychological therapy alone.
Counselling
The approaches with the strongest evidence include behavioral and cognitive-behavioral therapies such as prolonged exposure therapy, cognitive processing therapy (CPT), and eye movement desensitization and reprocessing (EMDR). There is some evidence for brief eclectic psychotherapy (BEP), narrative exposure therapy (NET), and written exposure therapy.
A 2019 Cochrane review evaluated couples and family therapies compared to no care and individual and group therapies for the treatment of PTSD. There were too few studies on couples therapies to determine if substantive benefits were derived, but preliminary RCTs suggested that couples therapies may be beneficial for reducing PTSD symptoms.
A meta-analytic comparison of EMDR and CBT found both protocols indistinguishable in terms of effectiveness in treating PTSD; however, "the contribution of the eye movement component in EMDR to treatment outcome" is unclear. A meta-analysis in children and adolescents also found that EMDR was as efficacious as CBT.
Children with PTSD are far more likely to pursue treatment at school (because of its proximity and ease) than at a free clinic.
Cognitive behavioral therapy
CBT seeks to change the way a person feels and acts by changing the patterns of thinking or behavior, or both, responsible for negative emotions. A 2018 systematic review found high-strength evidence supporting CBT-exposure therapy as efficacious for reducing PTSD and depression symptoms, as well as for the loss of a PTSD diagnosis. CBT has been proven to be an effective treatment for PTSD and is currently considered the standard of care for PTSD by the United States Department of Defense.
In CBT, individuals learn to identify thoughts that make them feel afraid or upset and replace them with less distressing thoughts. The goal is to understand how certain thoughts about events cause PTSD-related stress. A study assessing an online version of CBT for people with mild-to-moderate PTSD found that the online approach was as effective as, and cheaper than, the same therapy given face-to-face. A 2021 Cochrane review that assessed the provision of CBT in an Internet-based format found beneficial effects similar to those of face-to-face therapy. However, the quality of the evidence was low due to the small number of trials reviewed.
Exposure therapy is a type of cognitive behavioral therapy that involves assisting trauma survivors to re-experience distressing trauma-related memories and reminders in order to facilitate habituation and successful emotional processing of the trauma memory. Most exposure therapy programs include both imaginal confrontation with the traumatic memories and real-life exposure to trauma reminders; this type of CBT has shown benefit in the treatment of PTSD.
Some organizations have endorsed the need for exposure. The U.S. Department of Veterans Affairs has been actively training mental health treatment staff in prolonged exposure therapy and cognitive processing therapy in an effort to better treat U.S. veterans with PTSD.
Recent research on contextually based third-generation behavior therapies suggests that they may produce results comparable to some of the better validated therapies. Many of these therapy methods have a significant element of exposure and have demonstrated success in treating the primary problems of PTSD and co-occurring depressive symptoms.
Eye movement desensitization and reprocessing
Eye movement desensitization and reprocessing (EMDR) is a form of psychotherapy developed and studied by Francine Shapiro. She had noticed that, when she was thinking about disturbing memories herself, her eyes were moving rapidly. When she brought her eye movements under control while thinking, the thoughts were less distressing.
In 2002, Shapiro and Maxfield published a theory of why this might work, called adaptive information processing. This theory proposes that eye movement can be used to facilitate emotional processing of memories, changing the person's memory to attend to more adaptive information. The therapist initiates voluntary rapid eye movements while the person focuses on memories, feelings or thoughts about a particular trauma. The therapist uses hand movements to get the person to move their eyes backward and forward, but hand-tapping or tones can also be used. EMDR closely resembles cognitive behavior therapy as it combines exposure (re-visiting the traumatic event), working on cognitive processes and relaxation/self-monitoring. However, exposure by way of being asked to think about the experience rather than talk about it has been highlighted as one of the more important distinguishing elements of EMDR.
There have been several small, controlled trials of four to eight weeks of EMDR in adults as well as children and adolescents. There is moderate strength of evidence to support the efficacy of EMDR "for reduction in PTSD symptoms, loss of diagnosis, and reduction in depressive symptoms" according to a 2018 systematic review update. EMDR reduced PTSD symptoms enough in the short term that one in two adults no longer met the criteria for PTSD, but the number of people involved in these trials was small and thus results should be interpreted with caution pending further research. There was not enough evidence to know whether EMDR could eliminate PTSD in adults.
In children and adolescents, a recent meta-analysis of randomized controlled trials using MetaNSUE to avoid biases related to missing information found that EMDR was at least as efficacious as CBT, and superior to waitlist or placebo. There was some evidence that EMDR might prevent depression. There were no studies comparing EMDR to other psychological treatments or to medication. Adverse effects were largely unstudied. The benefits were greater for women with a history of sexual assault compared with people who had experienced other types of traumatizing events (such as accidents, physical assaults and war). There is a small amount of evidence that EMDR may improve re-experiencing symptoms in children and adolescents, but EMDR has not been shown to improve other PTSD symptoms, anxiety, or depression.
The eye movement component of the therapy may not be critical for benefit. As there has been no major, high quality randomized trial of EMDR with eye movements versus EMDR without eye movements, the controversy over effectiveness is likely to continue. Authors of a meta-analysis published in 2013 stated, "We found that people treated with eye movement therapy had greater improvement in their symptoms of post-traumatic stress disorder than people given therapy without eye movements.... Secondly, we found that in laboratory studies the evidence concludes that thinking of upsetting memories and simultaneously doing a task that facilitates eye movements reduces the vividness and distress associated with the upsetting memories."
Interpersonal psychotherapy
Other approaches, in particular involving social supports, may also be important. An open trial of interpersonal psychotherapy reported high rates of remission from PTSD symptoms without using exposure.
Medication
While many medications do not have enough evidence to support their use, four (sertraline, fluoxetine, paroxetine, and venlafaxine) have been shown to have a small to modest benefit over placebo. With many medications, residual PTSD symptoms following treatment are the rule rather than the exception.
Antidepressants
Selective serotonin reuptake inhibitors (SSRIs) and serotonin–norepinephrine reuptake inhibitors (SNRIs) may have some benefit for PTSD symptoms. Tricyclic antidepressants are equally effective, but are less well tolerated. Evidence provides support for a small or modest improvement with sertraline, fluoxetine, paroxetine, and venlafaxine. Thus, these four medications are considered to be first-line medications for PTSD. The SSRIs paroxetine and sertraline are approved by the U.S. Food and Drug Administration (FDA) for the treatment of PTSD.
Benzodiazepines
Benzodiazepines are not recommended for the treatment of PTSD due to a lack of evidence of benefit and risk of worsening PTSD symptoms. Some authors believe that the use of benzodiazepines is contraindicated for acute stress, as this group of drugs can cause dissociation. Nevertheless, some use benzodiazepines with caution for short-term anxiety and insomnia. While benzodiazepines can alleviate acute anxiety, there is no consistent evidence that they can stop the development of PTSD and may actually increase the risk of developing PTSD 2–5 times. Benzodiazepines should not be used in the immediate aftermath of a traumatic event as they may increase symptoms related to PTSD.
Benzodiazepines may reduce the effectiveness of psychotherapeutic interventions, and there is some evidence that benzodiazepines may actually contribute to the development and chronification of PTSD. For those who already have PTSD, benzodiazepines may worsen and prolong the course of illness, by worsening psychotherapy outcomes, and causing or exacerbating aggression, depression (including suicidality), and substance use. Drawbacks include the risk of developing a benzodiazepine dependence, tolerance (i.e., short-term benefits wearing off with time), and withdrawal syndrome; additionally, individuals with PTSD (even those without a history of alcohol or drug misuse) are at an increased risk of abusing benzodiazepines.
Due to a number of other treatments with greater efficacy for PTSD and fewer risks, benzodiazepines should be considered relatively contraindicated until all other treatment options are exhausted.
Benzodiazepines also carry a risk of disinhibition (associated with suicidality, aggression and crimes) and their use may delay or inhibit more definitive treatments for PTSD.
Prazosin
Prazosin, an alpha-1 adrenergic antagonist, has been used in veterans with PTSD to reduce nightmares. Studies show variability in the symptom improvement, appropriate dosages, and efficacy in this population.
Glucocorticoids
Glucocorticoids may be useful for short-term therapy to protect against neurodegeneration caused by the extended stress response that characterizes PTSD, but long-term use may actually promote neurodegeneration.
Cannabinoids
Cannabis is not recommended as a treatment for PTSD because scientific evidence does not currently exist demonstrating treatment efficacy for cannabinoids. However, use of cannabis or derived products is widespread among U.S. veterans with PTSD.
The cannabinoid nabilone is sometimes used for nightmares in PTSD. Although some short-term benefit was shown, adverse effects are common and it has not been adequately studied to determine efficacy. An increasing number of U.S. states have legalized the use of medical cannabis for the treatment of PTSD.
Other
Exercise, sport and physical activity
Physical activity can influence people's psychological and physical health. The U.S. National Center for PTSD recommends moderate exercise as a way to distract from disturbing emotions, build self-esteem and increase feelings of being in control again. They recommend a discussion with a doctor before starting an exercise program.
Play therapy for children
Play is thought to help children link their inner thoughts with their outer world, connecting real experiences with abstract thought. Repetitive play can also be one way a child relives traumatic events, and that can be a symptom of trauma in a child or young person. Although it is commonly used, there have not been enough studies comparing outcomes in groups of children receiving and not receiving play therapy, so the effects of play therapy are not yet understood.
Military programs
Many veterans of the wars in Iraq and Afghanistan have faced significant physical, emotional, and relational disruptions. In response, the United States Marine Corps has instituted programs to assist them in re-adjusting to civilian life, especially in their relationships with spouses and loved ones, to help them communicate better and understand what the other has gone through. Walter Reed Army Institute of Research (WRAIR) developed the Battlemind program to assist service members avoid or ameliorate PTSD and related problems. Wounded Warrior Project partnered with the US Department of Veterans Affairs to create Warrior Care Network, a national health system of PTSD treatment centers.
Nightmares
In 2020, the United States Food and Drug Administration granted marketing approval for an Apple Watch app called NightWare. The app aims to improve sleep for people suffering from PTSD-related nightmares by vibrating when it detects a nightmare in progress, based on monitoring of heart rate and body movement.
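NightWare's actual detection logic is proprietary and not described here; purely to illustrate the general idea of triggering a haptic cue from heart rate and movement, a toy sketch with made-up thresholds might look like this:

    # Toy illustration only; the real app's algorithm is proprietary and
    # certainly more sophisticated than fixed thresholds.
    def should_intervene(heart_rate, movement, resting_hr,
                         hr_factor=1.25, movement_threshold=0.6):
        # Flag a possible nightmare when heart rate is well above the
        # wearer's resting baseline and body movement is elevated.
        return heart_rate > hr_factor * resting_hr and movement > movement_threshold

    if should_intervene(heart_rate=92, movement=0.8, resting_hr=60):
        print("trigger gentle haptic vibration")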
The "colour cure"
Toward the end of the First World War art connoisseur Howard Kemp Prossor came up with what he called the "colour cure" – the use of specific colours to ease the suffering of people with shell shock.
Epidemiology
There is debate over the rates of PTSD found in populations, but, despite changes in diagnosis and the criteria used to define PTSD between 1997 and 2013, epidemiological rates have not changed significantly. Most of the current reliable data regarding the epidemiology of PTSD is based on DSM-IV criteria, as the DSM-5 was not introduced until 2013.
The United Nations' World Health Organization publishes estimates of PTSD impact for each of its member states; the latest data available are for 2004. Considering only the 25 most populated countries ranked by overall age-standardized Disability-Adjusted Life Year (DALY) rate, the top half of the ranked list is dominated by Asian/Pacific countries, the US, and Egypt. Ranking the countries by male-only or female-only rates produces much the same result, but less meaningfully, as the score range in the single-sex rankings is much reduced (4 for women, 3 for men, compared with 14 for the overall score range), suggesting that the differences between female and male rates within each country are what drive the distinctions between the countries.
As of 2017, the cross-national lifetime prevalence of PTSD was 3.9%, based on a survey in which 5.6% of respondents had been exposed to trauma. The primary factor affecting treatment-seeking behavior, which can help to mitigate the development of PTSD after trauma, was income; being younger, female, and having less social status (less education, lower individual income, and being unemployed) were all factors associated with less treatment-seeking behavior.
United States
PTSD affects about 5% of the US adult population each year.
The National Comorbidity Survey Replication has estimated that the lifetime prevalence of PTSD among adult Americans is 6.8%, with women (9.7%) more than twice as likely as men (3.6%) to have PTSD at some point in their lives. More than 60% of men and more than 60% of women experience at least one traumatic event in their life. The most frequently reported traumatic events by men are rape, combat, and childhood neglect or physical abuse. Women most frequently report instances of rape, sexual molestation, physical attack, being threatened with a weapon, and childhood physical abuse. 88% of men and 79% of women with lifetime PTSD have at least one comorbid psychiatric disorder. Major depressive disorder (48% of men and 49% of women) and lifetime alcohol use disorder or dependence (51.9% of men and 27.9% of women) are the most common comorbid disorders.
Military combat
The United States Department of Veterans Affairs estimates that 830,000 Vietnam War veterans had symptoms of PTSD. The National Vietnam Veterans' Readjustment Study (NVVRS) found 15% of male and 9% of female Vietnam veterans had PTSD at the time of the study. Life-time prevalence of PTSD was 31% for males and 27% for females. In a reanalysis of the NVVRS data, along with analysis of the data from the Matsunaga Vietnam Veterans Project, Schnurr, Lunney, Sengupta, and Waelde found that, contrary to the initial analysis of the NVVRS data, a large majority of Vietnam veterans had PTSD symptoms (but not the disorder itself). Four out of five reported recent symptoms when interviewed 20–25 years after Vietnam.
A 2011 study from Georgia State University and San Diego State University found that rates of PTSD diagnosis increased significantly when troops were stationed in combat zones, had tours of longer than a year, experienced combat, or were injured. Military personnel serving in combat zones were 12.1 percentage points more likely to receive a PTSD diagnosis than their active-duty counterparts in non-combat zones. Those serving more than 12 months in a combat zone were 14.3 percentage points more likely to be diagnosed with PTSD than those having served less than one year.
Experiencing an enemy firefight was associated with an 18.3 percentage point increase in the probability of PTSD, while being wounded or injured in combat was associated with a 23.9 percentage point increase in the likelihood of a PTSD diagnosis. For the 2.16 million U.S. troops deployed in combat zones between 2001 and 2010, the total estimated two-year costs of treatment for combat-related PTSD are between $1.54 billion and $2.69 billion.
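Spread across everyone deployed, those totals work out to roughly $700–$1,250 per deployed service member over the two years, as the quick division below shows.

    troops = 2_160_000           # deployed in combat zones, 2001-2010
    low, high = 1.54e9, 2.69e9   # estimated two-year treatment costs (USD)
    print(f"${low / troops:,.0f} to ${high / troops:,.0f} per deployed member")
    # prints: $713 to $1,245 per deployed member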
As of 2013, rates of PTSD had been estimated at up to 20% for veterans returning from Iraq and Afghanistan, and 13% of veterans returning from Iraq were unemployed.
Human-made disasters
The September 11 attacks took the lives of nearly 3,000 people and left 6,000 injured. First responders (police, firefighters, and emergency medical technicians), sanitation workers, and volunteers were all involved in the recovery efforts. The prevalence of probable PTSD in these highly exposed populations was estimated across several studies using in-person, telephone, and online interviews and questionnaires. Overall prevalence of PTSD was highest immediately following the attacks and decreased over time. However, disparities were found among the different types of recovery workers. The rate of probable PTSD among first responders was lowest directly after the attacks and increased from a range of 4.8–7.8% at the 5–6-year follow-up to 7.4–16.5% at a later assessment.
When comparing traditional responders to non-traditional responders (volunteers), the probable PTSD prevalence 2.5 years after the initial visit was greater in volunteers (17.2%) than in traditional responders (11.7%). Volunteer participation in tasks atypical to the defined occupational role was a significant risk factor for PTSD. Other risk factors included exposure intensity, earlier start date, duration of time spent on site, and constant, negative reminders of the trauma.
Additional research has been performed to understand the social consequences of the September 11 attacks. Alcohol consumption was assessed in a cohort of World Trade Center workers using the cut down-annoyed-guilty-eye-opener (CAGE) questionnaire for alcohol use disorder. Almost 50% of World Trade Center workers who self-identified as alcohol users reported drinking more during the rescue efforts, and nearly a quarter of these individuals reported drinking more following the recovery. Workers determined to have probable PTSD status had double the risk of developing an alcohol problem compared to those without psychological morbidity. Social disability was also studied in this cohort as a social consequence of the September 11 attacks. Defined as the disruption of family, work, and social life, the risk of developing social disability increased 17-fold among those categorized as having probable PTSD.
Anthropology
Cultural and medical anthropologists have questioned the validity of applying the diagnostic criteria of PTSD cross-culturally.
Trauma (and resulting PTSD) is often experienced through the outermost limits of suffering, pain and fear. The images and experiences relived through PTSD often defy easy description through language. Therefore, the translation of these experiences from one language to another is problematic, and the primarily Euro-American research on trauma is necessarily limited. The Sapir-Whorf hypothesis suggests that people perceive the world differently according to the language they speak: language and the world it exists within reflect back on the perceptions of the speaker.
For example, ethnopsychology studies in Nepal have found that cultural idioms and concepts related to trauma often do not translate to western terminologies: piDaa is a term that may align to trauma/suffering, but people who suffer from piDaa are also considered paagal (mad) and are subject to negative social stigma, indicating the need for culturally appropriate and carefully tailored support interventions. More generally, different cultures remember traumatic experiences within different linguistic and cultural paradigms, whereas the diagnostic criteria of PTSD, as defined in the Diagnostic and Statistical Manual of Mental Disorders (DSM-III), were constructed through the Euro-American paradigm of psychology.
There remains a dearth of studies into the conceptual frameworks that surround trauma in non-Western cultures. There is little evidence of therapeutic benefit in synthesizing local idioms of distress into a culturally constructed disorder of the post-Vietnam era, a practice anthropologists believe contributes to category fallacy. For many cultures there is no single linguistic corollary to PTSD, psychological trauma being a multi-faceted concept with corresponding variances of expression.
Designating the effects of trauma as an affliction of the spirit is common in many non-Western cultures where idioms such as "soul loss" and "weak heart" indicate a preference to confer suffering to a spirit-body or heart-body diametric. These idioms reflect the emphasis that collectivist cultures place on healing trauma through familial, cultural and religious activities while avoiding the stigma that accompanies a mind-body approach. Prescribing PTSD diagnostics within these communities is ineffective and often detrimental. For trauma that extends beyond the individual, such as the effects of war, anthropologists believe applying the term "social suffering" or "cultural bereavement" to be more beneficial.
Every facet of society is affected by conflict; prolonged exposure to mass violence can lead to 'continuous suffering' among civilians, soldiers, and bordering countries. When PTSD entered the DSM in 1980, clinicians and psychiatrists based its diagnostic criteria on American veterans of the Vietnam War. Though the DSM is reviewed and updated regularly, it is unable to fully encompass the disorder due to its Americanization (or Westernization). That is, what may be considered characteristic of PTSD in western society may not directly translate to other cultures around the world. Displaced people of the African country Burundi experienced symptoms of depression and anxiety, though few symptoms specific to PTSD were noted.
In a similar review, Sudanese refugees relocated in Uganda were 'concerned with material [effects]' (lack of food, shelter, and healthcare) rather than psychological distress. In this case, many refugees did not present symptoms at all, with a minor few developing anxiety and depression. War-related stresses and traumas become ingrained in the individual; however, their effects differ from culture to culture, and the "clear-cut" rubric for diagnosing PTSD does not allow for culturally contextual reactions to take place.
Veterans
United States
The United States provides a range of benefits for veterans that the VA has determined have PTSD, which developed during, or as a result of, their military service. These benefits may include tax-free cash payments, free or low-cost mental health treatment and other healthcare, vocational rehabilitation services, employment assistance, and independent living support.
United Kingdom
In the UK, there are various charities and service organisations dedicated to aiding veterans in readjusting to civilian life. The Royal British Legion and the more recently established Help for Heroes are two of Britain's more high-profile veterans' organisations which have actively advocated for veterans over the years. There has been some controversy that the NHS has not done enough in tackling mental health issues and is instead "dumping" veterans on charities such as Combat Stress.
Canada
Veterans Affairs Canada provides assistance to disabled veterans that includes rehabilitation, financial aid, job placement, healthcare, disability compensation, peer support, and family support.
History
Aspects of PTSD in soldiers of ancient Assyria have been identified using written sources from 1300 to 600 BCE. These Assyrian soldiers would undergo a three-year rotation of combat before being allowed to return home, and were reported to have faced immense challenges in reconciling their past actions in war with their civilian lives.
Connections between the actions of Viking berserkers and the hyperarousal of post-traumatic stress disorder have also been drawn.
Psychiatrist Jonathan Shay has proposed that Lady Percy's soliloquy in the William Shakespeare play Henry IV, Part 1 (act 2, scene 3, lines 40–62), written around 1597, represents an unusually accurate description of the symptom constellation of PTSD.
Many historical wartime diagnoses such as railway spine, stress syndrome, nostalgia, soldier's heart, shell shock, battle fatigue, combat stress reaction, and traumatic war neurosis are now associated with PTSD.
The correlation between combat and PTSD is well documented; according to Stéphane Audoin-Rouzeau and Annette Becker, "One-tenth of mobilized American men were hospitalized for mental disturbances between 1942 and 1945, and, after thirty-five days of uninterrupted combat, 98% of them manifested psychiatric disturbances in varying degrees."
The DSM-I (1952) includes a diagnosis of "gross stress reaction", which has similarities to the modern definition and understanding of PTSD. Gross stress reaction is defined as a normal personality using established patterns of reaction to deal with overwhelming fear as a response to conditions of great stress. The diagnosis includes language which relates the condition to combat as well as to "civilian catastrophe".
The addition of the term to the DSM-III was greatly influenced by the experiences and conditions of U.S. military veterans of the Vietnam War. In fact, much of the available published research regarding PTSD is based on studies done on veterans of the war in Vietnam.
Because of the initial overt focus on PTSD as a combat-related disorder when it was first fleshed out in the years following the war in Vietnam, Ann Wolbert Burgess and Lynda Lytle Holmstrom defined rape trauma syndrome (RTS) in 1975 to draw attention to the striking similarities between the experiences of soldiers returning from war and of rape victims. This paved the way for a more comprehensive understanding of the causes of PTSD.
Early in 1978, the diagnostic term "post-traumatic stress disorder" was first recommended in a working-group finding presented to the Committee of Reactive Disorders.
A USAF study carried out in 1979 focused on individuals (civilian and military) who had worked to recover or identify the remains of those who died in Jonestown. The bodies had been dead for several days, and a third of them had been children. The study used the term "dysphoria" to describe PTSD-like symptoms.
After PTSD became an official American psychiatric diagnosis with the publication of DSM-III (1980), where it was spelled "posttraumatic stress disorder", the number of personal injury lawsuits (tort claims) asserting that the plaintiff had PTSD increased rapidly. However, triers of fact (judges and juries) often regarded the PTSD diagnostic criteria as imprecise, a view shared by legal scholars, trauma specialists, forensic psychologists, and forensic psychiatrists.
Professional discussions and debates in academic journals, at conferences, and between thought leaders led to a more clearly defined set of diagnostic criteria in DSM-IV (1994), particularly the definition of a "traumatic event". The DSM-IV classified PTSD under anxiety disorders. In the ICD-10 (first used in 1994), the condition was spelled "post-traumatic stress disorder".
In 2012, researchers from the Grady Trauma Project highlighted the tendency to focus on the combat side of PTSD: "less public awareness has focused on civilian PTSD, which results from trauma exposure that is not combat related..." and "much of the research on civilian PTSD has focused on the sequelae of a single, disastrous event, such as the Oklahoma City bombing, September 11th attacks, and Hurricane Katrina". This disparity in research focus reinforced the already popular perception that combat and PTSD are exclusively linked, which is misleading when it comes to understanding the implications and extent of PTSD as a neurological disorder.
The DSM-5 (2013) created a new category called "trauma and stressor-related disorders", in which PTSD is now classified.
America's 2014 National Comorbidity Survey reports that "the traumas most commonly associated with PTSD are combat exposure and witnessing among men and rape and sexual molestation among women."
Terminology
The Diagnostic and Statistical Manual of Mental Disorders does not hyphenate "post" and "traumatic"; thus, the DSM-5 lists the disorder as posttraumatic stress disorder. However, many scientific journal articles and other scholarly publications do hyphenate the name of the disorder, viz., "post-traumatic stress disorder". Dictionaries also differ in their preferred spelling, with the Collins English Dictionary – Complete and Unabridged using the hyphenated form, and the American Heritage Dictionary of the English Language, Fifth Edition and the Random House Kernerman Webster's College Dictionary giving the non-hyphenated form.
Some authors have used the terms "post-traumatic stress syndrome" or "post-traumatic stress symptoms" ("PTSS"), or simply "post-traumatic stress" ("PTS") in the case of the U.S. Department of Defense, to avoid stigma associated with the word "disorder".
The comedian George Carlin criticized the euphemism treadmill that progressively changed the way PTSD was referred to over the course of the 20th century: from "shell shock" in the First World War, to "battle fatigue" in the Second World War, to "operational exhaustion" in the Korean War, to the current "post-traumatic stress disorder", coined during the Vietnam War, which "added a hyphen" and which, he commented, "completely burie[s] [the pain] under jargon". He also stated that the name given to the condition has had a direct effect on the way veteran soldiers with PTSD were treated and perceived by civilian populations over time.
Research
Most knowledge regarding PTSD comes from studies in high-income countries.
To recapitulate some of the neurological and neurobehavioral symptoms experienced by the veteran population of recent conflicts in Iraq and Afghanistan, researchers at the Roskamp Institute and the James A. Haley Veterans' Hospital (Tampa) have developed an animal model to study the consequences of mild traumatic brain injury (mTBI) and PTSD. In the laboratory, the researchers exposed mice to repeated sessions of an unpredictable stressor (e.g., predator odor while restrained) and physical trauma in the form of inescapable foot shock, in some cases combined with an mTBI. In this study, PTSD animals demonstrated recall of traumatic memories, anxiety, and impaired social behavior, while animals subjected to both mTBI and PTSD showed a pattern of disinhibitory-like behavior. mTBI abrogated both the contextual fear and the impairments in social behavior seen in PTSD animals. In comparison with other animal studies, examination of neuroendocrine and neuroimmune responses in plasma revealed a trend toward increased corticosterone in the PTSD and combination groups.
Stellate ganglion block is an experimental procedure for the treatment of PTSD.
Researchers are investigating a number of experimental FAAH- and MAGL-inhibiting drugs in hopes of finding better treatments for anxiety and stress-related illnesses. In 2016, the FAAH inhibitor BIA 10-2474 was withdrawn from human trials in France after serious adverse events, including the death of one participant.
Evidence from clinical trials suggests that MDMA-assisted psychotherapy is an effective treatment for PTSD. On August 9, 2024, the FDA issued a letter stating that a further trial was necessary to ascertain that the benefits of MDMA-assisted psychotherapy outweighed the potential harms. Positive findings in clinical trials of MDMA-assisted psychotherapy might be substantially influenced by expectancy effects, given the unblinding of participants; to limit this confound, it has been suggested that future trials compare MDMA against an active placebo. There is also a lack of trials comparing MDMA-assisted psychotherapy to existing first-line treatments for PTSD, such as trauma-focused psychological treatments, which seem to achieve similar or even better outcomes.
Psychotherapy
Trauma-focused psychotherapies for PTSD (also known as "exposure-based" or "exposure" psychotherapies), such as prolonged exposure therapy (PE), eye movement desensitization and reprocessing (EMDR), and cognitive processing therapy (CPT), have the most evidence for efficacy and are recommended as first-line treatment for PTSD by almost all clinical practice guidelines. Exposure-based psychotherapies demonstrate efficacy for PTSD caused by different trauma types, such as combat, sexual assault, or natural disasters. At the same time, many trauma-focused psychotherapies have high drop-out rates.
Most systematic reviews and clinical guidelines indicate that psychotherapies for PTSD, most of which are trauma-focused therapies, are more effective than pharmacotherapy (medication), although there are reviews that suggest exposure-based psychotherapies for PTSD and pharmacotherapy are equally effective. Interpersonal psychotherapy shows preliminary evidence of probable efficacy, but more research is needed to reach definitive conclusions.
| Biology and health sciences | Mental disorder | null |
83124 | https://en.wikipedia.org/wiki/Black%20dwarf | Black dwarf | A black dwarf is a theoretical stellar remnant, specifically a white dwarf that has cooled sufficiently to no longer emit significant heat or light. Because the time required for a white dwarf to reach this state is calculated to be longer than the current age of the universe (13.8 billion years), no black dwarfs are expected to exist in the universe at the present time. The temperature of the coolest white dwarfs is one observational limit on the universe's age.
The name "black dwarf" has also been applied to hypothetical late-stage cooled brown dwarfs substellar objects with insufficient mass (less than approximately 0.07 ) to maintain hydrogen-burning nuclear fusion.
Formation
A white dwarf is what remains of a main sequence star of low or medium mass (below approximately 9 to 10 solar masses) after it has either expelled or fused all the elements for which it has sufficient temperature to fuse. What is left is then a dense sphere of electron-degenerate matter that cools slowly by thermal radiation, eventually becoming a black dwarf.
If black dwarfs were to exist, they would be challenging to detect because, by definition, they would emit very little radiation. They would, however, be detectable through their gravitational influence. In 2012, astronomers using MDM Observatory's 2.4-meter telescope found several white dwarfs that had cooled below the temperature equivalent of the M0 spectral class; they are estimated to be 11 to 12 billion years old.
Because the far-future evolution of stars depends on physical questions which are poorly understood, such as the nature of dark matter and the possibility and rate of proton decay (which is yet to be proven to exist), it is not known precisely how long it would take white dwarfs to cool to blackness. Barrow and Tipler estimate that it would take 10^15 years for a white dwarf to cool to ; however, if weakly interacting massive particles (WIMPs) exist, interactions with these particles may keep some white dwarfs much warmer than this for approximately 10^25 years. If protons are not stable, white dwarfs will also be kept warm by energy released from proton decay. For a hypothetical proton lifetime of 10^37 years, Adams and Laughlin calculate that proton decay will raise the effective surface temperature of an old one-solar-mass white dwarf to approximately . Although cold, this is thought to be hotter than the cosmic microwave background radiation temperature 10^37 years in the future.
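The scale of the proton-decay heating can be sketched with a rough order-of-magnitude estimate (an illustrative calculation, not a figure taken from the cited work). A one-solar-mass white dwarf whose protons decay with lifetime $\tau_p$ radiates a luminosity of roughly

$$L \sim \frac{M c^2}{\tau_p} \approx \frac{(2\times10^{30}\,\mathrm{kg})(3\times10^{8}\,\mathrm{m\,s^{-1}})^2}{3\times10^{44}\,\mathrm{s}} \sim 10^{3}\,\mathrm{W},$$

taking $\tau_p = 10^{37}$ years $\approx 3\times10^{44}$ s. Radiated from an Earth-sized surface of radius $R \sim 10^{7}$ m, the Stefan–Boltzmann law then gives an effective temperature of

$$T_{\mathrm{eff}} = \left(\frac{L}{4\pi R^{2}\sigma}\right)^{1/4} \approx 0.05\,\mathrm{K},$$

a few hundredths of a kelvin, consistent with the qualitative picture above.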
It is speculated that some massive black dwarfs may eventually produce supernova explosions. These will occur if pycnonuclear (density-based) fusion processes much of the star into nickel-56, which decays to iron via positron emission. Because this conversion lowers the Chandrasekhar limit, the limit for some black dwarfs would eventually fall below their actual mass; such a star would then collapse and initiate runaway nuclear fusion. The most massive to explode would be just below the Chandrasekhar limit, at around 1.41 solar masses, and would take of the order of , while the least massive to explode would be about 1.16 solar masses and would take of the order , totaling around 1% of all black dwarfs. One major caveat is that proton decay would decrease the mass of a black dwarf far more rapidly than pycnonuclear processes occur, preventing any supernova explosions.
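The mechanism can be made concrete with the standard textbook scaling of the Chandrasekhar mass (an illustrative estimate, not taken from the cited study):

$$M_{\mathrm{Ch}} \approx 1.44 \left(\frac{2}{\mu_e}\right)^{2} M_{\odot},$$

where $\mu_e$ is the mean molecular weight per electron. Converting a carbon–oxygen composition ($\mu_e = 2$) into iron-56 ($\mu_e = 56/26 \approx 2.15$) lowers the limit by a factor of about $(2/2.15)^2 \approx 0.87$, to roughly $1.25\,M_\odot$, broadly consistent with the mass range quoted above and explaining why only the most massive black dwarfs could ever cross it.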
Future of the Sun
Once the Sun stops fusing helium in its core and ejects its outer layers in a planetary nebula in about 8 billion years, it will become a white dwarf and, over trillions of years, will eventually cease emitting light. After that, the Sun will no longer be visible to the naked eye, disappearing from optical view even though its gravitational effects remain evident. The estimated time for the Sun to cool enough to become a black dwarf is at least 10^15 (1 quadrillion) years, though it could take much longer if weakly interacting massive particles (WIMPs) exist, as described above. These phenomena are considered a promising way to test for the existence of WIMPs and black dwarfs.
| Physical sciences | Stellar astronomy | Astronomy |
83137 | https://en.wikipedia.org/wiki/Software-defined%20radio | Software-defined radio | Software-defined radio (SDR) is a radio communication system where components that conventionally have been implemented in analog hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a computer or embedded system. While the concept of SDR is not new, the rapidly evolving capabilities of digital electronics render practical many processes which were once only theoretically possible.
A basic SDR system may consist of a computer equipped with a sound card, or other analog-to-digital converter, preceded by some form of RF front end. Significant amounts of signal processing are handed over to the general-purpose processor, rather than being done in special-purpose hardware (electronic circuits). Such a design produces a radio which can receive and transmit widely different radio protocols (sometimes referred to as waveforms) based solely on the software used.
Software radios have significant utility for the military and cell phone services, both of which must serve a wide variety of changing radio protocols in real time. In the long term, software-defined radios are expected by proponents like the Wireless Innovation Forum to become the dominant technology in radio communications. SDRs, along with software-defined antennas, are the enablers of cognitive radio.
Operating principles
Superheterodyne receivers use a VFO (variable-frequency oscillator), mixer, and filter to tune the desired signal to a common IF (intermediate frequency) or baseband. Typically in SDR, this signal is then sampled by the analog-to-digital converter. However, in some applications it is not necessary to tune the signal to an intermediate frequency and the radio frequency signal is directly sampled by the analog-to-digital converter (after amplification).
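To illustrate what tuning in software looks like after the ADC, the following is a minimal Python sketch of digital downconversion using NumPy and SciPy; the sample rate, intermediate frequency, and filter parameters are illustrative assumptions, and the input is simulated rather than taken from real hardware:

import numpy as np
from scipy.signal import firwin, lfilter

fs = 1_000_000          # ADC sample rate in Hz (assumed value)
f_if = 250_000          # intermediate frequency to shift to baseband, Hz (assumed)
n = np.arange(200_000)  # sample indices

# Stand-in for the ADC output: a narrowband signal sitting 5 kHz above the IF
x = np.cos(2 * np.pi * (f_if + 5_000) * n / fs)

# Mix with a complex local oscillator to shift the IF down to 0 Hz
lo = np.exp(-2j * np.pi * f_if * n / fs)
baseband = x * lo

# Low-pass filter to isolate the channel of interest, then decimate
taps = firwin(numtaps=129, cutoff=20_000, fs=fs)
filtered = lfilter(taps, 1.0, baseband)
decimated = filtered[::10]  # effective sample rate is now fs / 10

Everything after this point (demodulation, further filtering) is ordinary array processing, which is the essence of the software-defined approach.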
Real analog-to-digital converters lack the dynamic range to pick up sub-microvolt, nanowatt-power radio signals produced by an antenna. Therefore, a low-noise amplifier must precede the conversion step and this device introduces its own problems. For example, if spurious signals are present (which is typical), these compete with the desired signals within the amplifier's dynamic range. They may introduce distortion in the desired signals, or may block them completely. The standard solution is to put band-pass filters between the antenna and the amplifier, but these reduce the radio's flexibility. Real software radios often have two or three analog channel filters with different bandwidths that are switched in and out.
The flexibility of SDR allows for dynamic spectrum usage, alleviating the need to statically assign the scarce spectral resources to a single fixed service.
History
In 1970, a researcher at a United States Department of Defense laboratory coined the term "digital receiver". A laboratory called the Gold Room at TRW in California created a software baseband analysis tool called Midas, which had its operation defined in software.
In 1982, while working under a US Department of Defense contract at RCA, Ulrich L. Rohde's department developed the first SDR, which used the COSMAC (Complementary Symmetry Monolithic Array Computer) chip. Rohde was the first to present on this topic with his February 1984 talk, "Digital HF Radio: A Sampling of Techniques" at the Third International Conference on HF Communication Systems and Techniques in London.
In 1984, a team at the Garland, Texas, Division of E-Systems Inc. (now Raytheon) coined the term "software radio" to refer to a digital baseband receiver, as published in their E-Team company newsletter. A 'Software Radio Proof-of-Concept' laboratory was developed by the E-Systems team that popularized Software Radio within various government agencies. This 1984 Software Radio was a digital baseband receiver that provided programmable interference cancellation and demodulation for broadband signals, typically with thousands of adaptive filter taps, using multiple array processors accessing shared memory.
In 1991, Joe Mitola independently reinvented the term software radio for a plan to build a GSM base station that would combine Ferdensi's digital receiver with E-Systems Melpar's digitally controlled communications jammers for a true software-based transceiver. E-Systems Melpar sold the software radio idea to the US Air Force. Melpar built a prototype commanders' tactical terminal in 1990–1991 that employed Texas Instruments TMS320C30 processors and Harris Corporation digital receiver chip sets with digitally synthesized transmission. The Melpar prototype did not last long, because when E-Systems ECI Division manufactured the first limited production units, they decided to "throw out those useless C30 boards", replacing them with conventional RF filtering on transmit and receive and reverting to a digital baseband radio instead of the SpeakEasy-like IF ADC/DACs of Mitola's prototype. The Air Force would not let Mitola publish the technical details of that prototype, nor would they let Diane Wasserman publish related software life cycle lessons learned, because they regarded it as a "USAF competitive advantage". So instead, with USAF permission, in 1991, Mitola described the architecture principles without implementation details in a paper, "Software Radio: Survey, Critical Analysis and Future Directions", which became the first IEEE publication to employ the term in 1992. When Mitola presented the paper at the conference, Bob Prill of GEC Marconi began his presentation following Mitola with: "Joe is absolutely right about the theory of a software radio and we are building one." Prill gave a GEC Marconi paper on PAVE PILLAR, a SpeakEasy precursor. SpeakEasy, the military software radio, was formulated by Wayne Bonser, then of Rome Air Development Center (RADC), now Rome Labs; by Alan Margulies of MITRE Rome, NY; by then-Lt Beth Kaspar, the original DARPA SpeakEasy project manager; and by others at Rome, including Don Upmal. Although Mitola's IEEE publications resulted in the largest global footprint for software radio, Mitola privately credits that DoD lab of the 1970s, with its leaders Carl, Dave, and John, with inventing the digital receiver technology on which he based software radio once it was possible to transmit via software.
A few months after the National Telesystems Conference 1992, in an E-Systems corporate program review, a vice-president of E-Systems Garland Division objected to Melpar's (Mitola's) use of the term "software radio" without credit to Garland. Alan Jackson, Melpar VP of marketing at that time, asked the Garland VP if their laboratory or devices included transmitters. The Garland VP said: "No, of course not — ours is a software radio receiver." Al replied: "Then it's a digital receiver but without a transmitter, it's not a software radio." Corporate leadership agreed with Al, so the publication stood. Many amateur radio operators and HF radio engineers had realized the value of digitizing HF at RF and of processing it with Texas Instruments TI C30 digital signal processors (DSPs) and their precursors during the 1980s and early 1990s. Radio engineers at Roke Manor in the UK and at an organization in Germany had recognized the benefits of ADC at the RF in parallel. Mitola's publication of software radio in the IEEE opened the concept to the broad community of radio engineers. His May 1995 special issue of the IEEE Communications Magazine with the cover "Software Radio" was regarded as a watershed event with thousands of academic citations. Mitola was introduced by Joao da Silva in 1997 at the First International Conference on Software Radio as "godfather" of software radio in no small part for his willingness to share such a valuable technology "in the public interest".
Perhaps the first software-based radio transceiver was designed and implemented by Peter Hoeher and Helmuth Lang at the German Aerospace Research Establishment (DLR, formerly DFVLR) in Oberpfaffenhofen, Germany, in 1988. Both transmitter and receiver of an adaptive digital satellite modem were implemented according to the principles of a software radio, and a flexible hardware periphery was proposed.
In 1995, Stephen Blust coined the term "software defined radio", publishing a request for information from Bell South Wireless at the first meeting of the Modular Multifunction Information Transfer Systems (MMITS) forum in 1996 (in 1998 the name was changed to the Software Defined Radio Forum), organized by the USAF and DARPA around the commercialization of their SpeakEasy II program. Mitola objected to Blust's term, but finally accepted it as a pragmatic pathway towards the ideal software radio. Although the concept was first implemented with an IF ADC in the early 1990s, software-defined radios have their origins in the U.S. and European defense sectors of the late 1970s (for example, Walter Tuttlebee described a VLF radio that used an ADC and an 8085 microprocessor), about a year after the First International Conference in Brussels. One of the first public software radio initiatives was the U.S. DARPA-Air Force military project named SpeakEasy. The primary goal of the SpeakEasy project was to use programmable processing to emulate more than 10 existing military radios, operating in frequency bands between 2 and 2000 MHz. Another SpeakEasy design goal was to be able to easily incorporate new coding and modulation standards in the future, so that military communications could keep pace with advances in coding and modulation techniques.
In 1997, Blaupunkt introduced the term "DigiCeiver" for their new range of DSP-based tuners with Sharx in car radios such as the Modena & Lausanne RD 148.
SpeakEasy phase I
From 1990 to 1995, the goal of the SpeakEasy program was to demonstrate a radio for the U.S. Air Force tactical ground air control party that could operate from 2 MHz to 2 GHz, and thus could interoperate with ground force radios (frequency-agile VHF, FM, and SINCGARS), Air Force radios (VHF AM), Naval Radios (VHF AM and HF SSB teleprinters) and satellites (microwave QAM). Some particular goals were to provide a new signal format in two weeks from a standing start, and demonstrate a radio into which multiple contractors could plug parts and software.
The project was demonstrated at the TF-XXI Advanced Warfighting Exercise, and met all of these goals in a non-production radio. There was some discontent with these early software radios' failure to adequately filter out-of-band emissions, their support for only the simplest interoperable modes of the existing radios, and their tendency to lose connectivity or crash unexpectedly. The cryptographic processor could not change context fast enough to keep several radio conversations on the air at once. The software architecture, though practical enough, bore no resemblance to any other. The SpeakEasy architecture was refined at the MMITS Forum between 1996 and 1999 and inspired the DoD integrated process team (IPT) for programmable modular communications systems (PMCS) to proceed with what became the Joint Tactical Radio System (JTRS).
The basic arrangement of the radio receiver used an antenna feeding an amplifier and down-converter (see frequency mixer) feeding an automatic gain control, which fed an analog-to-digital converter on a computer VMEbus with an array of digital signal processors (Texas Instruments C40s). The transmitter had digital-to-analog converters on the PCI bus feeding an up-converter (mixer) that led to a power amplifier and antenna. The very wide frequency range was divided into a few sub-bands with different analog radio technologies feeding the same analog-to-digital converters. This has since become a standard design scheme for wideband software radios.
SpeakEasy phase II
The goal was to get a more quickly reconfigurable architecture, i.e., several conversations at once, in an open software architecture, with cross-channel connectivity (the radio can "bridge" different radio protocols). The secondary goals were to make it smaller, cheaper, and weigh less.
The project produced a demonstration radio only fifteen months into a three-year research project. This demonstration was so successful that further development was halted, and the radio went into production with only a 4 MHz to 400 MHz range.
The software architecture identified standard interfaces for different modules of the radio: "radio frequency control" to manage the analog parts of the radio, "modem control" managed resources for modulation and demodulation schemes (FM, AM, SSB, QAM, etc.), "waveform processing" modules actually performed the modem functions, "key processing" and "cryptographic processing" managed the cryptographic functions, a "multimedia" module did voice processing, a "human interface" provided local or remote controls, there was a "routing" module for network services, and a "control" module to keep it all straight.
The modules are said to communicate without a central operating system. Instead, they send messages over the PCI computer bus to each other with a layered protocol.
As a military project, the radio strongly distinguished "red" (unsecured secret data) and "black" (cryptographically-secured data).
The project was the first known to use FPGAs (field-programmable gate arrays) for digital processing of radio data. The time to reprogram these was an issue limiting application of the radio. Today, the time to write a program for an FPGA is still significant, but the time to download a stored FPGA program is around 20 milliseconds. This means an SDR could change transmission protocols and frequencies in one fiftieth of a second, probably a tolerable interruption for that task.
2000s
The SpeakEasy SDR system of 1994 used a Texas Instruments TMS320C30 CMOS digital signal processor (DSP), along with several hundred integrated circuit chips, with the radio filling the back of a truck. By the late 2000s, the emergence of RF CMOS technology made it practical to scale down an entire SDR system onto a single mixed-signal system-on-a-chip, which Broadcom demonstrated with the BCM21551 processor in 2007. The Broadcom BCM21551 has practical commercial applications, for use in 3G mobile phones.
Military usage
United States
The Joint Tactical Radio System (JTRS) was a program of the US military to produce radios that provide flexible and interoperable communications. Examples of radio terminals that require support include hand-held, vehicular, airborne and dismounted radios, as well as base-stations (fixed and maritime).
This goal is achieved through the use of SDR systems based on an internationally endorsed open Software Communications Architecture (SCA). This standard uses CORBA on POSIX operating systems to coordinate various software modules.
The program is providing a flexible new approach to meet diverse soldier communications needs through software-programmable radio technology. All functionality and expandability are built upon the SCA.
The SCA, despite its military origin, is under evaluation by commercial radio vendors for applicability in their domains. The adoption of general-purpose SDR frameworks outside of military, intelligence, experimental, and amateur uses is, however, inherently hampered by the fact that civilian users can more easily settle for a fixed architecture optimized for a specific function, which is more economical in mass-market applications. Still, software-defined radio's inherent flexibility can yield substantial benefits in the longer run, once the fixed costs of implementing it have fallen enough to overtake the cost of iterated redesign of purpose-built systems. This explains the increasing commercial interest in the technology.
SCA-based infrastructure software and rapid development tools for SDR education and research are provided by the Open Source SCA Implementation Embedded (OSSIE) project. The Wireless Innovation Forum funded the SCA Reference Implementation (SCARI) project, an open-source implementation of the SCA specification that can be downloaded for free.
Amateur and home use
A typical amateur software radio uses a direct conversion receiver. Unlike direct conversion receivers of the more distant past, the mixer technologies used are based on the quadrature sampling detector and the quadrature sampling exciter.
The receiver performance of this line of SDRs is directly related to the dynamic range of the analog-to-digital converters (ADCs) utilized. Radio frequency signals are down-converted to the audio frequency band, which is sampled by a high-performance audio frequency ADC. First-generation SDRs used a 44 kHz PC sound card to provide ADC functionality. Newer software-defined radios use embedded high-performance ADCs that provide higher dynamic range and greater resistance to noise and RF interference.
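The dependence on converter resolution can be illustrated with the textbook formula for the signal-to-noise ratio of an ideal N-bit ADC (an idealized approximation; real converters fall short of it):

$$\mathrm{SNR}_{\mathrm{ideal}} \approx 6.02\,N + 1.76\ \mathrm{dB}$$

An ideal 16-bit converter thus offers roughly 98 dB of dynamic range, though in practice analog front-end noise and spurious responses dominate well before this theoretical limit is reached.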
A fast PC performs the digital signal processing (DSP) operations using software specific for the radio hardware. Several software radio implementations use the open source SDR library DttSP.
The SDR software performs all of the demodulation, filtering (both radio frequency and audio frequency), and signal enhancement (equalization and binaural presentation). Uses include every common amateur modulation: Morse code, single-sideband modulation, frequency modulation, amplitude modulation, and a variety of digital modes such as radioteletype, slow-scan television, and packet radio. Amateurs also experiment with new modulation methods: for instance, the DREAM open-source project decodes the COFDM technique used by Digital Radio Mondiale.
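As a sketch of how little code such demodulation can require (a minimal illustration in Python with NumPy, assuming complex baseband I/Q samples such as those produced by the downconversion sketch earlier):

import numpy as np

def am_demodulate(iq):
    """The AM envelope is simply the magnitude of each complex I/Q sample."""
    return np.abs(iq)

def fm_demodulate(iq, fs):
    """Polar discriminator: the instantaneous frequency is proportional
    to the phase change between consecutive complex samples."""
    phase_diff = np.angle(iq[1:] * np.conj(iq[:-1]))
    return phase_diff * fs / (2 * np.pi)  # frequency deviation in Hz

Broadly speaking, single-sideband audio can similarly be recovered as the real part of the filtered, frequency-shifted complex signal.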
There is a broad range of hardware solutions for radio amateurs and home use. There are professional-grade transceiver solutions, e.g. the Zeus ZS-1 or FlexRadio, home-brew solutions, e.g. PicAStar transceiver, the SoftRock SDR kit, and starter or professional receiver solutions, e.g. the FiFi SDR for shortwave, or the Quadrus coherent multi-channel SDR receiver for short wave or VHF/UHF in direct digital mode of operation.
RTL-SDR
Eric Fry discovered that some common low-cost DVB-T USB dongles built around the Realtek RTL2832U controller and a tuner such as the Elonics E4000 or the Rafael Micro R820T can be used as a wide-band (3 MHz) SDR receiver. Experiments proved the capability of this setup to analyze the Perseid meteor shower using Graves radar signals. The project is maintained at Osmocom.
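A minimal receive script for such a dongle might look like the following, assuming the third-party pyrtlsdr Python bindings and the librtlsdr driver are installed; the frequency and gain settings are illustrative:

from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.048e6   # Hz, within the dongle's usable bandwidth
sdr.center_freq = 100.0e6   # Hz, an arbitrary FM broadcast frequency
sdr.gain = 'auto'

# Read a block of complex I/Q samples for processing in software
samples = sdr.read_samples(256 * 1024)
sdr.close()

The returned samples can then be fed to demodulation code like the sketches above.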
HPSDR
The HPSDR (High Performance Software Defined Radio) project uses a 16-bit analog-to-digital converter that provides performance over the range 0 to comparable to that of a conventional analogue HF radio. The receiver will also operate in the VHF and UHF range using either mixer image or alias responses. Connection to a PC is provided by a USB 2.0 interface, although Ethernet could be used as well. The project is modular and comprises a backplane onto which other boards plug in. This allows experimentation with new techniques and devices without the need to replace the entire set of boards. An exciter provides of RF over the same range or into the VHF and UHF range using image or alias outputs.
WebSDR
WebSDR is a project initiated by Pieter-Tjerk de Boer providing access via browser to multiple SDR receivers worldwide covering the complete shortwave spectrum. De Boer has analyzed Chirp Transmitter signals using the coupled system of receivers.
KiwiSDR
KiwiSDR is, like WebSDR, an SDR accessed via a web browser. Unlike WebSDR, its frequency coverage is limited to 3 Hz to 30 MHz (ELF to HF).
Other applications
On account of its increasing accessibility, with lower-cost hardware, more software tools, and better documentation, the applications of SDR have expanded beyond their primary and historic use cases. SDR is now being used in areas such as wildlife tracking, radio astronomy, medical imaging research, and art.
| Technology | Broadcasting | null |
11383147 | https://en.wikipedia.org/wiki/Room | Room | In a building or a ship, a room is any enclosed space within a number of walls to which entry is possible only via a door or other dividing structure. The entrance connects it to either a passageway, another room, or the outdoors. The space is typically large enough for several people to move about. The size, fixtures, furnishings, and sometimes placement of the room within the building or ship (or sometimes a train) support the activity to be conducted in it.
History
Historically, the use of rooms dates at least to early Minoan cultures about 2200 BC, where excavations at Akrotiri on Santorini reveal clearly defined rooms within certain structures.
In early structures, the different room types could be identified to include bedrooms, kitchens, bathing rooms, closets, reception rooms, and other specialized uses. The aforementioned Akrotiri excavations reveal rooms sometimes built above other rooms connected by staircases, and bathrooms with alabaster appliances such as washbasins, bathing tubs, and toilets, all connected to an elaborate twin plumbing system of ceramic pipes carrying cold and hot water separately. Ancient Rome manifested very complex building forms with a variety of room types, including some of the earliest examples of rooms for indoor bathing. The Anasazi civilization also had an early complex development of room structures, probably the oldest in North America, while the Maya of Central America had very advanced room configurations as early as the first few centuries AD. By at least the early Han dynasty in China (approximately 200 BC), complex multi-level building forms emerged, particularly for religious and public purposes; these designs featured many-roomed structures and included vertical connections of rooms.
Types of rooms
Work rooms
Some rooms were specially designed to support the work of the household, such as kitchens, pantries, and root cellars, all of which were intended for the preparation and storage of food. A home office or study may be used for household paperwork or external business purposes. Some work rooms are designated by the intended activity: for example, a sewing room is used for sewing, and the laundry room is used for washing and ironing laundry.
Other rooms are meant to promote comfort and cleanliness, such as the toilet and bathroom, which may be combined or kept in separate rooms. The public equivalent is the restroom, which usually features a toilet and handwashing facilities, but not usually a shower or bathtub; public showers are generally available only in athletic or aquatic facilities, which feature a changing room.
In the 17th, 18th, and 19th centuries, among those who could afford it, these facilities were kept in separate areas. The kitchen was detached from the main part of the house, or later put in the basement, to reduce the risk of fire and keep the heat and smell of cooking away from the main house during the warm months. The toilet, often a simple pit latrine, was put in an outhouse or privy, to keep the smell and insects away from the main house.
Social rooms
A variety of room types have been distinguished over time, the main purpose of which was socializing with other people.
In previous centuries, very large homes often featured a great hall. This room was so named for its size rather than any particular excellence. It was originally a public room, most likely seen in the main home of a noble estate, where people who had business with the local landowner or his household could meet. As the largest room, it could also be used as a dining room for large banquets, or cleared of tables, provided with music, and turned into a ballroom. Off the side, or in a different part of the house, might be a drawing room, a more private room where the owner's family and their friends could talk.
A sitting room, living room, or parlour is a place for social visits and entertainment. One decorated to appeal to a man might be called a man cave; in an older style, the cabinet was used by men who wanted a separate room. Some large homes have special rooms for entertainment; these may include a library, a home theater, a billiard room, a game room, or a music room.
Sleeping room
A bedroom is the room where a bed is located, and whose primary purpose is sleeping. A master bedroom may have an en suite bathroom. A guest room is a bedroom used primarily by overnight guests. The nursery is a bedroom for babies or young children. It may be separate from the playroom, which is a room where the children's toys are kept.
Bedrooms may be used for other purposes. A large house might have separate rooms for these other functions, such as a dressing room for changing clothes (also seen in clothing stores and businesses where people need to change clothes, but do not need to sleep). In Tudor times, a bedroom might have a separate closet, for praying and seeking privacy; this architectural idea lives on in the storage closet.
In the United Kingdom, many houses are built to contain a box-room (box room or boxroom) that is easily identifiable, being smaller than the others. The small size of these rooms limits their use, and they tend to be used as a small single bedroom, small child's bedroom, or as a storage room. Other box rooms may house a live-in domestic worker. Traditionally, and often seen in country houses and larger suburban houses up until the 1930s in Britain, the box room was for the storage of boxes, trunks, portmanteaux, and the like, rather than for bedroom use. In Ireland, a return room is a box room added between floors at the turn ("return") of a staircase. Return rooms may be added as extensions, and are sometimes used or converted for other functions such as a kitchen or bathroom.
A sick room is a specialized room, sometimes just large enough to contain a bed, where a family member could be conveniently tended and kept separate from the rest of the household while recuperating from an illness.
Multi-purpose rooms
In smaller homes, most rooms were multi-purpose. In a bedsit, communal apartment, or studio apartment, a single main room may serve most functions, except usually the toilet and bath. Types of multi-purpose rooms include the great room, which removes most walls and doors between the kitchen, dining and living rooms, to create one larger, open area.
In some places, a lady's boudoir was a combination sleeping room and place to entertain small numbers of friends. In others, the boudoir was an anteroom before her bedroom.
En-suite room
An en-suite room is a private room with its own washroom and access to a communal kitchen. The washroom generally includes an en-suite shower, a sink, and a toilet. "En-suite" usually indicates a private space, especially in student accommodation; en-suite rooms for students are intended to provide study space and a peaceful environment.
| Technology | Architectural elements | null |
9940234 | https://en.wikipedia.org/wiki/Tomato | Tomato | The tomato (, ), Solanum lycopersicum, is a plant whose fruit is an edible berry that is eaten as a vegetable. The tomato is a member of the nightshade family that includes tobacco, potato, and chili peppers. It originated from and was domesticated in western South America. It was introduced to the Old World by the Spanish in the Columbian exchange in the 16th century.
Tomato plants are vines, largely annual and vulnerable to frost, though sometimes living longer in greenhouses. The flowers are able to self-fertilise. Modern varieties have been bred to ripen uniformly red, in a process that has impaired the fruit's sweetness and flavor. There are thousands of cultivars, varying in size, color, shape, and flavor. Tomatoes are attacked by many insect pests and nematodes, and are subject to diseases caused by viruses and by mildew and blight fungi.
The tomato has a strong savoury umami flavor, and is an important ingredient in cuisines around the world. It is used in pizzas, pasta and other sauces, soups such as gazpacho, curries including dhansak and rogan josh, as juice, and in Bloody Mary cocktails. Tomato festivals are held annually in Buñol, Spain, in Reynoldsburg, Ohio, and in Närpes, Finland.
Naming
Etymology
The word tomato comes from the Spanish tomate, which in turn comes from the Nahuatl word tomatl. The specific name lycopersicum, meaning 'wolf peach', originated with Galen, who used it to denote a plant that has never been identified. Luigi Anguillara speculated in the 16th century that Galen's lycopersicum might be the tomato, and despite the impossibility of this identification, lycopersicum entered scientific use as a name for the fruit.
Pronunciation
The usual pronunciations of tomato are (in North American English) and (in British English). The word's dual pronunciations were immortalized in Ira and George Gershwin's 1937 song "Let's Call the Whole Thing Off" ("You like and I like / You like and I like ").
History
The likely wild ancestor of the tomato, the red-fruited Solanum pimpinellifolium, is native to western South America, where it was probably first domesticated. The resulting domesticated plant, ancestral to the modern large-fruited tomato varieties, was probably the cherry tomato, S. lycopersicum var. cerasiforme. However, genomic analysis suggests that the domestication process may have been more complex: S. lycopersicum var. cerasiforme may have existed before domestication, while traits supposedly typical of domestication may have been reduced in that variety and then reselected (in a case of convergent evolution) in the cultivated tomato. The analysis predicts that var. cerasiforme appeared around 78,000 years ago, while the cultivated tomato originated around 7,000 years ago (5,000 BCE), with substantial uncertainty, making it unclear how humans may have been involved in the process.
The Spanish first introduced tomatoes to Europe, where they became used in Spanish food. Elsewhere in Europe, its first use was ornamental, not least because it was understood to be related to the nightshades and assumed to be poisonous.
Mesoamerica
The exact date of domestication is unknown; by 500 BC, it was already being cultivated in southern Mexico and probably other areas. The Pueblo people believed that tomato seeds could confer powers of divination. The large, lumpy variety of tomato, a mutation from a smoother, smaller fruit, originated in Mesoamerica, and may be the direct ancestor of some modern cultivated tomatoes.
The Aztecs raised several varieties of tomato, with red tomatoes called xitomatl and green tomatoes (physalis) called tomatl (tomatillo). Bernardino de Sahagún reported seeing a great variety of tomatoes in the Aztec market at Tenochtitlán (Mexico City): "large tomatoes, small tomatoes, leaf tomatoes, sweet tomatoes, large serpent tomatoes, nipple-shaped tomatoes", and tomatoes of all colors from the brightest red to the deepest yellow. Sahagún mentioned Aztecs cooking various sauces, some with tomatoes of different sizes, serving them in city markets: "foods sauces, hot sauces; ... with tomatoes, ... sauce of large tomatoes, sauce of ordinary tomatoes, ..."
Spanish distribution
The Spanish conquistador Hernán Cortés's capture of Tenochtitlan in 1521 initiated the widespread cultural and biological interchange called the Columbian exchange; certainly the tomato was being grown in Europe within a few years of that event. The earliest discussion of the tomato in European literature appeared in Pietro Andrea Mattioli's 1544 herbal. He suggested that a new type of eggplant had been brought to Italy that was blood red or golden in color when mature, and could be divided into segments and eaten like an eggplant—that is, cooked and seasoned with salt, black pepper, and oil. Ten years later Mattioli named the fruits in print as pomi d'oro, or "golden apples".
After the Spanish colonization of the Americas, the Spanish distributed the tomato throughout their colonies in the Caribbean. They brought it to the Philippines, from where it spread to southeast Asia and then the whole of Asia.
The Spanish brought the tomato to Europe, where it grew easily in Mediterranean climates; cultivation began in the 1540s. It was probably eaten shortly after it was introduced, and was certainly being used as food by the early 17th century in Spain, as documented in the 1618 play La octava maravilla by Lope de Vega with "lovelier than ... a tomato in season".
China
The tomato was introduced to China, likely via the Philippines or Macau, in the 16th century. It was given the name 番茄 ("foreign eggplant"), using the prefix by which the Chinese named many foodstuffs introduced from abroad, one that refers specifically to early introductions.
Italy
In 1548, the house steward of Cosimo de' Medici, the grand duke of Tuscany, wrote to the Medici private secretary informing him that the basket of tomatoes sent from the grand duke's Florentine estate at Torre del Gallo "had arrived safely". Tomatoes were grown mainly as ornamentals early on after their arrival in Italy. For example, the Florentine aristocrat Giovanvettorio Soderini wrote how they "were to be sought only for their beauty", and were grown only in gardens or flower beds. The tomato's ability to mutate and create new and different varieties helped contribute to its success and spread throughout Italy. However, in areas where the climate supported growing tomatoes, their habit of growing close to the ground suggested low status. They were not adopted as a staple of the peasant population because they were not as filling as other crops. Additionally, both toxic and inedible varieties discouraged many people from attempting to consume or prepare any other varieties. In certain areas of Italy, such as Florence, the fruit was used solely as a tabletop decoration, until it was incorporated into the local cuisine in the late 17th or early 18th century. The earliest discovered cookbook with tomato recipes was published in Naples in 1692, though the author had apparently obtained these recipes from Spanish sources.
Varieties were developed over the following centuries for drying, for sauce, for pizzas, and for long-term storage. These varieties are usually known for their place of origin as much as by a variety name. For example, there is the , the "hanging tomato of Vesuvius", and the well known and highly prized San Marzano tomato grown in that region, with a European protected designation of origin certification.
Britain
Tomatoes were not grown in England until the 1590s. One of the earliest cultivators was John Gerard, a barber-surgeon. Gerard's Herbal, published in 1597 and largely plagiarized from continental sources, is also one of the earliest discussions of the tomato in England. Gerard knew the tomato was eaten in Spain and Italy; nonetheless, he believed it was poisonous. Gerard's views were influential, and the tomato was considered unfit for eating for many years in Britain and its North American colonies. By 1820, tomatoes were described as "to be seen in great abundance in all our vegetable markets" and "used by all our best cooks"; reference was made to their cultivation in gardens still "for the singularity of their appearance", while their use in cooking was associated with exotic Italian or Jewish cuisine. For example, in Elizabeth Blackwell's A Curious Herbal, it is described under the name "Love Apple" as being consumed with oil and vinegar in Italy, similar to the consumption of cucumbers in the UK. In 1963, The New York Times explained the name 'Love Apple' as a French misreading of the Italian pomo dei mori ("the Moors' apple") as pomme d'amour ("apple of love").
Middle East
The tomato was introduced to cultivation in the Middle East by John Barker, British consul in Aleppo. Nineteenth-century descriptions of its consumption uniformly treat it as an ingredient in a cooked dish; in 1881, it was described as only eaten in the region "within the last forty years".
United States
The earliest reference to tomatoes being grown in British North America is from 1710, when herbalist William Salmon saw them in what is today South Carolina, perhaps introduced from the Caribbean. By the mid-18th century, they were cultivated on some Carolina plantations, and probably in other parts of the Southeast. Thomas Jefferson, who ate tomatoes in Paris, sent some seeds back to America. Some early American advocates of the culinary use of the tomato included Michele Felice Cornè and Robert Gibbon Johnson. Many Americans considered tomatoes to be poisonous at this time and, in general, they were grown more as ornamental plants than as food. In 1897, W. H. Garrison stated, "The belief was once transmitted that the tomato was sinisterly dangerous." He recalled in his youth tomatoes were dubbed "love-apples or wolf-apples" and shunned as "globes of the devil".
When Alexander W. Livingston (1821–1898) began developing the tomato as a commercial crop, his aim had been to grow tomatoes smooth in contour, uniform in size, and sweet in flavor. He eventually developed over seventeen varieties. The U.S. Department of Agriculture's 1937 yearbook declared that "half of the major varieties were a result of the abilities of the Livingstons to evaluate and perpetuate superior material in the tomato." Livingston's first breed of tomato, the Paragon, was introduced in 1870. In 1875, he introduced the Acme, said to be in the parentage of most cultivars for the next twenty-five years. Other early breeders included Henry Tilden in Iowa and a Dr. Hand in Baltimore.
Because of the tomato's need for heat and a long growing season, several states in the Sun Belt became major producers, particularly Florida and California. In California, tomatoes are grown under irrigation for both the fresh market and for canning and processing. The University of California, Davis's C.M. Rick Tomato Genetics Resource Center maintains a gene bank of wild relatives, monogenic mutants and genetic stocks. Research on processing tomatoes is also conducted by the California Tomato Research Institute in Escalon, California. In California, growers have used a method of cultivation called dry-farming, especially with Early Girl tomatoes. This technique encourages the plant to send roots deep to find existing moisture.
Botany
Description
Tomato plants are vines, becoming decumbent, and can grow up to ; bush varieties are generally no more than tall. They are tender perennials, often grown as annuals.
Tomato plants are dicots. They grow as a series of branching stems, with a terminal bud at the tip that does the actual growing. When the tip eventually stops growing, whether because of pruning or flowering, lateral buds take over and grow into new, fully functional, vines.
Tomato vines are typically pubescent, meaning covered with fine short hairs. The hairs facilitate the vining process, turning into roots wherever the plant is in contact with the ground and moisture, especially if the vine's connection to its original root has been damaged or severed.
The leaves are long, odd pinnate, with five to nine leaflets on petioles, each leaflet up to long, with a serrated margin; both the stem and leaves are densely glandular-hairy.
Tomato flowers are bisexual and able to self-fertilize. As tomatoes were moved from their native areas, their traditional pollinators (probably a species of halictid bee) did not move with them. The trait of self-fertility became an advantage, and domestic cultivars of tomato have been selected to maximize it. Self-fertility is not the same as self-pollination, despite the common claim that tomatoes pollinate themselves; that tomatoes pollinate themselves poorly without outside aid is clearly shown in greenhouse situations, where pollination must be aided by artificial wind, vibration of the plants, or by cultured bumblebees.
The flowers develop on the apical meristem. They have the anthers fused along the edges, forming a column surrounding the pistil's style. The anthers bend into a cone-like structure, surrounding the stigma. The flowers are across, yellow, with five pointed lobes on the corolla; they are borne in a cyme of three to twelve together.
The fruit develops from the ovary of the plant after fertilization, its flesh comprising the pericarp walls. The fruit contains locules, hollow spaces full of seeds. These vary among cultivated varieties. Some smaller varieties have two locules; globe-shaped varieties typically have three to five; beefsteak tomatoes have a great number of small locules; and plum tomatoes have very few, very small locules.
For propagation, the seeds need to come from a mature fruit, and must be lightly fermented to remove the gelatinous outer coating and then dried before use.
The tomato has a mutualistic relationship with arbuscular mycorrhizal fungi such as Rhizophagus irregularis. Scientists use the tomato as a model species for investigating such symbioses.
Phylogeny
Like the potato, tomatoes belong to the genus Solanum, which is a member of the nightshade family, the Solanaceae. That is a diverse family of flowering plants, often poisonous, that includes the mandrake (Mandragora), deadly nightshade (Atropa), and tobacco (Nicotiana), as shown in the outline phylogenetic tree (many branches omitted).
Taxonomy
In 1753, Linnaeus placed the tomato in the genus Solanum (alongside the potato) as Solanum lycopersicum. In 1768, Philip Miller moved it to its own genus, naming it Lycopersicon esculentum. The name came into wide use, but was technically in breach of the plant naming rules because Linnaeus's species name lycopersicum still had priority. Although the name Lycopersicum lycopersicum was suggested by Karsten (1888), it is not used because it violates the International Code of Nomenclature barring the use of tautonyms in botanical nomenclature. The corrected name Lycopersicon lycopersicum (Nicolson 1974) was technically valid, because Miller's genus name and Linnaeus's species name differ in exact spelling. As Lycopersicon esculentum has become so well known, it was officially listed as a nomen conservandum in 1983, and would be the correct name for the tomato in classifications which do not place the tomato in the genus Solanum.
Genetic evidence shows that Linnaeus was correct to put the tomato in the genus Solanum, making S. lycopersicum the correct name. Both names, however, will probably be found in the literature for some time. Two of the major reasons for considering the genera separate are the leaf structure (tomato leaves are markedly different from any other Solanum), and the biochemistry (many of the alkaloids common to other Solanum species are conspicuously absent from the tomato). On the other hand, hybrids of tomato and diploid potato can be created in the lab by somatic fusion, and are partially fertile, providing evidence of the close relationship between these species.
Plant breeding
Genetics
An international consortium of researchers from 10 countries began sequencing the tomato genome in 2004, and a prerelease version of the genome was made available in December 2009. The complete genome for the cultivar Heinz 1706 was published on 31 May 2012 in Nature. The latest reference genome, published in 2021, spans 799 megabases and encodes 34,384 predicted proteins across 12 chromosomes.
The first commercially available genetically modified food was a tomato called Flavr Savr, which was engineered to have a longer shelf life. It could be vine ripened without compromising shelf life. However, the product was not commercially successful, and was sold only until 1997.
Breeding of modern commercial varieties
The poor taste and lack of sugar in modern garden and commercial tomato varieties resulted from breeding tomatoes to ripen uniformly red. This change occurred after discovery of a mutant "u" phenotype in the mid-20th century, so named because the fruits ripened uniformly. This was widely cross-bred to produce red fruit without the typical green ring around the stem on un-crossbred varieties. Before this, most tomatoes produced more sugar during ripening, and were sweeter and more flavorful.
In the normal "U" phenotype, photosynthesis in the developing fruit produces 10–20% of the total carbon fixed in the fruit. The u mutation encodes a factor that produces defective, lower-density chloroplasts in developing fruit, making them a lighter green and reducing sugar in the resulting ripe fruit by 10–15%. Perhaps more importantly, the fruit chloroplasts are remodelled during ripening into chlorophyll-free chromoplasts that synthesize and accumulate the carotenoids lycopene, β-carotene, and other metabolites that are sensory and nutritional assets of the ripe fruit. The potent chloroplasts in the dark-green shoulders of the "U" phenotype are beneficial here, but have the disadvantage of leaving green, or even cracked yellow, shoulders near the stems of the ripe fruit, apparently because of oxidative stress from overload of the photosynthetic chain in direct sunlight at high temperatures. Genetic design of a commercial variety that combines the advantages of the "u" and "U" types hence requires fine tuning, but may be feasible.
Breeders strive to produce tomato plants with improved yield, shelf life, size, and resistance to environmental pressures, including disease. These efforts have had unintended negative consequences on various fruit attributes. For instance, linkage drag, the introduction during backcrossing of an undesired trait whose gene lies physically close to the desired allele on the chromosome, has altered the metabolism of the fruit. Breeding for traits like larger fruit has thus unintentionally altered nutritional value and flavor.
Breeders have turned to wild tomato species as a source of alleles to introduce beneficial traits into modern varieties. For example, wild relatives may possess higher amounts of fruit solids (associated with greater sugar content), or resistance to diseases such as to the early blight pathogen Alternaria solani. However, this tactic has limitations, since selection for traits such as pathogen resistance can negatively impact other favorable traits such as fruit production.
Cultivation
The tomato is grown worldwide for its edible fruits, with thousands of cultivars.
Hydroponic and greenhouse cultivation
Greenhouse tomato production in large-acreage commercial greenhouses and owner-operator stand-alone or multiple-bay greenhouses is increasing, providing fruit during those times of the year when field-grown fruit is not readily available. Smaller fruit (cherry and grape), or cluster tomatoes (fruit-on-the-vine) are the fruit of choice for the large commercial greenhouse operators while the beefsteak varieties are the choice of owner-operator growers. Tomatoes are also grown using hydroponics.
Picking and ripening
To facilitate transportation and storage, tomatoes are often picked unripe (green) and ripened in storage with the plant hormone ethylene.
At industrial scale, such as for canning, tomatoes are picked mechanically. The machine cuts the whole vine and uses sensors to separate ripe tomatoes from the rest of the plant, which is returned to the farm for use either as green manure or to be grazed by livestock.
Production
In 2022, world production of tomatoes was 186 million tonnes, with China accounting for 37% of the total, followed by India, Turkey, and the United States as major producers (table). The world dedicated 4.8 million hectares in 2012 for tomato cultivation and the total production was about 161.8 million tonnes. The average world farm yield for tomato was 33.6 tonnes per hectare in 2012. Tomato farms in the Netherlands were the most productive in 2012, with a nationwide average of 476 tonnes per hectare, followed by Belgium (463 tonnes per hectare) and Iceland (429 tonnes per hectare).
Pests and diseases
Pests
Common tomato pests include the tomato bug, stink bugs, cutworms, tomato hornworms and tobacco hornworms, aphids, cabbage loopers, whiteflies, tomato fruitworms, flea beetles, red spider mite, slugs, and Colorado potato beetles. The tomato russet mite, Aculops lycopersici, feeds on foliage and young fruit of tomato plants, causing shrivelling and necrosis of leaves, flowers, and fruit, possibly killing the plant.
After an insect attack tomato plants produce systemin, a plant peptide hormone. This activates defensive mechanisms, such as the production of protease inhibitors to slow the growth of insects. The hormone was first identified in tomatoes.
Diseases
Tomato cultivars vary widely in their resistance to disease. Modern hybrids focus on improving disease resistance over the heirloom plants. A common tomato disease is tobacco mosaic virus. Handling cigarettes and other infected tobacco products can transmit the virus to tomato plants.
A serious disease is curly top, carried by the beet leafhopper, which interrupts the plant's lifecycle. As the name implies, it makes the top leaves of the plant wrinkle up and grow abnormally.
Bacterial wilt is another common disease impacting yield. Wang et al., 2019 found that phage combination therapies reduce the impact of bacterial wilt, sometimes by reducing bacterial abundance and sometimes by selecting for resistant but slow-growing strains.
As food
Culinary
Tomatoes, with their umami flavor, are extensively used in Mediterranean cuisine as a key ingredient in pizza and many pasta sauces. Tomatoes are used in Spanish gazpacho and Catalan pa amb tomàquet. The tomato is a crucial and ubiquitous part of Middle Eastern cuisine, served fresh in salads (e.g., Arab salad, Israeli salad, Shirazi salad and Turkish salad), grilled with kebabs and other dishes, made into sauces, and so on.
Tomatoes were gradually incorporated into Indian curry dishes after Europeans introduced them. A Kashmiri curry, rogan josh, often contains tomato; it may originally have been colored red with chili pepper, and tomatoes may characterize the Punjabi version of the dish. The modern British curry tikka masala often has a tomato and cream sauce.
Storage
Tomatoes keep best unwashed at room temperature and out of direct sunlight, rather than in a refrigerator. Storing stem down can prolong shelf life. Unripe tomatoes can be kept in a paper bag to ripen. Tomatoes can be preserved by canning, freezing, drying, or cooking down to a paste or puree.
Nutrition
A raw tomato is 95% water, 4% carbohydrates, and less than 1% each of fat and protein (table). In a reference amount of , raw tomatoes supply 18 calories and 16% of the Daily Value of vitamin C, but otherwise have low micronutrient content (table).
Effects on health
The US Food and Drug Administration has determined there is little credible evidence that tomatoes or tomato-based foods reduce the risk of various types of cancer.
In a 2011 scientific review, the European Food Safety Authority concluded that lycopene did not favorably influence DNA, skin exposed to ultraviolet radiation, heart function or vision.
Toxins
The leaves, stem, and green unripe fruit of the tomato plant contain small amounts of the alkaloid tomatine. They contain small amounts of solanine, a toxic alkaloid found in larger amounts in potato leaves and other members of the nightshade family. Tomato plants can be toxic to dogs if they eat large amounts of the fruit, or chew plant material.
Small amounts of tomato foliage are sometimes used for flavoring, and the green fruit of unripe red tomato varieties is sometimes used for cooking, particularly as fried green tomatoes.
Salmonella outbreaks
Tomatoes have been linked to multiple Salmonella food poisoning outbreaks in the US. One in 2008 caused the temporary removal of tomatoes from stores and restaurants across the United States and parts of Canada. In 2022 and 2023, an outbreak of Salmonella Senftenberg ST14 affected the US and 12 countries in Europe.
In popular culture
Celebrations
A massive "tomato tree" in the Walt Disney World Resort's experimental greenhouses in Lake Buena Vista, Florida may have been the largest single tomato plant. It yielded a harvest of more than 32,000 tomatoes, together weighing .
The town of Buñol, Spain, annually celebrates La Tomatina, a festival centered on an enormous tomato fight. On 30 August 2007, as many as 40,000 Spaniards gathered to throw of tomatoes at each other in the festival.
Some US states have adopted the tomato as a state fruit or vegetable. Arkansas took both sides by declaring the South Arkansas Vine Ripe Pink Tomato both the state fruit and the state vegetable in the same law, citing both its culinary and botanical classifications. In 2009, the state of Ohio passed a law making the tomato the state's official fruit, while tomato juice has been the state's official beverage since 1965.
Livingston's plant breeding is commemorated in his home town of Reynoldsburg with an annual Tomato Festival; it calls itself "The Birthplace of the Tomato". In Finland, the Tomatkarnevalen is held annually in the town of Närpes.
Tomatoes are sometimes thrown in public protests. Embracing it for this connotation, the Dutch Socialist party adopted the tomato as their logo. The same meaning is evoked in the name of the American review-aggregation website for film and television, "Rotten Tomatoes", though its founder mentions a scene in the 1992 movie Leolo as the immediate source of the name.
Fruit or vegetable
Although the tomato is cooked and eaten as a vegetable, botanically, a tomato is a fruit, specifically a berry, consisting of the ovary, together with its seeds, of a flowering plant. The issue has led to legal dispute in the United States. In 1887, U.S. tariff laws that imposed a duty on vegetables, but not on fruit, caused the tomato's status to become a matter of legal importance. In Nix v. Hedden, the U.S. Supreme Court settled the controversy on 10 May 1893, by declaring that for the purposes of the Tariff of 1883 only, the tomato is a vegetable, based on the popular definition that classifies vegetables by use—they are generally served with dinner and not dessert.
| Biology and health sciences | Solanales | null |
19006979 | https://en.wikipedia.org/wiki/Mac%20%28computer%29 | Mac (computer) | Mac is a family of personal computers designed and marketed by Apple since 1984. The name is short for Macintosh (its official name until 1999), a reference to a type of apple called McIntosh. The current product lineup includes the MacBook Air and MacBook Pro laptops, and the iMac, Mac Mini, Mac Studio, and Mac Pro desktops. Macs are sold with Apple's proprietary macOS operating system, which is not licensed to other manufacturers and exclusively bundled with Mac computers.
Jef Raskin conceived the Macintosh project in 1979, which was usurped and redefined by Apple co-founder Steve Jobs in 1981. The original Macintosh was launched in January 1984, after Apple's "1984" advertisement during Super Bowl XVIII. A series of incrementally improved models followed, sharing the same integrated case design. In 1987, the Macintosh II brought color graphics, but it was priced as a professional workstation rather than a personal computer. Beginning in 1994 with the Power Macintosh, the Mac transitioned from Motorola 68000 series processors to PowerPC. Macintosh clones by other manufacturers were also briefly sold afterwards. The line was refreshed in 1998 with the launch of iMac G3, reinvigorating the line's competitiveness against commodity IBM PC compatibles. Macs transitioned to Intel x86 processors by 2006 along with new sub-product lines MacBook and Mac Pro. Since 2020, Macs have transitioned to Apple silicon chips based on ARM64.
History
1979–1996: "Macintosh" era
In the late 1970s, the Apple II became one of the most popular computers, especially in education. After IBM introduced the IBM PC in 1981, its sales surpassed the Apple II. In response, Apple introduced the Lisa in 1983. The Lisa's graphical user interface was inspired by strategically licensed demonstrations of the Xerox Star. Lisa surpassed the Star with intuitive direct manipulation, like the ability to drag and drop files, double-click to launch applications, and move or resize windows by clicking and dragging instead of going through a menu. However, hampered by its high price of and lack of available software, the Lisa was commercially unsuccessful.
Parallel to the Lisa's development, a skunkworks team at Apple was working on the Macintosh project. Conceived in 1979 by Jef Raskin, Macintosh was envisioned as an affordable, easy-to-use computer for the masses. Raskin named the computer after his favorite type of apple, the McIntosh. The initial team consisted of Raskin, hardware engineer Burrell Smith, and Apple co-founder Steve Wozniak. In 1981, Steve Jobs was removed from the Lisa team and joined Macintosh, and was able to gradually take control of the project due to Wozniak's temporary absence after an airplane crash. Under Jobs, the Mac grew to resemble the Lisa, with a mouse and a more intuitive graphical interface, at a quarter of the Lisa's price.
Upon its January 1984 launch, the first Macintosh was described as "revolutionary" by The New York Times. Sales initially met projections, but dropped due to the machine's low performance, its single floppy disk drive requiring frequent disk swapping, and an initial lack of applications. Author Douglas Adams said of it, "…what I (and I think everybody else who bought the machine in the early days) fell in love with was not the machine itself, which was ridiculously slow and underpowered, but a romantic idea of the machine. And that romantic idea had to sustain me through the realities of actually working on the 128K Mac." Most of the original Macintosh team left Apple, and some followed Jobs to found NeXT after he was forced out by CEO John Sculley. The first Macintosh nevertheless generated enthusiasm among buyers and some developers, who rushed to develop entirely new programs for the platform, including PageMaker, MORE, and Excel. Apple soon released the Macintosh 512K with improved performance and an external floppy drive. The Macintosh is credited with popularizing the graphical user interface, and Jobs's fascination with typography gave it an unprecedented variety of fonts and type styles like italics, bold, shadow, and outline. It was the first WYSIWYG computer, and due in large part to PageMaker and Apple's LaserWriter printer, it ignited the desktop publishing market, turning the Macintosh from an early let-down into a notable success. Writer Steven Levy called desktop publishing the Mac's "Trojan horse" in the enterprise market, as colleagues and executives tried these Macs and were seduced into requesting one for themselves. PageMaker creator Paul Brainerd said: "You would see the pattern. A large corporation would buy PageMaker and a couple of Macs to do the company newsletter. The next year you'd come back and there would be thirty Macintoshes. The year after that, three hundred."
In late 1985, Bill Atkinson, one of the few remaining employees to have been on the original Macintosh team, proposed that Apple create a Dynabook, Alan Kay's concept for a tablet computer that stores and organizes knowledge. Sculley rebuffed him, so he adapted the idea into a Mac program, HyperCard, whose cards store any information—text, image, audio, video—with the memex-like ability to semantically link cards together. HyperCard was released in 1987 and bundled with every Macintosh.
In the late 1980s, Jean-Louis Gassée, a Sculley protégé who had succeeded Jobs as head of the Macintosh division, made the Mac more expandable and powerful to appeal to tech enthusiasts and enterprise customers. This strategy led to the successful 1987 release of the Macintosh II, which appealed to power users and gave the lineup momentum. However, Gassée's "no-compromise" approach foiled Apple's first laptop, the Macintosh Portable, which has many uncommon power user features, but is almost as heavy as the original Macintosh at twice its price. Soon after its launch, Gassée was fired.
Since the Mac's debut, Sculley had opposed lowering the company's profit margins, and Macintoshes were priced far above entry-level MS-DOS compatible computers. Levy said that though Macintoshes were superior, the cheapest Mac cost almost twice as much as the cheapest IBM PC compatible. Sculley also resisted licensing the Mac OS to competing hardware vendors, who could have undercut Apple on pricing and jeopardized its hardware sales, as IBM PC compatibles had done to IBM. These early strategic steps cost the Macintosh its chance at becoming the dominant personal computer platform. Though senior management demanded high-margin products, a few employees disobeyed and set out to create a computer that would live up to the original Macintosh's slogan, "[a] computer for the rest of us", which the market clamored for. In Apple's early era, skunkworks projects like the Macintosh and Macintosh II typically lacked the backing of upper management, who were late to realize the projects' merit; this once-renegade project, by contrast, was endorsed by senior management following market pressures. In 1990 came the Macintosh LC and the more affordable Macintosh Classic, the first model under . Between 1984 and 1989, Apple had sold one million Macs, and another 10 million over the following five years.
In 1991, the Macintosh Portable was replaced with the smaller and lighter PowerBook 100, the first laptop with a palm rest and trackball in front of the keyboard. The PowerBook brought of revenue within one year, and became a status symbol. By then, the Macintosh represented 10% to 15% of the personal computer market. Fearing a decline in market share, Sculley co-founded the AIM alliance with IBM and Motorola to create a new standardized computing platform, which led to the creation of the PowerPC processor architecture, and the Taligent operating system. In 1992, Apple introduced the Macintosh Performa line, which "grew like ivy" into a disorienting number of barely differentiated models in an attempt to gain market share. This backfired by confusing customers, but the same strategy soon afflicted the PowerBook line. Michael Spindler continued this approach when he succeeded Sculley as CEO in 1993. He oversaw the Mac's transition from Motorola 68000 series to PowerPC and the release of Apple's first PowerPC machine, the well-received Power Macintosh.
Many new Macintoshes suffered from inventory and quality control problems. The 1995 PowerBook 5300 was plagued with quality problems, with several recalls as some units even caught fire. Pessimistic about Apple's future, Spindler repeatedly attempted to sell Apple to other companies, including IBM, Kodak, AT&T, Sun, and Philips. In a last-ditch attempt to fend off Windows, Apple yielded and started a Macintosh clone program, which allowed other manufacturers to make System 7 computers. However, this only cannibalized the sales of Apple's higher-margin machines. Meanwhile, Windows 95 was an instant hit with customers. Apple was struggling financially as its attempts to produce a System 7 successor had all failed with Taligent, Star Trek, and Copland, and its hardware was stagnant. The Mac was no longer competitive, and its sales entered a tailspin. Corporations abandoned Macintosh in droves, replacing it with cheaper and more technically sophisticated Windows NT machines for which far more applications and peripherals existed. Even some Apple loyalists saw no future for the Macintosh. Once the world's second largest computer vendor after IBM, Apple's market share declined precipitously from 9.4% in 1993 to 3.1% in 1997. Bill Gates was ready to abandon Microsoft Office for Mac, which would have slashed any remaining business appeal the Mac had. Gil Amelio, Spindler's successor, failed to negotiate a deal with Gates.
In 1996, Spindler was succeeded by Amelio, who searched for an established operating system to acquire or license for the foundation of a new Macintosh operating system. He considered BeOS, Solaris, Windows NT, and NeXT's NeXTSTEP, eventually choosing the last. Apple acquired NeXT on December 20, 1996, returning its co-founder, Steve Jobs.
1997–2011: Steve Jobs era
NeXT had developed the mature NeXTSTEP operating system with strong multimedia and Internet capabilities. NeXTSTEP was also popular among programmers, financial firms, and academia for its object-oriented programming tools for rapid application development. In an eagerly anticipated speech at the January 1997 Macworld trade show, Steve Jobs previewed Rhapsody, a merger of NeXTSTEP and Mac OS, as the foundation of Apple's new operating system strategy. At the time, Jobs served only as an advisor; Amelio was ousted in July 1997, and Jobs was formally appointed interim CEO in September, and permanent CEO in January 2000. To continue turning the company around, Jobs streamlined Apple's operations and began layoffs. He negotiated a deal with Bill Gates in which Microsoft committed to releasing new versions of Office for Mac for five years, investing $150 million in Apple, and settling an ongoing lawsuit in which Apple alleged that Windows had copied the Mac's interface. In exchange, Apple made Internet Explorer the default Mac browser. The deal was closed hours before Jobs announced it at the August 1997 Macworld.
Jobs returned focus to Apple. The Mac lineup had been incomprehensible, with dozens of hard-to-distinguish models. He streamlined it into four quadrants, a laptop and a desktop each for consumers and professionals. Apple also discontinued several Mac accessories, including the StyleWriter printer and the Newton PDA. These changes were meant to refocus Apple's engineering, marketing, and manufacturing efforts so that more care could be dedicated to each product. Jobs also stopped licensing Mac OS to clone manufacturers, which had cost Apple ten times more in lost sales than it received in licensing fees. Jobs made a deal with the largest computer reseller, CompUSA, to carry a store-within-a-store that would better showcase Macs and their software and peripherals. According to Apple, the Mac's share of computer sales in those stores went from 3% to 14%. In November, the online Apple Store launched with built-to-order Mac configurations without a middleman. When Tim Cook was hired as chief operations officer in March 1998, he closed Apple's inefficient factories and outsourced Mac production to Taiwan. Within months, he rolled out a new ERP system and implemented just-in-time manufacturing principles. This practically eliminated Apple's costly unsold inventory, and within one year, Apple had the industry's most efficient inventory turnover.
Jobs's top priority was "to ship a great new product". The first was the iMac G3, an all-in-one computer meant to make the Internet intuitive and easy to access. While PCs came in functional beige boxes, Jony Ive gave the iMac a radical and futuristic design, meant to make the product less intimidating. Its oblong case was made of translucent plastic in Bondi blue, later revised with many colors. Ive added a handle on the back to make the computer more approachable. Jobs declared the iMac would be "legacy-free", replacing ADB and SCSI with an infrared port and cutting-edge USB ports. Though USB had industry backing, it was still absent from most PCs, and USB 1.1 was only standardized one month after the iMac's release. He also controversially removed the floppy disk drive in favor of a CD drive. The iMac was unveiled in May 1998, and released in August. It was an immediate commercial success and became the fastest-selling computer in Apple's history, with 800,000 units sold before the year ended. Vindicating Jobs on the Internet's appeal to consumers, 32% of iMac buyers had never used a computer before, and 12% were switching from PCs. The iMac reestablished the Mac's reputation as a trendsetter: for the next few years, translucent plastic became the dominant design trend in numerous consumer products.
Apple knew it had lost its chance to compete in the Windows-dominated enterprise market, so it prioritized design and ease of use to make the Mac more appealing to average consumers, and even teens. The "Apple New Product Process" was launched as a more collaborative product development process for the Mac, with concurrent engineering principles. From then, product development was no longer driven primarily by engineering and with design as an afterthought. Instead, Ive and Jobs first defined a new product's "soul", before it was jointly developed by the marketing, engineering, and operations teams. The engineering team was led by the product design group, and Ive's design studio was the dominant voice throughout the development process.
The next two Mac products in 1999, the Power Mac G3 (nicknamed "Blue and White") and the iBook, introduced industrial designs influenced by the iMac, incorporating colorful translucent plastic and carrying handles. The iBook introduced several innovations: a strengthened hinge instead of a mechanical latch to keep it closed, ports on the sides rather than on the back, and the first laptop with built-in Wi-Fi. It became the best selling laptop in the U.S. during the fourth quarter of 1999. The professional-oriented Titanium PowerBook G4 was released in 2001, becoming the lightest and thinnest laptop in its class, and the first laptop with a wide-screen display; it also debuted a magnetic latch that secures the lid elegantly.
The design language of consumer Macs shifted again from colored plastics to white polycarbonate with the introduction of the 2001 Dual USB "Ice" iBook. To increase the iBook's durability, it eliminated doors and handles, and gained a more minimalistic exterior. Ive attempted to go beyond the quadrant with Power Mac G4 Cube, an innovation beyond the computer tower in a professional desktop far smaller than the Power Mac. The Cube failed in the market and was withdrawn from sale after one year. However, Ive considered it beneficial, because it helped Apple gain experience in complex machining and miniaturization.
The development of a successor to the old Mac OS was well underway. Rhapsody had been previewed at WWDC 1997, featuring a Mach kernel and BSD foundations, a virtualization layer for old Mac OS apps (codenamed Blue Box), and an implementation of NeXTSTEP APIs called OpenStep (codenamed Yellow Box). Apple open-sourced the core of Rhapsody as the Darwin operating system. After several developer previews, Apple also introduced the Carbon API, which provided a way for developers to more easily make their apps native to Mac OS X without rewriting them in Yellow Box. Mac OS X was publicly unveiled in January 2000, introducing the modern Aqua graphical user interface, and a far more stable Unix foundation, with memory protection and preemptive multitasking. Blue Box became the Classic environment, and Yellow Box was renamed Cocoa. Following a public beta, the first version of Mac OS X, version 10.0 Cheetah, was released in March 2001.
In 1999, Apple launched its new "digital lifestyle" strategy of which the Mac became a "digital hub" and centerpiece with several new applications. In October 1999, the iMac DV gained FireWire ports, allowing users to connect camcorders and easily create movies with iMovie; the iMac gained a CD burner and iTunes, allowing users to rip CDs, make playlists, and burn them to blank discs. Other applications include iPhoto for organizing and editing photos, and GarageBand for creating and mixing music and other audio. The digital lifestyle strategy entered other markets, with the iTunes Store, iPod, iPhone, iPad, and the 2007 renaming from Apple Computer Inc. to Apple Inc. By January 2007, the iPod was half of Apple's revenues.
New Macs include the white "Sunflower" iMac G4. Ive designed a display to swivel with one finger, so that it "appear[ed] to defy gravity". In 2003, Apple released the aluminum 12-inch and 17-inch PowerBook G4, proclaiming the "Year of the Notebook". With the Microsoft deal expiring, Apple also replaced Internet Explorer with its new browser, Safari. The first Mac Mini was intended to be assembled in the U.S., but domestic manufacturers were slow and had insufficient quality processes, leading Apple to Taiwanese manufacturer Foxconn. The affordably priced Mac Mini desktop was introduced at Macworld 2005, alongside the introduction of the iWork office suite.
In 2001, at Steve Jobs's request, Bertrand Serlet and Avie Tevanian initiated a secret project to propose to Sony executives that Mac OS X be sold on Vaio laptops. They gave Sony a demonstration at a golf party in Hawaii, using the most expensive Vaio laptop they could acquire. The timing was bad, however: Sony declined, arguing that Vaio sales had only just begun to grow after years of difficulties.
Intel transition and "back to the Mac"
With PowerPC chips falling behind in performance, price, and efficiency, Steve Jobs announced in 2005 that the Mac would transition to Intel processors, a move made feasible because Mac OS X had been developed for both architectures from the beginning. PowerPC apps run under transparent Rosetta emulation, and Windows can boot natively using Boot Camp. This transition contributed to a few years of growth in Mac sales.
After the iPhone's 2007 release, Apple began a multi-year effort to bring many iPhone innovations "back to the Mac", including multi-touch gesture support, instant wake from sleep, and fast flash storage. At Macworld 2008, Jobs introduced the first MacBook Air by taking it out of a manila envelope, touting it as the "world's thinnest notebook". The MacBook Air favors wireless technologies over physical ports, and lacks FireWire, an optical drive, and a replaceable battery. The Remote Disc feature accesses discs in other networked computers. A decade after its launch, journalist Tom Warren wrote that the MacBook Air had "immediately changed the future of laptops", starting the ultrabook trend. OS X Lion added new software features first introduced with the iPad, such as FaceTime, full-screen apps, document autosaving and versioning, and a bundled Mac App Store to replace software install discs with online downloads. The Mac later gained support for Retina displays, which had been introduced earlier with the iPhone 4. iPhone-like multi-touch technology was progressively added to all MacBook trackpads, and to desktop Macs through the Magic Mouse and Magic Trackpad. The 2010 MacBook Air added an iPad-inspired standby mode, "instant-on" wake from sleep, and flash memory storage.
After criticism by Greenpeace, Apple improved the ecological performance of its products. The 2008 MacBook Air is free of toxic chemicals like mercury, bromide, and PVC, and with smaller packaging. The enclosures of the iMac and unibody MacBook Pro were redesigned with the more recyclable aluminum and glass.
On February 24, 2011, the MacBook Pro became the first computer to support Intel's new Thunderbolt connector, with two-way transfer speeds of 10 Gbit/s, and backward compatibility with Mini DisplayPort.
2012–present: Tim Cook era
Due to deteriorating health, Steve Jobs resigned as CEO on August 24, 2011, and Tim Cook was named as his successor. Cook's first keynote address launched iCloud, moving the digital hub from the Mac to the cloud. In 2012, the MacBook Pro was refreshed with a Retina display, and the iMac was slimmed and lost its SuperDrive.
During Cook's first few years as CEO, Apple fought media criticisms that it could no longer innovate without Jobs. In 2013, Apple introduced a new cylindrical Mac Pro, with marketing chief Phil Schiller exclaiming "Can't innovate anymore, my ass!". The new model had a miniaturized design with a glossy dark gray cylindrical body and internal components organized around a central cooling system. Tech reviewers praised the 2013 Mac Pro for its power and futuristic design; however, it was poorly received by professional users, who criticized its lack of upgradability and the removal of expansion slots.
The iMac was refreshed with a 5K Retina display in 2014, making it the highest-resolution all-in-one desktop computer. The MacBook was reintroduced in 2015, with a completely redesigned aluminum unibody chassis, a 12-inch Retina display, a fanless low-power Intel Core M processor, a much smaller logic board, a new Butterfly keyboard, a single USB-C port, and a solid-state Force Touch trackpad with pressure sensitivity. It was praised for its portability, but criticized for its lack of performance, the need to use adapters to use most USB peripherals, and a high starting price of . In 2015, Apple started a service program to address a widespread GPU defect in the 15-inch 2011 MacBook Pro, which could cause graphical artifacts or prevent the machine from functioning entirely.
Neglect of professional users
The Touch Bar MacBook Pro was released in October 2016. It was the thinnest MacBook Pro ever made, replaced all ports with four Thunderbolt 3 (USB-C) ports, gained a thinner "Butterfly" keyboard, and replaced function keys with the Touch Bar. The Touch Bar was criticized for making it harder to use the function keys by feel, as it offered no tactile feedback. Many users were also frustrated by the need to buy dongles, particularly professional users who relied on traditional USB-A devices, SD cards, and HDMI for video output. A few months after its release, users reported a problem with stuck keys and letters being skipped or repeated. iFixit attributed this to the ingress of dust or food crumbs under the keys, jamming them. Since the Butterfly keyboard was riveted into the laptop's case, it could only be serviced at an Apple Store or authorized service center. Apple settled a $50m class-action lawsuit over these keyboards in 2022. These same models were afflicted by "flexgate": when users closed and opened the machine, they would risk progressively damaging the cable responsible for the display backlight, which was too short. The $6 cable was soldered to the screen, requiring a $700 repair.
Senior Vice President of Industrial Design Jony Ive continued to guide product designs towards simplicity and minimalism. Critics argued that he had begun to prioritize form over function, and was excessively focused on product thinness. His role in the decisions to switch to fragile Butterfly keyboards, to make the Mac Pro non-expandable, and to remove USB-A, HDMI and the SD card slot from the MacBook Pro were criticized.
The long-standing keyboard issue on MacBook Pros, Apple's abandonment of the Aperture professional photography app, and the lack of Mac Pro upgrades led to declining sales and a widespread belief that Apple was no longer committed to professional users. After several years without any significant updates to the Mac Pro, Apple executives admitted in 2017 that the 2013 Mac Pro had not met expectations, and said that the company had designed themselves into a "thermal corner", preventing them from releasing a planned dual-GPU successor. Apple also unveiled their future product roadmap for professional products, including plans for an iMac Pro as a stopgap and an expandable Mac Pro to be released later. The iMac Pro was revealed at WWDC 2017, featuring updated Intel Xeon W processors and Radeon Pro Vega graphics.
In 2018, Apple released a redesigned MacBook Air with a Retina display, Butterfly keyboard, Force Touch trackpad, and Thunderbolt 3 USB-C ports. The Butterfly keyboard went through three revisions, incorporating silicone gaskets in the key mechanism to prevent keys from being jammed by dust or other particles. However, many users continued to experience reliability issues with these keyboards, leading Apple to launch a program to repair affected keyboards free of charge. Higher-end models of the 15-inch 2018 MacBook Pro faced another issue where the Core i9 processor reached unusually high temperatures, resulting in reduced CPU performance from thermal throttling. Apple issued a patch to address this issue via a macOS supplemental update, blaming a "missing digital key" in the thermal management firmware.
The 2019 16-inch MacBook Pro and 2020 MacBook Air replaced the unreliable Butterfly keyboard with a redesigned scissor-switch Magic Keyboard. On the MacBook Pros, the Touch Bar and Touch ID were made standard, and the Esc key was detached from the Touch Bar and returned to being a physical key. At WWDC 2019, Apple unveiled a new Mac Pro with a larger case design that allows for hardware expandability, and introduced a new expansion module system (MPX) for modules such as the Afterburner card for faster video encoding. Almost every part of the new Mac Pro is user-replaceable, with iFixit praising its high user-repairability. It received positive reviews, with reviewers praising its power, modularity, quiet cooling, and Apple's increased focus on professional workflows.
Apple silicon transition
In April 2018, Bloomberg reported Apple's plan to replace Intel chips with ARM processors similar to those in its phones, causing Intel's shares to drop by 9.2%. The Verge commented that such a decision made sense, as Intel was failing to make significant improvements to its processors and could not compete with ARM chips on battery life.
At WWDC 2020, Tim Cook announced that the Mac would be transitioning to Apple silicon chips, built upon an ARM architecture, over a two-year timeline. The Rosetta 2 translation layer was also introduced, enabling Apple silicon Macs to run Intel apps. On November 10, 2020, Apple announced their first system-on-a-chip designed for the Mac, the Apple M1, and a series of Macs that would ship with the M1: the MacBook Air, Mac Mini, and the 13-inch MacBook Pro. These new Macs received highly positive reviews, with reviewers highlighting significant improvements in battery life, performance, and heat management compared to previous generations.
The iMac Pro was discontinued on March 6, 2021. On April 20, 2021, a new 24-inch iMac was revealed, featuring the M1 chip, seven new colors, thinner white bezels, a higher-resolution 1080p webcam, and an enclosure made entirely from recycled aluminum.
On October 18, 2021, Apple announced new 14-inch and 16-inch MacBook Pros, featuring the more powerful M1 Pro and M1 Max chips, a bezel-less mini-LED 120 Hz ProMotion display, and the return of MagSafe and HDMI ports, and the SD card slot.
On March 8, 2022, the Mac Studio was unveiled, also featuring the M1 Max chip and the new M1 Ultra chip in a similar form factor to the Mac Mini. It drew highly positive reviews for its flexibility and wide range of available ports. Its performance was deemed "impressive", beating the highest-end Mac Pro with a 28-core Intel Xeon chip, while being significantly more power efficient and compact. It was introduced alongside the Studio Display, meant to replace the 27-inch iMac, which was discontinued on the same day.
Post-Apple silicon transition
At WWDC 2022, Apple announced an updated MacBook Air based on a new M2 chip. It incorporates several changes from the 14-inch MacBook Pro, such as a flat, slab-shaped design, full-sized function keys, MagSafe charging, and a Liquid Retina display, with rounded corners and a display cutout incorporating a 1080p webcam.
The Mac Studio with M2 Max and M2 Ultra chips and the Mac Pro with the M2 Ultra chip were unveiled at WWDC 2023, and the Intel-based Mac Pro was discontinued the same day, completing the Mac's transition to Apple silicon. The Mac Studio was received positively as a modest upgrade over the previous generation, although similarly priced PCs could be equipped with faster GPUs. The Apple silicon Mac Pro, however, was criticized for several regressions, including lower memory capacity and a complete lack of CPU or GPU expansion options. A 15-inch MacBook Air was also introduced, with the largest display of any consumer-level Apple laptop.
The MacBook Pro was updated on October 30, 2023, with updated M3 Pro and M3 Max chips using a 3 nm process node, as well as the standard M3 chip in a refreshed iMac and a new base model MacBook Pro. Reviewers lamented the base memory configuration of 8 GB on the standard M3 MacBook Pro. In March 2024, the MacBook Air was also updated to include the M3 chip. In October 2024, several Macs were announced with the M4 series of chips, including the iMac, a redesigned Mac Mini, and the MacBook Pro; all of which included 16 GB of memory as standard. The MacBook Air was also upgraded with 16 GB for the same price.
Current Mac models
Marketing
The original Macintosh was marketed at Super Bowl XVIII with the highly acclaimed "1984" ad, directed by Ridley Scott. The ad alluded to George Orwell's novel Nineteen Eighty-Four, and symbolized Apple's desire to "rescue" humanity from the conformity of computer industry giant IBM. The ad is now considered a "watershed event" and a "masterpiece." Before the Macintosh, high-tech marketing catered to industry insiders rather than consumers, so journalists covered technology like the "steel or automobiles" industries, with articles written for a highly technical audience. The Macintosh launch event pioneered event marketing techniques that have since become "widely emulated" in Silicon Valley, by creating a mystique about the product and giving an inside look into its creation. Apple took a new "multiple exclusives" approach regarding the press, giving "over one hundred interviews to journalists that lasted over six hours apiece", and introduced a new "Test Drive a Macintosh" campaign.
Apple's brand, which established a "heartfelt connection with consumers", is cited as one of the keys to the Mac's success. After Steve Jobs's return to the company, he launched the Think different ad campaign, positioning the Mac as the best computer for "creative people who believe that one person can change the world". The campaign featured black-and-white photographs of luminaries like Albert Einstein, Gandhi, and Martin Luther King Jr., with Jobs saying: "if they ever used a computer, it would have been a Mac". The ad campaign was critically acclaimed and won several awards, including a Primetime Emmy. In the 2000s, Apple continued to use successful marketing campaigns to promote the Mac line, including the Switch and Get a Mac campaigns.
Apple's focus on design and build quality has helped establish the Mac as a high-end, premium brand. The company's emphasis on creating iconic and visually appealing designs for its computers has given them a "human face" and made them stand out in a crowded market. Apple has long made product placements in high-profile movies and television shows to showcase Mac computers, like Mission: Impossible, Legally Blonde, and Sex and the City. Apple is known for not allowing producers to show villains using Apple products. Its own shows produced for the Apple TV+ streaming service feature prominent use of MacBooks.
The Mac is known for its highly loyal customer base. In 2022, the American Customer Satisfaction Index gave the Mac the highest customer satisfaction score of any personal computer, at 82 out of 100. In that year, Apple was the fourth largest vendor of personal computers, with a market share of 8.9%.
Hardware
Apple outsources the production of its hardware to Asian manufacturers like Foxconn and Pegatron. As a highly vertically integrated company developing its own operating system and chips, it has tight control over all aspects of its products and deep integration between hardware and software.
All Macs in production use ARM-based Apple silicon processors and have been praised for their performance and power efficiency. They can run Intel apps through the Rosetta 2 translation layer, and iOS and iPadOS apps distributed via the App Store. These Mac models come equipped with high-speed Thunderbolt 4 or USB 4 connectivity, with speeds up to 40 Gbit/s. Apple silicon Macs have custom integrated graphics rather than graphics cards. MacBooks are recharged with either USB-C or MagSafe connectors, depending on the model.
Apple sells accessories for the Mac, including the Studio Display and Pro Display XDR external monitors, the AirPods line of wireless headphones, and keyboards and mice such as the Magic Keyboard, Magic Trackpad, and Magic Mouse.
Software
Macs run the macOS operating system, which is the second most widely used desktop OS according to StatCounter. Macs can also run Windows, Linux, or other operating systems through virtualization, emulation, or multi-booting.
macOS is the successor of the classic Mac OS, which had nine major releases between 1984 and 1999. The last of these, Mac OS 9, was introduced in 1999 and succeeded by Mac OS X in 2001, which over the years was rebranded first to OS X and later to macOS.
macOS is a derivative of NeXTSTEP and FreeBSD. It uses the XNU kernel, and the core of macOS has been open-sourced as the Darwin operating system. macOS features the Aqua user interface, the Cocoa set of frameworks, and the Objective-C and Swift programming languages. Macs are deeply integrated with other Apple devices, including the iPhone and iPad, through Continuity features like Handoff, Sidecar, Universal Control, and Universal Clipboard.
The first version of Mac OS X, version 10.0, was released in March 2001. Subsequent releases introduced major changes and features to the operating system. 10.4 Tiger added Spotlight search; 10.6 Snow Leopard brought refinements, stability, and full 64-bit support; 10.7 Lion introduced many iPad-inspired features; 10.10 Yosemite introduced a complete user interface revamp, replacing skeuomorphic designs with iOS 7-esque flat designs; 10.12 Sierra added the Siri voice assistant and Apple File System (APFS) support; 10.14 Mojave added a dark user interface mode; 10.15 Catalina dropped support for 32-bit apps; 11 Big Sur introduced an iOS-inspired redesign of the user interface; 12 Monterey added the Shortcuts app, Low Power Mode, and AirPlay to Mac; and 13 Ventura added Stage Manager, Continuity Camera, and passkeys.
The Mac has a variety of apps available, including cross-platform apps like Google Chrome, Microsoft Office, Adobe Creative Cloud, Mathematica, Visual Studio Code, Ableton Live, and Cinema 4D. Apple has also developed several apps for the Mac, including Final Cut Pro, Logic Pro, iWork, GarageBand, and iMovie. A large number of open-source applications, such as LibreOffice, VLC, and GIMP, run natively on macOS, as do command-line programs, which can be installed through MacPorts and Homebrew. Many applications for Linux or BSD also run on macOS, often using X11. Apple's official integrated development environment (IDE) is Xcode, allowing developers to create apps for the Mac and other Apple platforms.
The latest release of macOS is macOS 15 Sequoia, released on September 16, 2024.
Timeline
| Technology | Computer hardware | null |
19008673 | https://en.wikipedia.org/wiki/Conic%20section | Conic section | A conic section, conic or a quadratic curve is a curve obtained from a cone's surface intersecting a plane. The three types of conic section are the hyperbola, the parabola, and the ellipse; the circle is a special case of the ellipse, though it was sometimes considered a fourth type. The ancient Greek mathematicians studied conic sections, culminating around 200 BC with Apollonius of Perga's systematic work on their properties.
The conic sections in the Euclidean plane have various distinguishing properties, many of which can be used as alternative definitions. One such property defines a non-circular conic to be the set of those points whose distances to some particular point, called a focus, and some particular line, called a directrix, are in a fixed ratio, called the eccentricity. The type of conic is determined by the value of the eccentricity. In analytic geometry, a conic may be defined as a plane algebraic curve of degree 2; that is, as the set of points whose coordinates satisfy a quadratic equation in two variables, which can be written in the form
$$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0.$$
The geometric properties of the conic can be deduced from its equation.
In the Euclidean plane, the three types of conic sections appear quite different, but share many properties. By extending the Euclidean plane to include a line at infinity, obtaining a projective plane, the apparent difference vanishes: the branches of a hyperbola meet in two points at infinity, making it a single closed curve; and the two ends of a parabola meet to make it a closed curve tangent to the line at infinity. Further extension, by expanding the real coordinates to admit complex coordinates, provides the means to see this unification algebraically.
Euclidean geometry
The conic sections have been studied for thousands of years and have provided a rich source of interesting and beautiful results in Euclidean geometry.
Definition
A conic is the curve obtained as the intersection of a plane, called the cutting plane, with the surface of a double cone (a cone with two nappes). It is usually assumed that the cone is a right circular cone for the purpose of easy description, but this is not required; any double cone with some circular cross-section will suffice. Planes that pass through the vertex of the cone will intersect the cone in a point, a line or a pair of intersecting lines. These are called degenerate conics and some authors do not consider them to be conics at all. Unless otherwise stated, "conic" in this article will refer to a non-degenerate conic.
There are three types of conics: the ellipse, parabola, and hyperbola. The circle is a special kind of ellipse, although historically Apollonius considered it a fourth type. Ellipses arise when the intersection of the cone and plane is a closed curve. The circle is obtained when the cutting plane is parallel to the plane of the generating circle of the cone; for a right cone, this means the cutting plane is perpendicular to the axis. If the cutting plane is parallel to exactly one generating line of the cone, then the conic is unbounded and is called a parabola. In the remaining case, the figure is a hyperbola: the plane intersects both halves of the cone, producing two separate unbounded curves.
Compare also spheric section (intersection of a plane with a sphere, producing a circle or point), and spherical conic (intersection of an elliptic cone with a concentric sphere).
Eccentricity, focus and directrix
Alternatively, one can define a conic section purely in terms of plane geometry: it is the locus of all points $P$ whose distance to a fixed point $F$ (called the focus) is a constant multiple $e$ (called the eccentricity) of the distance from $P$ to a fixed line $L$ (called the directrix).
For $0 < e < 1$ we obtain an ellipse, for $e = 1$ a parabola, and for $e > 1$ a hyperbola.
A circle is a limiting case and is not defined by a focus and directrix in the Euclidean plane. The eccentricity of a circle is defined to be zero and its focus is the center of the circle, but its directrix can only be taken as the line at infinity in the projective plane.
The eccentricity of an ellipse can be seen as a measure of how far the ellipse deviates from being circular.
If the angle between the surface of the cone and its axis is $\beta$ and the angle between the cutting plane and the axis is $\alpha$, the eccentricity is $e = \cos\alpha / \cos\beta$. For example, a cutting plane perpendicular to the axis ($\alpha = 90°$) gives $e = 0$, a circle, while a plane parallel to a generating line ($\alpha = \beta$) gives $e = 1$, a parabola.
A proof that the above curves defined by the focus-directrix property are the same as those obtained by planes intersecting a cone is facilitated by the use of Dandelin spheres.
Alternatively, an ellipse can be defined in terms of two focus points, as the locus of points for which the sum of the distances to the two foci is $2a$; while a hyperbola is the locus for which the difference of distances is $2a$. (Here $a$ is the semi-major axis defined below.) A parabola may also be defined in terms of its focus and latus rectum line (parallel to the directrix and passing through the focus): it is the locus of points whose distance to the focus plus or minus the distance to the line is equal to $2a$; plus if the point is between the directrix and the latus rectum, minus otherwise.
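As a quick numerical illustration of the two-focus definition (our own sketch; the values $a = 5$, $b = 3$ are arbitrary), the following Python snippet checks that points on an ellipse in standard position keep a constant sum of focal distances equal to $2a$:

```python
# Numerical check of the two-focus definition of the ellipse:
# for x^2/a^2 + y^2/b^2 = 1 with foci (+-c, 0), c^2 = a^2 - b^2,
# the sum of distances from any point on the curve to the foci is 2a.
import math

a, b = 5.0, 3.0                              # arbitrary semi-axes (a > b)
c = math.sqrt(a * a - b * b)                 # linear eccentricity, here 4.0

for t in (0.0, 0.7, 1.9, 3.1):               # arbitrary parameter values
    x, y = a * math.cos(t), b * math.sin(t)  # a point on the ellipse
    d1 = math.hypot(x - c, y)                # distance to focus (c, 0)
    d2 = math.hypot(x + c, y)                # distance to focus (-c, 0)
    assert abs((d1 + d2) - 2 * a) < 1e-9     # constant sum: 2a = 10
```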
Conic parameters
In addition to the eccentricity (), foci, and directrix, various geometric features and lengths are associated with a conic section.
The principal axis is the line joining the foci of an ellipse or hyperbola, and its midpoint is the curve's center. A parabola has no center.
The linear eccentricity ($c$) is the distance between the center and a focus.
The latus rectum is the chord parallel to the directrix and passing through a focus; its half-length is the semi-latus rectum ($\ell$).
The focal parameter ($p$) is the distance from a focus to the corresponding directrix.
The major axis is the chord between the two vertices: the longest chord of an ellipse, the shortest chord between the branches of a hyperbola. Its half-length is the semi-major axis ($a$). When an ellipse or hyperbola are in standard position as in the equations below, with foci on the $x$-axis and center at the origin, the vertices of the conic have coordinates $(-a, 0)$ and $(a, 0)$, with $a$ non-negative.
The minor axis is the shortest diameter of an ellipse, and its half-length is the semi-minor axis ($b$), the same value $b$ as in the standard equation below. By analogy, for a hyperbola the parameter $b$ in the standard equation is also called the semi-minor axis.
The following relations hold:
$$c = ae, \qquad \ell = pe, \qquad pc = b^2.$$
For conics in standard position, these parameters have the following values, taking $a \ge b$.

| Conic section | Equation | Eccentricity ($e$) | Linear eccentricity ($c$) | Semi-latus rectum ($\ell$) | Focal parameter ($p$) |
|---|---|---|---|---|---|
| Circle | $x^2 + y^2 = a^2$ | $0$ | $0$ | $a$ | $\infty$ |
| Ellipse | $x^2/a^2 + y^2/b^2 = 1$ | $\sqrt{1 - b^2/a^2}$ | $\sqrt{a^2 - b^2}$ | $b^2/a$ | $b^2/\sqrt{a^2 - b^2}$ |
| Parabola | $y^2 = 4ax$ | $1$ | (none) | $2a$ | $2a$ |
| Hyperbola | $x^2/a^2 - y^2/b^2 = 1$ | $\sqrt{1 + b^2/a^2}$ | $\sqrt{a^2 + b^2}$ | $b^2/a$ | $b^2/\sqrt{a^2 + b^2}$ |
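The relations above can be verified numerically. The sketch below (our own, with arbitrary values $a = 5$, $b = 3$) checks $c = ae$, $\ell = pe$ and $pc = b^2$ for an ellipse in standard position:

```python
# Check the relations c = a*e, l = p*e, p*c = b^2 for an ellipse.
import math

a, b = 5.0, 3.0                       # arbitrary semi-axes (a >= b)
e = math.sqrt(1 - (b * b) / (a * a))  # eccentricity from the table
c = math.sqrt(a * a - b * b)          # linear eccentricity from the table
l = b * b / a                         # semi-latus rectum from the table
p = a / e - a * e                     # focus-directrix distance: directrix x = a/e, focus x = a*e

assert abs(c - a * e) < 1e-12         # c = a*e
assert abs(l - p * e) < 1e-12         # l = p*e
assert abs(p * c - b * b) < 1e-12     # p*c = b^2
```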
Standard forms in Cartesian coordinates
After introducing Cartesian coordinates, the focus-directrix property can be used to produce the equations satisfied by the points of the conic section. By means of a change of coordinates (rotation and translation of axes) these equations can be put into standard forms. For ellipses and hyperbolas a standard form has the $x$-axis as principal axis and the origin $(0, 0)$ as center. The vertices are $(\pm a, 0)$ and the foci $(\pm c, 0)$. Define $b$ by the equations $c^2 = a^2 - b^2$ for an ellipse and $c^2 = a^2 + b^2$ for a hyperbola. For a circle, $c = 0$, so $a^2 = b^2$, with radius $r = a = b$. For the parabola, the standard form has the focus on the $x$-axis at the point $(a, 0)$ and the directrix the line with equation $x = -a$. In standard form the parabola will always pass through the origin.
For a rectangular or equilateral hyperbola, one whose asymptotes are perpendicular, there is an alternative standard form in which the asymptotes are the coordinate axes and the line $y = x$ is the principal axis. The foci then have coordinates $(c, c)$ and $(-c, -c)$.
Circle: $x^2 + y^2 = a^2$
Ellipse: $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} = 1$
Parabola: $y^2 = 4ax$ with $a > 0$
Hyperbola: $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 1$
Rectangular hyperbola: $xy = \dfrac{c^2}{2}$
The first four of these forms are symmetric about both the $x$-axis and $y$-axis (for the circle, ellipse and hyperbola), or about the $x$-axis only (for the parabola). The rectangular hyperbola, however, is instead symmetric about the lines $y = x$ and $y = -x$.
These standard forms can be written parametrically as follows; a numerical check of the parametrizations appears after the list.
Circle: $(a\cos\theta,\; a\sin\theta)$
Ellipse: $(a\cos\theta,\; b\sin\theta)$
Parabola: $(at^2,\; 2at)$
Hyperbola: $(a\sec\theta,\; b\tan\theta)$ or $(\pm a\cosh u,\; b\sinh u)$
Rectangular hyperbola: $\left(dt,\; \dfrac{d}{t}\right)$, where $d = \dfrac{c}{\sqrt{2}}$
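The following sketch (ours; the parameter values are arbitrary) confirms that each parametrization satisfies its standard-form equation:

```python
# Verify that the parametric forms satisfy the corresponding implicit equations.
import math

a, b, t = 5.0, 3.0, 0.8

x, y = a * math.cos(t), b * math.sin(t)    # ellipse parametrization
assert abs(x * x / a**2 + y * y / b**2 - 1) < 1e-12

x, y = a * t * t, 2 * a * t                # parabola parametrization
assert abs(y * y - 4 * a * x) < 1e-12      # satisfies y^2 = 4ax

x, y = a / math.cos(t), b * math.tan(t)    # hyperbola, sec/tan form
assert abs(x * x / a**2 - y * y / b**2 - 1) < 1e-12

x, y = a * math.cosh(t), b * math.sinh(t)  # hyperbola, cosh/sinh form
assert abs(x * x / a**2 - y * y / b**2 - 1) < 1e-12
```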
General Cartesian form
In the Cartesian coordinate system, the graph of a quadratic equation in two variables is always a conic section (though it may be degenerate), and all conic sections arise in this way. The most general equation is of the form
$$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0,$$
with all coefficients real numbers and $A, B, C$ not all zero.
Matrix notation
The above equation can be written in matrix notation as
$$\begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} A & B/2 \\ B/2 & C \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} D & E \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + F = 0.$$
The general equation can also be written as
$$\begin{pmatrix} x & y & 1 \end{pmatrix} \begin{pmatrix} A & B/2 & D/2 \\ B/2 & C & E/2 \\ D/2 & E/2 & F \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = 0.$$
This form is a specialization of the homogeneous form used in the more general setting of projective geometry (see below).
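A short sketch (ours; the test conic $x^2 + 2y^2 = 2$ is an arbitrary choice) of building both matrices and evaluating the homogeneous form at a point:

```python
# Build the 2x2 and 3x3 matrices for Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
# and evaluate the homogeneous form (x y 1) M (x y 1)^T at a point.
import numpy as np

A, B, C, D, E, F = 1.0, 0.0, 2.0, 0.0, 0.0, -2.0  # the ellipse x^2 + 2y^2 = 2

M22 = np.array([[A, B / 2],
                [B / 2, C]])                      # quadratic part only
M33 = np.array([[A, B / 2, D / 2],
                [B / 2, C, E / 2],
                [D / 2, E / 2, F]])               # full homogeneous form

v = np.array([1.0, np.sqrt(0.5), 1.0])            # (x, y, 1) for a point on the conic
print(float(v @ M33 @ v))                         # ~0.0: the equation is satisfied
```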
Discriminant
The conic sections described by this equation can be classified in terms of the value $B^2 - 4AC$, called the discriminant of the equation.
Thus, the discriminant is $-4\Delta$, where $\Delta$ is the matrix determinant
$$\Delta = \begin{vmatrix} A & B/2 \\ B/2 & C \end{vmatrix} = AC - \frac{B^2}{4}.$$
If the conic is non-degenerate, then (see the classifier sketch after this list):
if $B^2 - 4AC < 0$, the equation represents an ellipse;
if moreover $A = C$ and $B = 0$, the equation represents a circle, which is a special case of an ellipse;
if $B^2 - 4AC = 0$, the equation represents a parabola;
if $B^2 - 4AC > 0$, the equation represents a hyperbola;
if additionally $A + C = 0$, the equation represents a rectangular hyperbola.
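A minimal classifier following these rules (our own sketch; the tolerance and test conics are arbitrary choices) could look like this:

```python
# Classify a conic from its coefficients using the discriminant B^2 - 4AC,
# with the 3x3 determinant used to screen out degenerate cases.
import numpy as np

def classify(A, B, C, D, E, F, tol=1e-12):
    disc = B * B - 4 * A * C
    det3 = np.linalg.det(np.array([[A, B / 2, D / 2],
                                   [B / 2, C, E / 2],
                                   [D / 2, E / 2, F]]))
    if abs(det3) < tol:
        return "degenerate"
    if disc < -tol:
        return "circle" if abs(A - C) < tol and abs(B) < tol else "ellipse"
    if disc > tol:
        return "rectangular hyperbola" if abs(A + C) < tol else "hyperbola"
    return "parabola"

print(classify(1, 0, 1, 0, 0, -1))    # x^2 + y^2 = 1  -> circle
print(classify(1, 0, 0, 0, -4, 0))    # x^2 = 4y       -> parabola
print(classify(1, 0, -1, 0, 0, -1))   # x^2 - y^2 = 1  -> rectangular hyperbola
```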
In the notation used here, $A$ and $B$ are polynomial coefficients, in contrast to some sources that denote the semimajor and semiminor axes as $A$ and $B$.
Invariants
The discriminant $B^2 - 4AC$ of the conic section's quadratic equation (or equivalently the determinant $AC - B^2/4$ of the 2 × 2 matrix) and the quantity $A + C$ (the trace of the 2 × 2 matrix) are invariant under arbitrary rotations and translations of the coordinate axes, as is the determinant of the 3 × 3 matrix above. The constant term $F$ and the sum $D^2 + E^2$ are invariant under rotation only.
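These invariances are easy to check numerically. In the sketch below (ours; the coefficients and angle are arbitrary), the coefficients are transformed under a rotation by substituting $x = x'\cos\theta - y'\sin\theta$, $y = x'\sin\theta + y'\cos\theta$:

```python
# Check that B^2 - 4AC, A + C, and D^2 + E^2 are unchanged by a rotation.
import math

def rotate_coeffs(A, B, C, D, E, F, theta):
    # Coefficients after substituting x = x'cos(t) - y'sin(t), y = x'sin(t) + y'cos(t).
    ct, st = math.cos(theta), math.sin(theta)
    return (A * ct * ct + B * ct * st + C * st * st,          # new A
            2 * (C - A) * ct * st + B * (ct * ct - st * st),  # new B
            A * st * st - B * ct * st + C * ct * ct,          # new C
            D * ct + E * st,                                  # new D
            -D * st + E * ct,                                 # new E
            F)                                                # F is unchanged

A, B, C, D, E, F = 2.0, 1.0, 3.0, -1.0, 4.0, -5.0             # arbitrary conic
A2, B2, C2, D2, E2, F2 = rotate_coeffs(A, B, C, D, E, F, 0.6)

assert abs((B2**2 - 4 * A2 * C2) - (B**2 - 4 * A * C)) < 1e-9  # discriminant
assert abs((A2 + C2) - (A + C)) < 1e-9                         # trace
assert abs((D2**2 + E2**2) - (D**2 + E**2)) < 1e-9             # rotation-only invariant
```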
Eccentricity in terms of coefficients
When the conic section is written algebraically as
$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0,$
the eccentricity can be written as a function of the coefficients of the quadratic equation. If $B^2 - 4AC = 0$ the conic is a parabola and its eccentricity equals 1 (provided it is non-degenerate). Otherwise, assuming the equation represents either a non-degenerate hyperbola or ellipse, the eccentricity is given by
$e = \sqrt{\dfrac{2\sqrt{(A - C)^2 + B^2}}{\eta(A + C) + \sqrt{(A - C)^2 + B^2}}},$
where $\eta = 1$ if the determinant of the 3 × 3 matrix above is negative and $\eta = -1$ if that determinant is positive.
It can also be shown that the eccentricity is a positive solution of the equation
$\Delta e^4 + \left[(A + C)^2 - 4\Delta\right] e^2 - \left[(A + C)^2 - 4\Delta\right] = 0,$
where again $\Delta = AC - B^2/4$. This has precisely one positive solution—the eccentricity—in the case of a parabola or ellipse, while in the case of a hyperbola it has two positive solutions, one of which is the eccentricity.
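As a worked illustration of the coefficient formula, the following sketch evaluates the eccentricity for a conic in general position; the helper name and tolerance are illustrative assumptions.

```python
# A sketch of the eccentricity formula quoted above for
# Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0.
import math
import numpy as np

def eccentricity(A, B, C, D, E, F, tol=1e-12):
    if abs(B**2 - 4*A*C) < tol:
        return 1.0                              # parabola
    det3 = np.linalg.det(np.array([[A, B/2, D/2],
                                   [B/2, C, E/2],
                                   [D/2, E/2, F]]))
    eta = 1.0 if det3 < 0 else -1.0             # sign convention from the text
    root = math.sqrt((A - C)**2 + B**2)
    return math.sqrt(2*root / (eta*(A + C) + root))

# Ellipse x^2/4 + y^2 = 1 has a=2, b=1, so e = sqrt(1 - 1/4) ~ 0.866:
print(eccentricity(0.25, 0, 1, 0, 0, -1))
```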
Conversion to canonical form
In the case of an ellipse or hyperbola, the equation
$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$
can be converted to canonical form in transformed variables $x', y'$ as
$\dfrac{x'^2}{-S/(\lambda_1 \Delta)} + \dfrac{y'^2}{-S/(\lambda_2 \Delta)} = 1,$
or equivalently
$\lambda_1 x'^2 + \lambda_2 y'^2 = -\dfrac{S}{\Delta},$
where $\lambda_1$ and $\lambda_2$ are the eigenvalues of the matrix $\begin{pmatrix} A & B/2 \\ B/2 & C \end{pmatrix}$ — that is, the solutions of the equation
$\lambda^2 - (A + C)\lambda + \left(AC - \dfrac{B^2}{4}\right) = 0$
— and $S$ is the determinant of the 3 × 3 matrix above, and $\Delta = AC - B^2/4$ is again the determinant of the 2 × 2 matrix. In the case of an ellipse the squares of the two semi-axes are given by the denominators in the canonical form.
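A minimal sketch of this recipe, assuming NumPy and an example conic chosen only for illustration:

```python
# Reduce an ellipse/hyperbola to canonical form lambda1*x'^2 + lambda2*y'^2
# = -S/Delta using the eigenvalues of the 2x2 matrix, per the recipe above.
import numpy as np

A, B, C, D, E, F = 5, -4, 8, 0, 0, -36    # example: 5x^2 - 4xy + 8y^2 = 36
M2 = np.array([[A, B/2], [B/2, C]])
M3 = np.array([[A, B/2, D/2], [B/2, C, E/2], [D/2, E/2, F]])
lam1, lam2 = np.linalg.eigvalsh(M2)       # eigenvalues lambda1 <= lambda2
S, Delta = np.linalg.det(M3), np.linalg.det(M2)
rhs = -S / Delta
# Squares of the semi-axes (ellipse case) are the canonical denominators:
print(sorted(rhs / lam for lam in (lam1, lam2)))   # -> [4.0, 9.0], i.e. b=2, a=3
```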
Polar coordinates
In polar coordinates, a conic section with one focus at the origin and, if any, the other at a negative value (for an ellipse) or a positive value (for a hyperbola) on the $x$-axis, is given by the equation
$r = \dfrac{\ell}{1 + e\cos\theta},$
where $e$ is the eccentricity and $\ell$ is the semi-latus rectum.
As above, for $e = 0$ the graph is a circle, for $0 < e < 1$ the graph is an ellipse, for $e = 1$ a parabola, and for $e > 1$ a hyperbola.
The polar form of the equation of a conic is often used in dynamics; for instance, determining the orbits of objects revolving about the Sun.
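For instance, here is a minimal sketch of the polar form applied to an orbit; the Earth-like eccentricity and semi-latus rectum below are rounded illustrative values, not precise ephemeris data.

```python
# Radial distance from a focus using r(theta) = l / (1 + e*cos(theta)),
# the polar form used for Keplerian orbits.
import math

def radius(theta, e, l):
    """Focal distance of a conic with eccentricity e and semi-latus rectum l."""
    return l / (1 + e * math.cos(theta))

e, l = 0.0167, 0.99972          # roughly Earth's orbit, in astronomical units
print(radius(0.0, e, l))        # perihelion, ~0.983 au
print(radius(math.pi, e, l))    # aphelion,   ~1.017 au
```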
Properties
Just as two (distinct) points determine a line, five points determine a conic. Formally, given any five points in the plane in general linear position, meaning no three collinear, there is a unique conic passing through them, which will be non-degenerate; this is true in both the Euclidean plane and its extension, the real projective plane. Indeed, given any five points there is a conic passing through them, but if three of the points are collinear the conic will be degenerate (reducible, because it contains a line), and may not be unique; see further discussion.
Four points in the plane in general linear position determine a unique conic passing through the first three points and having the fourth point as its center. Thus knowing the center is equivalent to knowing two points on the conic for the purpose of determining the curve.
Furthermore, a conic is determined by any combination of k points in general position that it passes through and 5 – k lines that are tangent to it, for 0≤k≤5.
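The five-point determination can be carried out numerically by treating the six coefficients as the unknowns of a homogeneous linear system; the helper name and the SVD-based null-space computation in this sketch are our own choices.

```python
# Coefficients (A,B,C,D,E,F) of the conic through five points in general
# position, found as the null space of a 5x6 linear system.
import numpy as np

def conic_through(points):
    rows = [[x*x, x*y, y*y, x, y, 1] for x, y in points]
    # The right singular vector for the smallest singular value spans the
    # null space of the 5x6 coefficient matrix.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1]               # (A, B, C, D, E, F), up to scale

pts = [(1, 0), (-1, 0), (0, 2), (0, -2), (0.8, 1.2)]
print(conic_through(pts))       # proportional to (4, 0, 1, 0, 0, -4): x^2 + y^2/4 = 1
```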
Any point in the plane is on either zero, one or two tangent lines of a conic. A point on just one tangent line is on the conic. A point on no tangent line is said to be an interior point (or inner point) of the conic, while a point on two tangent lines is an exterior point (or outer point).
All the conic sections share a reflection property that can be stated as: All mirrors in the shape of a non-degenerate conic section reflect light coming from or going toward one focus toward or away from the other focus. In the case of the parabola, the second focus needs to be thought of as infinitely far away, so that the light rays going toward or coming from the second focus are parallel.
Pascal's theorem concerns the collinearity of three points that are constructed from a set of six points on any non-degenerate conic. The theorem also holds for degenerate conics consisting of two lines, but in that case it is known as Pappus's theorem.
Non-degenerate conic sections are always "smooth". This is important for many applications, such as aerodynamics, where a smooth surface is required to ensure laminar flow and to prevent turbulence.
History
Menaechmus and early works
It is believed that the first definition of a conic section was given by Menaechmus (died 320 BC) as part of his solution of the Delian problem (Duplicating the cube). His work did not survive, not even the names he used for these curves, and is only known through secondary accounts. The definition used at that time differs from the one commonly used today. Cones were constructed by rotating a right triangle about one of its legs so the hypotenuse generates the surface of the cone (such a line is called a generatrix). Three types of cones were determined by their vertex angles (measured by twice the angle formed by the hypotenuse and the leg being rotated about in the right triangle). The conic section was then determined by intersecting one of these cones with a plane drawn perpendicular to a generatrix. The type of the conic is determined by the type of cone, that is, by the angle formed at the vertex of the cone: If the angle is acute then the conic is an ellipse; if the angle is right then the conic is a parabola; and if the angle is obtuse then the conic is a hyperbola (but only one branch of the curve).
Euclid (fl. 300 BC) is said to have written four books on conics but these were lost as well. Archimedes (died c. 212 BC) is known to have studied conics, having determined the area bounded by a parabola and a chord in Quadrature of the Parabola. His main interest was in terms of measuring areas and volumes of figures related to the conics and part of this work survives in his book on the solids of revolution of conics, On Conoids and Spheroids.
Apollonius of Perga
The greatest progress in the study of conics by the ancient Greeks is due to Apollonius of Perga (died c. 190 BC), whose eight-volume Conic Sections or Conics summarized and greatly extended existing knowledge. Apollonius's study of the properties of these curves made it possible to show that any plane cutting a fixed double cone (two-napped), regardless of its angle, will produce a conic according to the earlier definition, leading to the definition commonly used today. Circles, not constructible by the earlier method, are also obtainable in this way. This may account for why Apollonius considered circles a fourth type of conic section, a distinction that is no longer made. Apollonius used the names 'ellipse', 'parabola' and 'hyperbola' for these curves, borrowing the terminology from earlier Pythagorean work on areas.
Pappus of Alexandria (died c. 350 AD) is credited with expounding on the importance of the concept of a conic's focus, and detailing the related concept of a directrix, including the case of the parabola (which is lacking in Apollonius's known works).
Islamic world
Apollonius's work was translated into Arabic, and much of his work only survives through the Arabic version. Islamic mathematicians found applications of the theory, most notably the Persian mathematician and poet Omar Khayyám, who found a geometrical method of solving cubic equations using conic sections.
A century before the more famous work of Khayyam, Abu al-Jud used conics to solve quartic and cubic equations, although his solution did not deal with all the cases.
An instrument for drawing conic sections was first described in 1000 AD by Al-Kuhi.
Europe
Johannes Kepler extended the theory of conics through the "principle of continuity", a precursor to the concept of limits. Kepler first used the term 'foci' in 1604.
Girard Desargues and Blaise Pascal developed a theory of conics using an early form of projective geometry and this helped to provide impetus for the study of this new field. In particular, Pascal discovered a theorem known as the hexagrammum mysticum from which many other properties of conics can be deduced.
René Descartes and Pierre Fermat both applied their newly discovered analytic geometry to the study of conics. This had the effect of reducing the geometrical problems of conics to problems in algebra. However, it was John Wallis in his 1655 treatise who first defined the conic sections as instances of equations of second degree. Written earlier, but published later, Jan de Witt's Elementa Curvarum Linearum starts with Kepler's kinematic construction of the conics and then develops the algebraic equations. This work, which uses Fermat's methodology and Descartes' notation, has been described as the first textbook on the subject. De Witt invented the term 'directrix'.
Applications
Conic sections are important in astronomy: the orbits of two massive objects that interact according to Newton's law of universal gravitation are conic sections if their common center of mass is considered to be at rest. If they are bound together, they will both trace out ellipses; if they are moving apart, they will both follow parabolas or hyperbolas. See two-body problem.
The reflective properties of the conic sections are used in the design of searchlights, radio telescopes and some optical telescopes. A searchlight uses a parabolic mirror as the reflector, with a bulb at the focus; and a similar construction is used for a parabolic microphone. The 4.2 meter Herschel optical telescope on La Palma, in the Canary Islands, uses a primary parabolic mirror to reflect light towards a secondary hyperbolic mirror, which reflects it again to a focus behind the first mirror.
In the real projective plane
The conic sections have some very similar properties in the Euclidean plane and the reasons for this become clearer when the conics are viewed from the perspective of a larger geometry. The Euclidean plane may be embedded in the real projective plane and the conics may be considered as objects in this projective geometry. One way to do this is to introduce homogeneous coordinates and define a conic to be the set of points whose coordinates satisfy an irreducible quadratic equation in three variables (or equivalently, the zeros of an irreducible quadratic form). More technically, the set of points that are zeros of a quadratic form (in any number of variables) is called a quadric, and the irreducible quadrics in a two dimensional projective space (that is, having three variables) are traditionally called conics.
The Euclidean plane is embedded in the real projective plane by adjoining a line at infinity (and its corresponding points at infinity) so that all the lines of a parallel class meet on this line. On the other hand, starting with the real projective plane, a Euclidean plane is obtained by distinguishing some line as the line at infinity and removing it and all its points.
Intersection at infinity
In a projective space over any division ring, but in particular over either the real or complex numbers, all non-degenerate conics are equivalent, and thus in projective geometry one speaks of "a conic" without specifying a type. That is, there is a projective transformation that will map any non-degenerate conic to any other non-degenerate conic.
The three types of conic sections will reappear in the affine plane obtained by choosing a line of the projective space to be the line at infinity. The three types are then determined by how this line at infinity intersects the conic in the projective space. In the corresponding affine space, one obtains an ellipse if the conic does not intersect the line at infinity, a parabola if the conic intersects the line at infinity in one double point corresponding to the axis, and a hyperbola if the conic intersects the line at infinity in two points corresponding to the asymptotes.
Homogeneous coordinates
In homogeneous coordinates a conic section can be represented as:
$Ax^2 + Bxy + Cy^2 + Dxz + Eyz + Fz^2 = 0.$
Or in matrix notation
$\begin{pmatrix} x & y & z \end{pmatrix} \begin{pmatrix} A & B/2 & D/2 \\ B/2 & C & E/2 \\ D/2 & E/2 & F \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0.$
The 3 × 3 matrix above is called the matrix of the conic section.
Some authors prefer to write the general homogeneous equation as
$ax^2 + 2bxy + cy^2 + 2dxz + 2eyz + fz^2 = 0$
(or some variation of this) so that the matrix of the conic section has the simpler form,
$M = \begin{pmatrix} a & b & d \\ b & c & e \\ d & e & f \end{pmatrix},$
but this notation is not used in this article.
If the determinant of the matrix of the conic section is zero, the conic section is degenerate.
As multiplying all six coefficients by the same non-zero scalar yields an equation with the same set of zeros, one can consider conics, represented by $(A, B, C, D, E, F)$, as points in the five-dimensional projective space $\mathbf{P}^5.$
Projective definition of a circle
Metrical concepts of Euclidean geometry (concepts concerned with measuring lengths and angles) can not be immediately extended to the real projective plane. They must be redefined (and generalized) in this new geometry. This can be done for arbitrary projective planes, but to obtain the real projective plane as the extended Euclidean plane, some specific choices have to be made.
Fix an arbitrary line in a projective plane that shall be referred to as the absolute line. Select two distinct points on the absolute line and refer to them as absolute points. Several metrical concepts can be defined with reference to these choices. For instance, given a line containing the points and , the midpoint of line segment is defined as the point which is the projective harmonic conjugate of the point of intersection of and the absolute line, with respect to and .
A conic in a projective plane that contains the two absolute points is called a circle. Since five points determine a conic, a circle (which may be degenerate) is determined by three points. To obtain the extended Euclidean plane, the absolute line is chosen to be the line at infinity of the Euclidean plane and the absolute points are two special points on that line called the circular points at infinity. Lines containing two points with real coordinates do not pass through the circular points at infinity, so in the Euclidean plane a circle, under this definition, is determined by three points that are not collinear.
It has been mentioned that circles in the Euclidean plane can not be defined by the focus-directrix property. However, if one were to consider the line at infinity as the directrix, then by taking the eccentricity to be $e = 0$ a circle will have the focus-directrix property, but it is still not defined by that property. One must be careful in this situation to correctly use the definition of eccentricity as the ratio of the distance of a point on the circle to the focus (length of a radius) to the distance of that point to the directrix (this distance is infinite), which gives the limiting value of zero.
Steiner's projective conic definition
A synthetic (coordinate-free) approach to defining the conic sections in a projective plane was given by Jakob Steiner in 1867.
Given two pencils $B(U)$ and $B(V)$ of lines at two points $U$ and $V$ (all lines containing $U$ and $V$ respectively) and a projective but not perspective mapping $\pi$ of $B(U)$ onto $B(V)$, the intersection points of corresponding lines form a non-degenerate projective conic section.
A perspective mapping $\pi$ of a pencil $B(U)$ onto a pencil $B(V)$ is a bijection (1-1 correspondence) such that corresponding lines intersect on a fixed line $a$, which is called the axis of the perspectivity $\pi$.
A projective mapping is a finite sequence of perspective mappings.
As a projective mapping in a projective plane over a field (pappian plane) is uniquely determined by prescribing the images of three lines, for the Steiner generation of a conic section, besides two points only the images of 3 lines have to be given. These 5 items (2 points, 3 lines) uniquely determine the conic section.
Line conics
By the Principle of Duality in a projective plane, the dual of each point is a line, and the dual of a locus of points (a set of points satisfying some condition) is called an envelope of lines. Using Steiner's definition of a conic (this locus of points will now be referred to as a point conic) as the meet of corresponding rays of two related pencils, it is easy to dualize and obtain the corresponding envelope consisting of the joins of corresponding points of two related ranges (points on a line) on different bases (the lines the points are on). Such an envelope is called a line conic (or dual conic).
In the real projective plane, a point conic has the property that every line meets it in two points (which may coincide, or may be complex) and any set of points with this property is a point conic. It follows dually that a line conic has two of its lines through every point and any envelope of lines with this property is a line conic. At every point of a point conic there is a unique tangent line, and dually, on every line of a line conic there is a unique point called a point of contact. An important theorem states that the tangent lines of a point conic form a line conic, and dually, the points of contact of a line conic form a point conic.
Von Staudt's definition
Karl Georg Christian von Staudt defined a conic as the point set given by all the absolute points of a polarity that has absolute points. Von Staudt introduced this definition in Geometrie der Lage (1847) as part of his attempt to remove all metrical concepts from projective geometry.
A polarity, $\pi$, of a projective plane $P$ is an involutory bijection between the points and the lines of $P$ that preserves the incidence relation. Thus, a polarity associates a point $Q$ with a line $q$ by $\pi(Q) = q$ and $\pi(q) = Q$. Following Gergonne, $q$ is called the polar of $Q$ and $Q$ the pole of $q$. An absolute point (or line) of a polarity is one which is incident with its polar (pole).
A von Staudt conic in the real projective plane is equivalent to a Steiner conic.
Constructions
No continuous arc of a conic can be constructed with straightedge and compass. However, there are several straightedge-and-compass constructions for any number of individual points on an arc.
One of them is based on the converse of Pascal's theorem, namely, if the points of intersection of opposite sides of a hexagon are collinear, then the six vertices lie on a conic. Specifically, given five points $A, B, C, D, E$ and a line passing through $E$, say $EG$, a point $F$ that lies on this line and is on the conic determined by the five points can be constructed. Let $AB$ meet $DE$ in $L$, $BC$ meet $EG$ in $M$ and let $CD$ meet $LM$ at $N$. Then $AN$ meets $EG$ at the required point $F$. By varying the line through $E$, as many additional points on the conic as desired can be constructed.
Another method, based on Steiner's construction and which is useful in engineering applications, is the parallelogram method, where a conic is constructed point by point by means of connecting certain equally spaced points on a horizontal line and a vertical line. Specifically, to construct the ellipse with equation , first construct the rectangle with vertices and . Divide the side into equal segments and use parallel projection, with respect to the diagonal , to form equal segments on side (the lengths of these segments will be times the length of the segments on ). On the side label the left-hand endpoints of the segments with to starting at and going towards . On the side label the upper endpoints to starting at and going towards . The points of intersection, for will be points of the ellipse between and . The labeling associates the lines of the pencil through with the lines of the pencil through projectively but not perspectively. The sought for conic is obtained by this construction since three points and and two tangents (the vertical lines at and ) uniquely determine the conic. If another diameter (and its conjugate diameter) are used instead of the major and minor axes of the ellipse, a parallelogram that is not a rectangle is used in the construction, giving the name of the method. The association of lines of the pencils can be extended to obtain other points on the ellipse. The constructions for hyperbolas and parabolas are similar.
Yet another general method uses the polarity property to construct the tangent envelope of a conic (a line conic).
In complex geometry
In the complex coordinate plane $\mathbf{C}^2$, ellipses and hyperbolas are not distinct: one may consider a hyperbola as an ellipse with an imaginary axis length. For example, the ellipse $x^2 + y^2 = 1$ becomes a hyperbola under the substitution $y = iw$, geometrically a complex rotation, yielding $x^2 - w^2 = 1$. Thus there is a 2-way classification: ellipse/hyperbola and parabola. Extending the curves to the complex projective plane, this corresponds to intersecting the line at infinity in either 2 distinct points (corresponding to two asymptotes) or in 1 double point (corresponding to the axis of a parabola); thus the real hyperbola is a more suggestive real image for the complex ellipse/hyperbola, as it also has 2 (real) intersections with the line at infinity.
Further unification occurs in the complex projective plane $\mathbf{CP}^2$: the non-degenerate conics cannot be distinguished from one another, since any can be taken to any other by a projective linear transformation.
It can be proven that in $\mathbf{CP}^2$, two conic sections have four points in common (if one accounts for multiplicity), so there are between 1 and 4 intersection points. The intersection possibilities are: four distinct points, two distinct points and one double point, two double points, one single point and one with multiplicity 3, one point with multiplicity 4. If any intersection point has multiplicity > 1, the two curves are said to be tangent. If there is an intersection point of multiplicity at least 3, the two curves are said to be osculating. If there is only one intersection point, which has multiplicity 4, the two curves are said to be superosculating.
Furthermore, each straight line intersects each conic section twice. If the intersection point is double, the line is a tangent line.
Intersecting with the line at infinity, each conic section has two points at infinity. If these points are real, the curve is a hyperbola; if they are imaginary conjugates, it is an ellipse; if there is only one double point, it is a parabola. If the points at infinity are the cyclic points $(1, i, 0)$ and $(1, -i, 0)$, the conic section is a circle. If the coefficients of a conic section are real, the points at infinity are either real or complex conjugate.
Degenerate cases
What should be considered as a degenerate case of a conic depends on the definition being used and the geometric setting for the conic section. There are some authors who define a conic as a two-dimensional nondegenerate quadric. With this terminology there are no degenerate conics (only degenerate quadrics), but we shall use the more traditional terminology and avoid that definition.
In the Euclidean plane, using the geometric definition, a degenerate case arises when the cutting plane passes through the apex of the cone.
The degenerate conic is either: a point, when the plane intersects the cone only at the apex; a straight line, when the plane is tangent to the cone (it contains exactly one generator of the cone); or a pair of intersecting lines (two generators of the cone). These correspond respectively to the limiting forms of an ellipse, parabola, and a hyperbola.
If a conic in the Euclidean plane is being defined by the zeros of a quadratic equation (that is, as a quadric), then the degenerate conics are: the empty set, a point, or a pair of lines which may be parallel, intersect at a point, or coincide. The empty set case may correspond either to a pair of complex conjugate parallel lines, such as with the equation $x^2 + 1 = 0$, or to an imaginary ellipse, such as with the equation $x^2 + y^2 + 1 = 0.$ An imaginary ellipse does not satisfy the general definition of a degeneracy, and is thus not normally considered as degenerate. The two lines case occurs when the quadratic expression factors into two linear factors, the zeros of each giving a line. In the case that the factors are the same, the corresponding lines coincide and we refer to the line as a double line (a line with multiplicity 2); this is the previous case of a tangent cutting plane.
In the real projective plane, since parallel lines meet at a point on the line at infinity, the parallel line case of the Euclidean plane can be viewed as intersecting lines. However, as the point of intersection is the apex of the cone, the cone itself degenerates to a cylinder, i.e. with the apex at infinity. Other sections in this case are called cylindric sections. The non-degenerate cylindrical sections are ellipses (or circles).
When viewed from the perspective of the complex projective plane, the degenerate cases of a real quadric (i.e., the quadratic equation has real coefficients) can all be considered as a pair of lines, possibly coinciding. The empty set may be the line at infinity considered as a double line, a (real) point is the intersection of two complex conjugate lines and the other cases as previously mentioned.
To distinguish the degenerate cases from the non-degenerate cases (including the empty set with the latter) using matrix notation, let $\beta$ be the determinant of the 3 × 3 matrix of the conic section—that is, $\beta = \left(AC - \tfrac{B^2}{4}\right)F + \tfrac{BED - CD^2 - AE^2}{4}$; and let $\alpha = B^2 - 4AC$ be the discriminant. Then the conic section is non-degenerate if and only if $\beta \neq 0$. If $\beta = 0$ we have a point when $\alpha < 0$, two parallel lines (possibly coinciding) when $\alpha = 0$, or two intersecting lines when $\alpha > 0$.
Pencil of conics
A (non-degenerate) conic is completely determined by five points in general position (no three collinear) in a plane and the system of conics which pass through a fixed set of four points (again in a plane and no three collinear) is called a pencil of conics. The four common points are called the base points of the pencil. Through any point other than a base point, there passes a single conic of the pencil. This concept generalizes a pencil of circles.
Intersecting two conics
The solutions to a system of two second degree equations in two variables may be viewed as the coordinates of the points of intersection of two generic conic sections.
In particular two conics may possess none, two or four possibly coincident intersection points.
An efficient method of locating these solutions exploits the homogeneous matrix representation of conic sections, i.e. a 3 × 3 symmetric matrix which depends on six parameters.
The procedure to locate the intersection points follows these steps, where the conics are represented by matrices (a numerical sketch follows the list):
given the two conics $C_1$ and $C_2$, consider the pencil of conics given by their linear combination $\lambda C_1 + \mu C_2$
identify the homogeneous parameters $(\lambda, \mu)$ which correspond to the degenerate conic of the pencil. This can be done by imposing the condition that $\det(\lambda C_1 + \mu C_2) = 0$ and solving for $\lambda$ and $\mu$. These turn out to be the solutions of a third degree equation.
given the degenerate conic $C_0$, identify the two, possibly coincident, lines constituting it.
intersect each identified line with either one of the two original conics.
the points of intersection will represent the solutions to the initial equation system.
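Here is a minimal numerical sketch of the first two steps, assuming NumPy and two illustrative conics; extracting the individual lines from the degenerate member and intersecting them (the remaining steps) is omitted for brevity.

```python
# Find the degenerate members of the pencil lambda*C1 + mu*C2 by solving
# det(C1 + t*C2) = 0, a cubic in t (with mu normalized to 1 where possible).
import numpy as np

C1 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, -1]], float)   # x^2 + y^2 = 1
C2 = np.array([[1, 0, 0], [0, 4, 0], [0, 0, -1]], float)   # x^2 + 4y^2 = 1

# det(C1 + t*C2) is a cubic polynomial in t; sample at 4 points and fit it.
ts = np.array([0.0, 1.0, 2.0, 3.0])
vals = [np.linalg.det(C1 + t*C2) for t in ts]
coeffs = np.polyfit(ts, vals, 3)
print(np.roots(coeffs))   # real roots t give degenerate conics C1 + t*C2
# Each such degenerate conic is a pair of lines; intersecting those lines
# with C1 yields the common points of C1 and C2 (here (1,0) and (-1,0)).
```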
Generalizations
Conics may be defined over other fields (that is, in other pappian geometries). However, some care must be used when the field has characteristic 2, as some formulas can not be used. For example, the matrix representations used above require division by 2.
A generalization of a non-degenerate conic in a projective plane is an oval. An oval is a point set that has the following properties, which are held by conics: 1) any line intersects an oval in none, one or two points, 2) at any point of the oval there exists a unique tangent line.
Generalizing the focus properties of conics to the case where there are more than two foci produces sets called generalized conics.
The intersection of an elliptic cone with a sphere is a spherical conic, which shares many properties with planar conics.
In other areas of mathematics
The classification into elliptic, parabolic, and hyperbolic is pervasive in mathematics, and often divides a field into sharply distinct subfields. The classification mostly arises due to the presence of a quadratic form (in two variables this corresponds to the associated discriminant), but can also correspond to eccentricity.
Quadratic form classifications:
Quadratic forms: Quadratic forms over the reals are classified by Sylvester's law of inertia, namely by their positive index, zero index, and negative index: a quadratic form in $n$ variables can be converted to a diagonal form, as
$x_1^2 + x_2^2 + \cdots + x_k^2 - x_{k+1}^2 - \cdots - x_{k+l}^2,$
where the number of $+1$ coefficients, $k$, is the positive index, the number of $-1$ coefficients, $l$, is the negative index, and the remaining variables are the zero index $m$, so $k + l + m = n.$ In two variables the non-zero quadratic forms are classified as:
$x^2 + y^2$ — positive-definite (the negative $-x^2 - y^2$ is also included), corresponding to ellipses,
$x^2$ — degenerate, corresponding to parabolas, and
$x^2 - y^2$ — indefinite, corresponding to hyperbolas.
In two variables quadratic forms are classified by discriminant, analogously to conics, but in higher dimensions the more useful classification is as definite, (all positive or all negative), degenerate, (some zeros), or indefinite (mix of positive and negative but no zeros). This classification underlies many that follow.
Curvature: The Gaussian curvature of a surface describes the infinitesimal geometry, and may at each point be either positive – elliptic geometry, zero – Euclidean geometry (flat, parabola), or negative – hyperbolic geometry; infinitesimally, to second order the surface looks like the graph of $x^2 + y^2$, $x^2$ (or 0), or $x^2 - y^2$. Indeed, by the uniformization theorem every surface can be taken to be globally (at every point) positively curved, flat, or negatively curved. In higher dimensions the Riemann curvature tensor is a more complicated object, but manifolds with constant sectional curvature are interesting objects of study, and have strikingly different properties, as discussed at sectional curvature.
Second order PDEs: Partial differential equations (PDEs) of second order are classified at each point as elliptic, parabolic, or hyperbolic, accordingly as their second order terms correspond to an elliptic, parabolic, or hyperbolic quadratic form. The behavior and theory of these different types of PDEs are strikingly different – representative examples are that the Poisson equation is elliptic, the heat equation is parabolic, and the wave equation is hyperbolic.
Eccentricity classifications include:
Möbius transformations: Real Möbius transformations (elements of $\mathrm{PSL}(2, \mathbf{R})$ or its 2-fold cover, $\mathrm{SL}(2, \mathbf{R})$) are classified as elliptic, parabolic, or hyperbolic accordingly as their half-trace satisfies $0 \le |\operatorname{tr}|/2 < 1$, $|\operatorname{tr}|/2 = 1$, or $|\operatorname{tr}|/2 > 1$, mirroring the classification by eccentricity.
Variance-to-mean ratio: The variance-to-mean ratio classifies several important families of discrete probability distributions: the constant distribution as circular (eccentricity 0), binomial distributions as elliptical, Poisson distributions as parabolic, and negative binomial distributions as hyperbolic. This is elaborated at cumulants of some discrete probability distributions.
| Mathematics | Geometry | null |
3261205 | https://en.wikipedia.org/wiki/Landspout | Landspout |
Landspout is a term created by atmospheric scientist Howard B. Bluestein in 1985 for a tornado not associated with a mesocyclone. The Glossary of Meteorology defines a landspout:
"Colloquial expression describing tornadoes occurring with a parent cloud in its growth stage and with its vorticity originating in the boundary layer.
The parent cloud does not contain a preexisting mid-level mesocyclone. The landspout was so named because it looks like "a weak Florida Keys waterspout over land."
Landspouts are typically weaker than mesocyclone-associated tornadoes spawned within supercell thunderstorms, in which the strongest tornadoes form.
Characteristics
Landspouts are a type of tornado that forms during the growth stage of a cumulus congestus or occasionally a cumulonimbus cloud when an updraft stretches boundary layer vorticity upward into a vertical axis and tightens it into a strong vortex. Landspouts can also occur due to interactions from outflow boundaries, as they can occasionally cause enhanced convergence and vorticity at the surface. These generally are smaller and weaker than supercell tornadoes and do not form from a mesocyclone or pre-existing rotation in the cloud. Because of this lower depth, smaller size, and weaker intensity, landspouts are rarely detected by Doppler weather radar (NWS).
Landspouts share a strong resemblance and development process to that of waterspouts, usually taking the form of a translucent and highly laminar helical tube. "They are typically narrow, rope-like condensation funnels that form while the thunderstorm cloud is still growing and there is no rotating updraft", according to the National Weather Service. Landspouts are considered tornadoes since a rapidly rotating column of air is in contact with both the surface and a cumuliform cloud. Not all landspouts are visible, and many are first sighted as debris swirling at the surface before eventually filling in with condensation and dust.
Orography can influence landspout (and even mesocyclone tornado) formation. A notable example is the propensity for landspout occurrence in the Denver Convergence Vorticity Zone (DCVZ).
Life cycle
Forming in relation to misocyclones and under updrafts, a landspout generally lasts for less than 15 minutes; however, landspouts can persist substantially longer and produce significant damage. They tend to progress through recognizable stages of formation, maturation, and dissipation, and usually decay when a downdraft or significant precipitation (outflow) occurs nearby. They may form in lines or groups of multiple landspouts.
Damage
Landspouts are usually rated EF0, with relatively weak winds. However, winds inside a landspout can reach 100 miles per hour (mph).
| Physical sciences | Storms | Earth science |
3263679 | https://en.wikipedia.org/wiki/Microgeneration | Microgeneration | Microgeneration is the small-scale production of heat or electric power from a "low carbon source," as an alternative or supplement to traditional centralized grid-connected power.
Microgeneration technologies include small-scale wind turbines, micro hydro, solar PV systems, microbial fuel cells, ground source heat pumps, and micro combined heat and power installations. These technologies are often combined to form a hybrid power solution that can offer superior performance and lower cost than a system based on one generator.
History
In the United States, microgeneration had its roots in the 1973 oil crisis and the Yom Kippur War, which prompted innovation.
On June 20, 1979, 32 solar panels were installed at the White House. The solar cells were dismantled 7 years later during the Reagan administration.
The use of solar water heating dates back before 1900, with "the first practical solar cell being developed by Bell Labs in 1954." The "University of Delaware is credited with creating one of the first solar buildings, 'Solar One,' in 1973. The construction ran on a combination of solar thermal and solar photovoltaic power. The building didn't use solar panels; instead, solar was integrated into the rooftop."
Technologies and set-up
Power plant
In addition to the electricity production plant (e.g. wind turbine and solar panel), infrastructure for energy storage and power conversion and a hook-up to the regular electricity grid are usually needed or planned for. Although a hookup to the regular electricity grid is not essential, it helps to decrease costs by allowing financial recompensation schemes. In the developing world, however, the start-up cost for this equipment is generally too high, leaving no choice but to opt for alternative set-ups.
Extra equipment needed besides the power plant
The equipment required to set up a working system, whether for off-the-grid generation or for a hook-up to the electricity grid, is termed the balance of system; with PV systems it is composed of the following parts:
Energy storage apparatus
A major issue with off-grid solar and wind systems is that the power is often needed when the sun is not shining or when the wind is calm; such storage is generally not required for purely grid-connected systems:
a series of deep cycle, stationary or sealed maintenance free batteries (the most common solution)
or other means of energy storage (e.g. hydrogen fuel cells, Flywheel energy storage, pumped-storage hydroelectricity, compressed air tanks, ...)
a charge controller for charging the batteries or other energy storage
For converting DC battery power into AC as required for many appliances, or for feeding excess power into a commercial power grid:
an inverter or grid-interactive inverter. The whole is also sometimes referred to as "power conditioning equipment"
Safety equipment
Groundings, transfer switches or isolator switches and surge protectors. The whole is also sometimes referred to as "safety equipment"
Usually, in microgeneration for homes in the developing world, prefabricated house-wiring systems (such as wiring harnesses or prefabricated distribution units) are used instead. Simplified house-wiring boxes and cables, known as wiring harnesses, can simply be bought and mounted into the building without requiring much knowledge about the wiring itself. As such, even people without technical expertise are able to install them. They are also comparatively cheap and offer safety advantages.
battery meters (for charging rate and voltage), and meters for power consumption and electricity provision to the regular power grid
Wind turbine specific
With wind turbines, hydroelectric plants, ... the extra equipment needed is more or less the same as with PV-systems (depending on the type of wind turbine used), yet also include:
a manual disconnect switch
foundation for the tower
grounding system
shutoff and/or dummy-load devices for use in high wind when power generated exceeds current needs and storage system capacity.
Vibro-wind power
A new wind energy technology, called Vibro-Wind technology, is being developed that converts energy from wind-induced vibrations to electricity. It can use weaker winds than normal wind turbines require, and can be placed in almost any location.
A prototype consisted of a panel mounted with oscillators made out of pieces of foam. The conversion from mechanical to electrical energy is done using a piezoelectric transducer, a device made of a ceramic or polymer that emits electrons when stressed. The building of this prototype was led by Francis Moon, professor of mechanical and aerospace engineering at Cornell University. Moon's work in Vibro-Wind Technology was funded by the Atkinson Center for a Sustainable Future at Cornell. Vibro-wind power is not yet commercially viable and in early development stages. Significant progress will be needed to commercialize this early stage venture.
Possible set-ups
Several microgeneration set-ups are possible. These are:
Off-the-grid set-ups which include:
Off-the grid set-ups without energy storage (e.g., battery, ...)
Off-the grid set-ups with energy storage (e.g., battery, ...)
Battery charging stations
Grid-connected set-ups which include:
Grid connected with backup to power critical loads
Grid-connected set-ups without financial recompensation scheme
Grid-connected set-ups with net metering
Grid connected set-ups with net purchase and sale
All set-ups mentioned can work either on a single power plant or a combination of power plants (in which case it is called a hybrid power system).
For safety, grid-connected set-ups must automatically switch off or enter an "anti-islanding mode" when there is a failure of the mains power supply. For more about this, see the article on the condition of islanding.
Costs
Depending on the set-up chosen (financial recompensation scheme, power plant, extra equipment), prices may vary. According to Practical Action, microgeneration at home which uses the latest cost-saving technology (wiring harnesses, ready boards, cheap DIY power plants, e.g. DIY wind turbines) can make household expenditure extremely low; in fact, Practical Action mentions that many households in farming communities in the developing world spend less than $1 on electricity per month. However, if matters are handled less economically (using more commercial systems/approaches), costs will be dramatically higher. In most cases, microgeneration from renewable power plants will still be financially advantageous, often in the range of 50-90%, as local production avoids electricity transportation losses on long-distance power lines and energy losses from the Joule effect in transformers, where in general 8-15% of the energy is lost.
In the UK, the government offers both grants and feedback payments to help businesses, communities and private homes to install these technologies. Businesses can write the full cost of installation off against taxable profits whilst homeowners receive a flat-rate grant or payments per kW h of electricity generated and paid back into the national grid. Community organizations can also receive up to £200,000 in grant funding.
In the UK, the Microgeneration Certification Scheme provides approval for Microgeneration Installers and Products which is a mandatory requirement of funding schemes such as the Feed in Tariffs and Renewable Heat Incentive.
Grid parity
Grid parity (or socket parity) occurs when an alternative energy source can generate electricity at a levelized cost of energy (LCOE) that is less than or equal to the price of purchasing power from the electricity grid. Reaching grid parity is considered to be the point at which an energy source becomes a contender for widespread development without subsidies or government support. It is widely believed that a wholesale shift in a generation to these forms of energy will take place when they reach grid parity.
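A minimal sketch of such an LCOE comparison follows; every number below is a made-up assumption chosen only for illustration, not market data.

```python
# Illustrative levelized cost of energy (LCOE) check for grid parity.
def lcoe(capex, annual_opex, annual_kwh, years, discount_rate):
    """Discounted lifetime cost divided by discounted lifetime output, per kWh."""
    cost = capex + sum(annual_opex / (1 + discount_rate)**t
                       for t in range(1, years + 1))
    energy = sum(annual_kwh / (1 + discount_rate)**t
                 for t in range(1, years + 1))
    return cost / energy

solar = lcoe(capex=6000, annual_opex=50, annual_kwh=3500,
             years=25, discount_rate=0.04)
grid_price = 0.20    # assumed retail price per kWh
print(f"LCOE: {solar:.3f}/kWh; at grid parity: {solar <= grid_price}")
```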
Grid parity has been reached in some locations with on-shore wind power around 2000, and with solar power it was achieved for the first time in Spain in 2013.
Comparison with large-scale generation
Most forms of microgeneration can dynamically balance the supply and demand for electric power, by producing more power during periods of high demand and high grid prices, and less power during periods of low demand and low grid prices. This "hybridized grid" allows both microgeneration systems and large power plants to operate with greater energy efficiency and cost effectiveness than either could alone.
Domestic self-sufficiency
Microgeneration can be integrated as part of a self-sufficient house and is typically complemented with other technologies such as domestic food production systems (permaculture and agroecosystem), rainwater harvesting, composting toilets or even complete greywater treatment systems. Domestic microgeneration technologies include: photovoltaic solar systems, small-scale wind turbines, micro combined heat and power installations, biodiesel and biogas.
Private generation decentralizes the generation of electricity and may also centralize the pooling of surplus energy. While they have to be purchased, solar shingles and panels are both available. The capital cost is high, but it saves money in the long run. With appropriate power conversion, solar PV panels can run the same electric appliances as electricity from other sources.
Passive solar water heating is another effective method of utilizing solar power. The simplest method is the solar (or black plastic) bag: set it out in the sun and allow it to heat, which is perfect for a quick warm shower.
The 'breadbox' heater can be constructed easily with recycled materials and basic building experience. It consists of a single black tank, or an array of them, mounted inside a sturdy box insulated on the bottom and sides. The lid, either horizontal or angled to catch the most sun, should be well sealed and made of a transparent glazing material (glass, fiberglass, or high-temperature-resistant molded plastic). Cold water enters the tank near the bottom, heats, and rises to the top, where it is piped back into the home.
Ground source heat pumps exploit stable ground temperatures by benefiting from the thermal energy storage capacity of the ground. Typically ground source heat pumps have a high initial cost and are difficult to install by the average homeowner. They use electric motors to transfer heat from the ground with a high level of efficiency. The electricity may come from renewable sources or from external non-renewable sources.
Fuel
Biodiesel is an alternative fuel that can power diesel engines and can be used for domestic heating. Numerous forms of biomass, including soybeans, peanuts, and algae (which has the highest yield), can be used to make biodiesel. Recycled vegetable oil (from restaurants) can also be converted into biodiesel.
Biogas is another alternative fuel, created from the waste products of animals. Though less practical for most homes, a farm environment provides a perfect place to implement the process. By mixing the waste and water in a tank with space left for air, methane is produced naturally in the airspace. This methane can be piped out, burned, and used for a cookfire.
Government policy
Policymakers were accustomed to an energy system based on big, centralised projects like nuclear or gas-fired power stations. A change of mindset and new incentives are bringing microgeneration into the mainstream. Planning regulations may also require streamlining to facilitate the retrofitting of microgenerating facilities onto homes and buildings.
Most developed countries, including Canada (Alberta), the United Kingdom, Germany, Poland, Israel and the USA, have laws allowing microgenerated electricity to be sold into the national grid.
Alberta, Canada
In January 2009, the Government of Alberta's Micro-Generation Regulation came into effect, setting rules that allow Albertans to generate their own environmentally friendly electricity and receive credit for any power they send into the electricity grid.
Poland
In December 2014, the Polish government was set to vote on a bill calling for microgeneration, as well as large-scale wind farms in the Baltic Sea, as a solution to cut back on emissions from the country's coal plants and to reduce Polish dependence on Russian gas. Under the terms of the new bill, individuals and small businesses which generate up to 40 kW of 'green' energy will receive 100% of the market price for any electricity they feed back into the grid, and businesses who set up large-scale offshore wind farms in the Baltic will be eligible for subsidization by the state. Costs of implementing these new policies will be offset by the creation of a new tax on non-sustainable energy use.
United States
The United States has inconsistent energy generation policies across its 50 states. State energy policies and laws may vary significantly with location. Some states have imposed requirements on utilities that a certain percentage of total power generation be from renewable sources. For this purpose, renewable sources include wind, hydroelectric, and solar power whether from large or microgeneration projects. Further, in some areas transferable "renewable source energy" credits are needed by power companies to meet these mandates. As a result, in some portions of the United States, power companies will pay a portion of the cost of renewable source microgeneration projects in their service areas. These rebates are in addition to any Federal or State renewable-energy income-tax credits that may be applicable. In other areas, such rebates may differ or may not be available.
United Kingdom
The UK Government published its Microgeneration Strategy in March 2006, although it was seen as a disappointment by many commentators. In contrast, the Climate Change and Sustainable Energy Act 2006 has been viewed as a positive step. To replace earlier schemes, the Department of Trade and Industry (DTI) launched the Low Carbon Buildings Programme in April 2006, which provided grants to individuals, communities and businesses wishing to invest in microgenerating technologies. These schemes have been replaced in turn by new proposals from the Department for Energy and Climate Change (DECC) for clean energy cashback via Feed-In Tariffs for generating electricity from April 2010 and the Renewable Heat Incentive for generating renewable heat from 28 November 2011.
Feed-In Tariffs are intended to incentivise small-scale (less than 5 MW), low-carbon electricity generation. These feed-in tariffs work alongside the Renewables Obligation (RO), which will remain the primary mechanism to incentivise deployment of large-scale renewable electricity generation. The Renewable Heat Incentive (RHI) is intended to incentivise the generation of heat from renewable sources. From December 2011 the tariff for photovoltaics offered up to 21p per kWh, plus another 3p for the Export Tariff - an overall figure which could see a household earning back double what it currently pays for its electricity.
On 31 October 2011, the government announced a sudden cut in the feed-in tariff from 43.3p/kWh to 21p/kWh with the new tariff to apply to all new solar PV installations with an eligibility date on or after 12 December 2011.
Prominent British politicians who have announced they are fitting microgenerating facilities to their homes include the Conservative party leader, David Cameron, and the Labour Science Minister, Malcolm Wicks. These plans included small domestic sized wind turbines. Cameron, before becoming Prime Minister in the 2010 general elections, had been asked during an interview on BBC One's The Politics Show on October 29, 2006, if he would do the same should he get to 10 Downing Street. “If they’d let me, yes,” he replied.
In the December 2006 Pre-Budget Report the government announced that the sale of surplus electricity from installations designed for personal use, would not be subject to Income Tax. Legislation to this effect has been included in the Finance Bill 2007.
In popular culture
Several movies and TV shows such as The Mosquito Coast, Jericho, The Time Machine and Beverly Hills Family Robinson have done a great deal in raising interest in microgeneration among the general public. Websites such as Instructables and Practical Action propose DIY solutions that can lower the cost of microgeneration, thus increasing its popularity. Specialised magazines such as OtherPower and Home Power also provide practical advice and guidance.
| Technology | Power generation | null |
3263946 | https://en.wikipedia.org/wiki/Swamphen | Swamphen | Porphyrio is the swamphen or swamp hen bird genus in the rail family. It includes some smaller species of gallinules which are sometimes separated as genus Porphyrula or united with the gallinules proper (or "moorhens") in Gallinula. The Porphyrio gallinules are distributed in the warmer regions of the world. The group probably originated in Africa in the Middle Miocene, before spreading across the world in waves from the Late Miocene to Pleistocene.
The genus Porphyrio was introduced by the French zoologist Mathurin Jacques Brisson in 1760 with the western swamphen (Porphyrio porphyrio) as the type species. The genus name Porphyrio is the Latin name for "swamphen", meaning "purple".
Species
The genus contains ten extant species and two that have become extinct in historical times:
Extant species
Purple swamphen complex
Western swamphen, Porphyrio porphyrio
African swamphen, Porphyrio madagascariensis
Grey-headed swamphen, Porphyrio poliocephalus
Black-backed swamphen, Porphyrio indicus
Philippine swamphen, Porphyrio pulverulentus
Australasian swamphen, Porphyrio melanotus
South Island takahē, Porphyrio hochstetteri
Allen's gallinule, also known as lesser gallinule, Porphyrio alleni (formerly Porphyrula alleni)
American purple gallinule, Porphyrio martinica (formerly Porphyrula martinica)
Azure gallinule, Porphyrio flavirostris
Extinct species
White swamphen, or Lord Howe swamphen Porphyrio albus (early 19th century)
Réunion swamphen, or oiseau bleu, Porphyrio coerulescens (18th century, hypothetical species)
Marquesas swamphen, Porphyrio paepae (prehistoric or )
North Island takahē, or mōho, Porphyrio mantelli (prehistoric or 1890s)
New Caledonian swamphen, Porphyrio kukwiedei (prehistoric or more recent)
Huahine swamphen, Porphyrio mcnabi (prehistoric)
Buka swamphen, Porphyrio sp. (prehistoric)
Giant swamphen, Porphyrio sp. (prehistoric)
New Ireland swamphen, Porphyrio sp. (prehistoric)
Norfolk Island swamphen, Porphyrio sp. (prehistoric)
Rota swamphen, Porphyrio sp. (prehistoric)
Mangaia swamphen/woodhen, ?Porphyrio sp. (prehistoric) - would belong in Porphyrula, Gallinula or Pareudiastes
| Biology and health sciences | Gruiformes | Animals |
3264579 | https://en.wikipedia.org/wiki/Initial%20mass%20function | Initial mass function | In astronomy, the initial mass function (IMF) is an empirical function that describes the initial distribution of masses for a population of stars during star formation. IMF not only describes the formation and evolution of individual stars, it also serves as an important link that describes the formation and evolution of galaxies.
The IMF is often given as a probability density function (PDF) that describes the probability that a star has a certain mass at its formation. It differs from the present-day mass function (PDMF), which describes the current distribution of masses of stars, such as red giants, white dwarfs, neutron stars, and black holes, after some time of evolution away from the main sequence and after a certain amount of mass loss. Since there are not enough young clusters of stars available for the calculation of the IMF, the PDMF is used instead and the results are extrapolated back to the IMF. The IMF and PDMF can be linked through the "stellar creation function", defined as the number of stars per unit volume of space formed in a mass range and a time interval. In the case that all the main sequence stars have greater lifetimes than the galaxy, the IMF and PDMF are equivalent. Similarly, the IMF and PDMF are equivalent for brown dwarfs due to their effectively unlimited lifetimes.
The properties and evolution of a star are closely related to its mass, so the IMF is an important diagnostic tool for astronomers studying large quantities of stars. For example, the initial mass of a star is the primary factor of determining its colour, luminosity, radius, radiation spectrum, and quantity of materials and energy it emitted into interstellar space during its lifetime. At low masses, the IMF sets the Milky Way Galaxy mass budget and the number of substellar objects that form. At intermediate masses, the IMF controls chemical enrichment of the interstellar medium. At high masses, the IMF sets the number of core collapse supernovae that occur and therefore the kinetic energy feedback.
The IMF is relatively invariant from one group of stars to another, though some observations suggest that the IMF is different in different environments, and potentially dramatically different in early galaxies.
Development
The mass of a star can only be directly determined by applying Kepler's third law to a binary star system. However, the number of binary systems that can be directly observed is low, so there are not enough samples to estimate the initial mass function. Therefore, the stellar luminosity function is used to derive a mass function (a present-day mass function, PDMF) by applying the mass–luminosity relation. The luminosity function requires accurate determination of distances, and the most straightforward way is by measuring stellar parallax within 20 parsecs of the Earth. Although short distances yield a smaller number of samples with greater uncertainty of distances for stars with faint magnitudes (with a magnitude > 12 in the visual band), this approach reduces the error of distances for nearby stars and allows accurate determination of binary star systems. Since the magnitude of a star varies with its age, the determination of the mass–luminosity relation should also take into account its age. For stars with masses above , it takes more than 10 billion years for their magnitude to increase substantially. For low-mass stars with masses below , it takes 5 × 10⁸ years to reach the main sequence.
The IMF is often stated in terms of a series of power laws, where $N(m)\,dm$ (sometimes also represented as $\xi(m)\,\Delta m$), the number of stars with masses in the range $m$ to $m + dm$ within a specified volume of space, is proportional to $m^{-\alpha}$, where $\alpha$ is a dimensionless exponent.
Commonly used forms of the IMF are the Kroupa (2001) broken power law and the Chabrier (2003) log-normal.
Salpeter (1955)
Edwin E. Salpeter was the first astrophysicist to attempt to quantify the IMF by applying a power law to his equations. His work was based upon the sun-like stars that can be easily observed with great accuracy. Salpeter defined the mass function as the number of stars in a volume of space observed at a time per logarithmic mass interval. His work enabled a large number of theoretical parameters to be included in the equation while converging all of these parameters into an exponent of $2.35$. The Salpeter IMF is
$\xi(m)\,\Delta m = \xi_0 \left(\frac{m}{M_\odot}\right)^{-2.35} \left(\frac{\Delta m}{M_\odot}\right),$
where $\xi_0$ is a constant relating to the local stellar density.
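A minimal sketch of drawing stellar masses from a Salpeter-like power law by inverse-transform sampling; the mass limits and helper name are illustrative assumptions.

```python
# Sample masses (in solar masses) from xi(m) ∝ m^-2.35 on [m_min, m_max].
import numpy as np

def sample_salpeter(n, alpha=2.35, m_min=0.5, m_max=100.0, rng=None):
    rng = rng or np.random.default_rng(0)
    u = rng.random(n)
    a = 1.0 - alpha                  # exponent of the integrated power law
    # Invert the cumulative distribution of the truncated power law:
    return (m_min**a + u * (m_max**a - m_min**a))**(1.0 / a)

masses = sample_salpeter(100_000)
print(masses.mean())                 # low-mass stars dominate the sample
```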
Miller–Scalo (1979)
Glenn E. Miller and John M. Scalo extended the work of Salpeter by suggesting that the IMF "flattened" (the exponent decreased) when stellar masses fell below $1\,M_\odot$.
Kroupa (2002)
Pavel Kroupa kept $\alpha = 2.3$ above $0.5\,M_\odot$, but introduced $\alpha = 1.3$ between $0.08$ and $0.5\,M_\odot$ and $\alpha = 0.3$ below $0.08\,M_\odot$. Above $1\,M_\odot$, correcting for unresolved binary stars also adds a fourth domain with $\alpha = 2.7$.
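A minimal sketch of a Kroupa-style broken power law using the exponents quoted above; the normalization constants are chosen only so that the segments join continuously, and the overall scale is arbitrary.

```python
# Piecewise power law xi(m) ∝ m^-alpha with alpha = 0.3, 1.3, 2.3 below
# 0.08, between 0.08-0.5, and above 0.5 solar masses respectively.
def kroupa_xi(m):
    if m < 0.08:
        return m**-0.3
    if m < 0.5:
        return 0.08 * m**-1.3          # 0.08^(1.3-0.3) matches at m = 0.08
    return 0.08 * 0.5 * m**-2.3        # extra 0.5^(2.3-1.3) matches at m = 0.5

for m in (0.05, 0.3, 1.0, 10.0):
    print(m, kroupa_xi(m))
```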
Chabrier (2003)
Gilles Chabrier gave the following expression for the density of individual stars in the Galactic disk, in units of pc:
This expression is log-normal, meaning that the logarithm of the mass follows a Gaussian distribution up to .
For stellar systems (namely binaries), he gave:
Slope
The initial mass function is typically plotted on logarithmic axes, as log(N) against log(m). Such plots give approximately straight lines: if ξ(m) ∝ m^(−α), then the number of stars per logarithmic mass interval scales as m·ξ(m) ∝ m^(1−α), so the plotted line has slope Γ = 1 − α. Hence Γ is often called the slope of the initial mass function. The present-day mass function, for coeval formation, has the same slope except that it rolls off at higher masses, which have evolved away from the main sequence.
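As an illustration (a minimal sketch; the exponent and mass limits are assumed values, not from the source), the slope Γ can be recovered by fitting a straight line to a histogram of log-masses drawn from a power law:

```python
import numpy as np

# Draw masses from an assumed Salpeter-like power law (alpha = 2.35).
rng = np.random.default_rng(0)
alpha, m_min, m_max = 2.35, 0.5, 100.0
a = 1.0 - alpha
m = (m_min**a + rng.random(200_000) * (m_max**a - m_min**a)) ** (1.0 / a)

# Counts per logarithmic mass bin scale as m**(1 - alpha).
counts, edges = np.histogram(np.log10(m), bins=30)
centers = 0.5 * (edges[:-1] + edges[1:])
good = counts > 0                      # avoid log of empty bins
slope, _ = np.polyfit(centers[good], np.log10(counts[good]), 1)
print(f"fitted slope Gamma = {slope:.2f} (expected 1 - alpha = {1 - alpha:.2f})")
```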
Uncertainties
There are large uncertainties concerning the substellar region. In particular, the classical assumption of a single IMF covering the whole substellar and stellar mass range is being questioned in favor of a two-component IMF, to account for possible different formation modes for substellar objects—one component covering brown dwarfs and very-low-mass stars, and another ranging from the higher-mass brown dwarfs to the most massive stars. This leads to an overlap region, approximately between , in which both formation modes may account for the observed bodies.
Variation
Possible variation of the IMF affects the interpretation of galaxy signals and the estimation of the cosmic star formation history, and is therefore important to consider.
In theory, the IMF should vary with different star-forming conditions. A higher ambient temperature increases the mass of collapsing gas clouds (the Jeans mass), while lower gas metallicity reduces the radiation pressure and thus makes gas accretion easier; both lead to more massive stars being formed in a star cluster. The galaxy-wide IMF can differ from the star-cluster-scale IMF and may change systematically with the galaxy's star formation history.
Measurements of the local universe, where single stars can be resolved, are consistent with an invariant IMF, but the conclusion suffers from large measurement uncertainty due to the small number of massive stars and the difficulty of distinguishing binary systems from single stars. Thus, any IMF variation is not prominent enough to be observed in the local universe. However, recent photometric surveys across cosmic time do suggest a potentially systematic variation of the IMF at high redshift.
Systems formed at much earlier times, or farther from the galactic neighborhood, where star formation activity can be hundreds or even thousands of times stronger than in the current Milky Way, may give a better understanding. It has been consistently reported, both for star clusters and for galaxies, that there seems to be a systematic variation of the IMF. However, the measurements are less direct. For star clusters, the IMF may also change over time due to complicated dynamical evolution.
Origin of the Stellar IMF
Recent studies have suggested that filamentary structures in molecular clouds play a crucial role in the initial conditions of star formation and the origin of the stellar IMF. Herschel observations of the California giant molecular cloud show that both the prestellar core mass function (CMF) and the filament line mass function (FLMF) follow power-law distributions at the high-mass end, consistent with the Salpeter power-law IMF. Specifically, the CMF follows for masses greater than , and the FLMF follows for filament line masses greater than . Recent research suggests that the global prestellar CMF in molecular clouds is the result of the integration of CMFs generated by individual thermally supercritical filaments, which indicates a tight connection between the FLMF and the CMF/IMF, supporting the idea that filamentary structures are a critical evolutionary step in establishing a Salpeter-like mass function.
| Physical sciences | Stellar astronomy | Astronomy |
4428626 | https://en.wikipedia.org/wiki/Rhyniophyte | Rhyniophyte | The rhyniophytes are a group of extinct early vascular plants that are considered to be similar to the genus Rhynia, found in the Early Devonian (around ). Sources vary in the name and rank used for this group, some treating it as the class Rhyniopsida, others as the subdivision Rhyniophytina or the division Rhyniophyta. The first definition of the group, under the name Rhyniophytina, was by Banks, since when there have been many redefinitions, including by Banks himself. "As a result, the Rhyniophytina have slowly dissolved into a heterogeneous collection of plants ... the group contains only one species on which all authors agree: the type species Rhynia gwynne-vaughanii". When defined very broadly, the group consists of plants with dichotomously branched, naked aerial axes ("stems") with terminal spore-bearing structures (sporangia). The rhyniophytes are considered to be stem group tracheophytes (vascular plants).
Definitions
The group was described as a subdivision of the division Tracheophyta by Harlan Parker Banks in 1968 under the name Rhyniophytina. The original definition was: "plants with naked (lacking emergences), dichotomizing axes bearing sporangia that are terminal, usually fusiform and may dehisce longitudinally; they are diminutive plants and, in so far as is known, have a small terete xylem strand with a central protoxylem." With this definition, they are polysporangiophytes, since their sporophytes consisted of branched stems bearing sporangia (spore-forming organs). They lacked leaves or true roots but did have simple vascular tissue. Informally, they are often called rhyniophytes or, as mentioned below, rhyniophytoids.
However, as originally circumscribed, the group was found not to be monophyletic since some of its members are now known to lack vascular tissue. The definition that seems to be used most often now is that of D. Edwards and D.S. Edwards: "plants with smooth axes, lacking well-defined spines or leaves, showing a variety of branching patterns that may be isotomous, anisotomous, pseudomonopodial or adventitious. Elongate to globose sporangia were terminal on main axes or on lateral systems showing limited branching. It seems probable that the xylem, comprising a solid strand of tracheids, was centrarch." However, Edwards and Edwards also decided to include rhyniophytoids, plants which "look like rhyniophytes, but cannot be assigned unequivocally to that group because of inadequate anatomical preservation", but exclude plants like Aglaophyton and Horneophyton which definitely do not possess tracheids.
In 1966, slightly before Banks created the subdivision, the group was treated as a division under the name Rhyniophyta. Taylor et al. in their book Paleobotany use Rhyniophyta as a formal taxon, but with a loose definition: plants "characterized by dichotomously branched, naked aerial axes with terminal sporangia". They thus include under "other rhyniophytes" plants apparently without vascular tissue.
In 2010, the name paratracheophytes was suggested, to distinguish such plants from 'true' tracheophytes or eutracheophytes.
In 2013, Hao and Xue returned to the earlier definition. Their class Rhyniopsida (rhyniopsids) is defined by the presence of sporangia that terminate isotomous branching systems (i.e. the plants have branching patterns in which the branches are equally sized, rather than one branch dominating, like the trunk of a tree). The shape and symmetry of the sporangia was then used to divide up the group. Rhynialeans (order Rhyniales), such as Rhynia gwynne-vaughanii, Stockmansella and Huvenia, had radially symmetrical sporangia that were longer than wide and possessed vascular tissue with S-type tracheids. Cooksonioids, such as Cooksonia pertoni, C. paranensis and C. hemisphaerica, had radially symmetrical or trumpet-shaped sporangia, without clear evidence of vascular tissue. Renalioids, such as Aberlemnia, Cooksonia crassiparietilis and Renalia had bilaterally symmetrical sporangia and protosteles.
Taxonomy
There is no agreement on the formal classification to be used for the rhyniophytes. The following are some of the names which may be used:
Division Rhyniophyta
Subdivision Rhyniophytina Banks (1968)
Class Rhyniopsida Kryshtofovich (1925)
Order Rhyniales Němejc (1950)
Family Rhyniaceae Kidston & Lang (1920)
Phylogeny
In 2004, Crane et al. published a cladogram for the polysporangiophytes in which the Rhyniaceae are shown as the sister group of all other tracheophytes (vascular plants). Some other former "rhyniophytes", such as Horneophyton and Aglaophyton, are placed outside the tracheophyte clade, as they did not possess true vascular tissue (in particular did not have tracheids). However, both Horneophyton and Aglaophyton have been tentatively classified as tracheophytes in at least one recent cladistic analysis of Early Devonian land plants.
Partial cladogram by Crane et al. including the more certain rhyniophytes:
(See the Polysporangiophyte article for the expanded cladogram.)
Genera
The taxon and informal terms corresponding to it have been used in different ways. Hao and Xue in 2013 circumscribed their Rhyniopsida quite broadly, dividing it into rhynialeans, cooksonioids and renalioids. Genera included by Hao and Xue are listed below, with assignments to their three subgroups where these are given.
Aberlemnia (renalioids)
Aglaophyton (rhynialeans)
Caia
Cooksonia (cooksonioids + renalioids)
Culullitheca
Eogaspesiea (= Eogaspesia) (rhynialeans)
Eorhynia
Filiformorama
Fusitheca (= Fusiformitheca)
Grisellatheca
Hsua (=Hsüa) (renalioids)
Huia
Huvenia (rhynialeans)
Junggaria (= Cooksonella, Eocooksonia)
Pertonella
Renalia (renalioids)
Resilitheca
Rhynia (rhynialeans)
Salopella (rhynialeans?)
Sartilmania
Sennicaulis
Sporathylacium
Steganotheca
Stockmansella (rhynialeans)
Tarrantia (rhynialeans?)
Tortilicaulis
Uskiella (rhynialeans)
It has been suggested that the poorly preserved Eohostimella, found in deposits of Early Silurian age (Llandovery, around ), may also be a rhyniophyte. Others have placed some of these genera in different groups. For example, Tortilicaulis has been considered to be a horneophyte.
Rhynie flora
The general term "rhyniophytes" or "rhyniophytoids" is sometimes used for the assemblage of plants found in the Rhynie chert Lagerstätte (rich fossil beds in Aberdeenshire, Scotland) and roughly coeval sites with similar flora. Used in this way, these terms refer to a floristic assemblage of more or less related early land plants, not a taxon. Though the rhyniophytes are well represented, plants with simpler anatomy, like Aglaophyton, are also common; there are also more complex plants, like Asteroxylon, which has a very early form of leaves.
| Biology and health sciences | Pteridophytes | Plants |
4432524 | https://en.wikipedia.org/wiki/Waterborne%20disease | Waterborne disease | Waterborne diseases are conditions (meaning adverse effects on human health, such as death, disability, illness or disorders) caused by pathogenic micro-organisms that are transmitted by water. These diseases can be spread while bathing, washing, or drinking water, or by eating food exposed to contaminated water. They are a pressing issue in rural areas of developing countries all over the world. While diarrhea and vomiting are the most commonly reported symptoms of waterborne illness, other symptoms can include skin, ear, respiratory, or eye problems. Lack of clean water supply, sanitation and hygiene (WASH) is a major cause of the spread of waterborne diseases in a community. Therefore, reliable access to clean drinking water and sanitation is the main way to prevent waterborne diseases.
Microorganisms causing diseases that characteristically are waterborne prominently include protozoa and bacteria, many of which are intestinal parasites, or invade the tissues or circulatory system through walls of the digestive tract. Various other waterborne diseases are caused by viruses.
Yet other important classes of waterborne diseases are caused by metazoan parasites. Typical examples include certain Nematoda, that is to say "roundworms". As an example of waterborne Nematode infections, one important waterborne nematode disease is Dracunculiasis. It is acquired by swallowing water in which certain copepoda occur that act as vectors for the Nematoda. Anyone swallowing a copepod that happens to be infected with Nematode larvae in the genus Dracunculus, becomes liable to infection. The larvae cause guinea worm disease.
Another class of waterborne metazoan pathogens are certain members of the Schistosomatidae, a family of blood flukes. They usually infect people that make skin contact with the water. Blood flukes are pathogens that cause Schistosomiasis of various forms, more or less seriously affecting hundreds of millions of people worldwide.
Terminology
The term waterborne disease is reserved largely for infections that predominantly are transmitted through contact with or consumption of microbially polluted water. Many infections may be transmitted by microbes or parasites that accidentally, possibly as a result of exceptional circumstances, have entered the water. However, the fact that there might be an occasional infection need not mean that it is useful to categorize the resulting disease as "waterborne". Nor is it common practice to refer to diseases such as malaria as "waterborne" just because mosquitoes have aquatic phases in their life cycles, or because treating the water they inhabit happens to be an effective strategy in control of the mosquitoes that are the vectors.
A related term is "water-related disease" which is defined as "any significant or widespread adverse effects on human health, such as death, disability, illness or disorders, caused directly or indirectly by the condition, or changes in the quantity or quality of any water". Water-related diseases are grouped according to their transmission mechanism: water borne, water hygiene, water based, water related. The main transmission mode for waterborne diseases is ingestion of contaminated water.
Causes
Lack of clean water supply, sanitation and hygiene (WASH) is a major cause of the spread of waterborne diseases in a community. The fecal–oral route is a disease transmission pathway for waterborne diseases. Poverty also increases the risk that communities will be affected by waterborne diseases; for example, the economic level of a community affects its ability to access clean water. Less developed countries may be more at risk of outbreaks of waterborne diseases, but more developed regions are also at risk of waterborne disease outbreaks.
Influence of climate change
Diseases by type of pathogen
Protozoa
Bacteria
Viruses
Algae
Parasitic worms
Prevention
Reliable access to clean drinking water and sanitation is the main method to prevent waterborne diseases. The aim is to break the fecal–oral route of disease transmission.
Epidemiology
According to the World Health Organization, waterborne diseases account for an estimated 3.6% of the total DALY (disability-adjusted life year) global burden of disease, and cause about 1.5 million human deaths annually. The World Health Organization estimates that 58% of that burden, or 842,000 deaths per year, is attributable to a lack of safe drinking water supply, sanitation and hygiene (summarized as WASH).
United States
The Waterborne Disease and Outbreak Surveillance System (WBDOSS) is the principal database used to identify the causative agents, deficiencies, water systems, and sources associated with waterborne disease and outbreaks in the United States. Since 1971, the Centers for Disease Control and Prevention (CDC), the Council of State and Territorial Epidemiologists (CSTE), and the US Environmental Protection Agency (EPA) have maintained this surveillance system for collecting and reporting data on "waterborne disease and outbreaks associated with recreational water, drinking water, environmental, and undetermined exposures to water." "Data from WBDOSS have supported EPA efforts to develop drinking water regulations and have provided guidance for CDC's recreational water activities."
WBDOSS relies on complete and accurate data from public health departments in individual states, territories, and other U.S. jurisdictions regarding waterborne disease and outbreak activity. In 2009, reporting to the WBDOSS transitioned from a paper form to the electronic National Outbreak Reporting System (NORS). Annual or biennial surveillance reports of the data collected by the WBDOSS have been published in CDC reports from 1971 to 1984; since 1985, surveillance data have been published in the Morbidity and Mortality Weekly Report (MMWR).
WBDOSS and the public health community work together to investigate the causes of water contamination leading to waterborne disease outbreaks and to respond to those outbreaks: the public health community investigates the outbreaks, and WBDOSS receives the resulting reports.
Society and culture
Socioeconomic impact
Waterborne diseases can have a significant impact on the economy. People who are infected by a waterborne disease are usually confronted with related healthcare costs. This is especially the case in developing countries, where, on average, a family spends about 10% of its monthly household income per person infected.
History
Waterborne diseases were once wrongly explained by the miasma theory, the theory that bad air causes the spread of diseases. However, people began to find a correlation between water quality and waterborne diseases, which led to different water purification methods, such as sand filtration and the chlorination of drinking water. The founders of microscopy, Antonie van Leeuwenhoek and Robert Hooke, used the newly invented microscope to observe for the first time small material particles suspended in water, laying the groundwork for the future understanding of waterborne pathogens and waterborne diseases.
| Biology and health sciences | Concepts | Health |
7682300 | https://en.wikipedia.org/wiki/Nothofagus%20fusca | Nothofagus fusca | Nothofagus fusca, commonly known as red beech (Māori: tawhai raunui) is a species of southern beech, endemic to New Zealand, occurring on both the North and South Island. It is generally found on lower hills and inland valley floors where soil is fertile and well drained. In New Zealand the species is called Fuscospora fusca.
It is a medium-sized evergreen tree growing to 35 m tall. The leaves are alternately arranged, broad ovoid, 2 to 4 cm long and 1.5 to 3 cm broad, the margin distinctively double-toothed with each lobe bearing two teeth. The fruit is a small cupule containing three seeds.
Pollen from the tree has been found near the Antarctic Peninsula, showing that it formerly grew in Antarctica, as early as the Eocene epoch. Red beech is not currently considered threatened.
Uses
Red beech is the only known plant source, apart from rooibos (Aspalathus linearis), of the C-linked dihydrochalcone glycoside nothofagin.
It is also grown as an ornamental tree in regions with a mild oceanic climate due to its attractive leaf shape. It has been planted in Scotland and on the northern Pacific coast of the United States. The red beech's wood is the most durable of all the New Zealand beeches. It was often used for flooring in many parts of New Zealand. The timber is exceptionally stable when dried to appropriate moisture values. The average density of red beech at 12 percent moisture content is 630 kilograms per cubic metre.
Hybrids
Red beech hybridises with mountain beech (Nothofagus cliffortioides) to form the hybrid species Nothofagus ×blairii.
Red beech hybridises with black beech (Nothofagus solandri) to form the hybrid species Nothofagus ×dubia.
Red beech hybridises with the ruil tree (Nothofagus alessandrii) from Chile to form the hybrid species Nothofagus ×eugenananus.
| Biology and health sciences | Fagales | Plants |
7685074 | https://en.wikipedia.org/wiki/Chachalaca | Chachalaca | Chachalacas are galliform birds from the genus Ortalis. These birds are found in wooded habitats in the far southern United States (Texas), Mexico, and Central and South America. They are social, can be very noisy and often remain fairly common even near humans, as their relatively small size makes them less desirable to hunters than their larger relatives. As agricultural pests, they have a ravenous appetite for tomatoes, melons, beans, and radishes and can ravage a small garden in short order. They travel in packs of six to twelve. Their nests are made of sticks, twigs, leaves, or moss and are generally frail, flat structures only a few feet above the ground. During April, they lay from three to five buffy white eggs, the shell of which is very rough and hard. They somewhat resemble the guans, and the two have commonly been placed in a subfamily together, though the chachalacas are probably closer to the curassows.
Taxonomy
The genus Ortalis was introduced (as Ortalida) by the German naturalist Blasius Merrem in 1786 with the little chachalaca (Ortalis motmot) as the type species. The generic name is derived from the Ancient Greek word όρταλις, meaning "pullet" or "domestic hen." The common name derives from the Nahuatl verb chachalaca, meaning "to chatter." With a glottal stop at the end, chachalacah was an alternate name for the bird known as the chachalahtli. All these words likely arose as an onomatopoeia for the four-noted cackle of the plain chachalaca (O. vetula). The genus contains 16 species.
Mitochondrial and nuclear DNA sequence data tentatively suggest that the chachalacas emerged as a distinct lineage during the Oligocene, somewhere around 40–20 mya, possibly being the first lineage of modern cracids to evolve; this does agree with the known fossil record – including indeterminate, cracid-like birds – which very cautiously favors a north-to-south expansion of the family.
Species
Prehistoric species
The cracids have a very poor fossil record, essentially being limited to a few chachalacas. The prehistoric species of the present genus, however, indicate that chachalacas most likely evolved in North or northern Central America:
Ortalis tantala (Early Miocene of Nebraska, USA)
Ortalis pollicaris (Flint Hill Middle Miocene of South Dakota, USA)
Ortalis affinis (Ogallala Early Pliocene of Trego County, Kansas, USA)
Ortalis phengites (Snake Creek Early Pliocene of Sioux County, Nebraska, USA)
The Early Miocene fossil Boreortalis from Florida is also a chachalaca; it may actually be referable to the extant genus.
| Biology and health sciences | Galliformes | Animals |
674489 | https://en.wikipedia.org/wiki/No-till%20farming | No-till farming | No-till farming (also known as zero tillage or direct drilling) is an agricultural technique for growing crops or pasture without disturbing the soil through tillage. No-till farming decreases the amount of soil erosion tillage causes in certain soils, especially in sandy and dry soils on sloping terrain. Other possible benefits include an increase in the amount of water that infiltrates into the soil, soil retention of organic matter, and nutrient cycling. These methods may increase the amount and variety of life in and on the soil. While conventional no-tillage systems use herbicides to control weeds, organic systems use a combination of strategies, such as planting cover crops as mulch to suppress weeds.
There are three basic methods of no-till farming. "Sod seeding" is when crops are sown with seeding machinery into a sod produced by applying herbicides on a cover crop (killing that vegetation). "Direct seeding" is when crops are sown through the residue of previous crop. "Surface seeding" or "direct seeding" is when seeds are left on the surface of the soil; on flatlands, this requires no machinery and minimal labor.
While no-till is agronomically advantageous and results in higher yields, farmers wishing to adapt the system face a number of challenges. Established farms may have to face a learning curve, buy new equipment, and deal with new field conditions. Perhaps the biggest impediment, especially for grains, is that farmers can no longer rely on the mechanical pest and weed control that occurs when crop residue is buried to significant depths. No-till farmers must rely on chemicals, biological pest control, cover cropping, and more intensive management of fields.
Tillage is dominant in agriculture today, but no-till methods may have success in some contexts. In some cases minimum tillage or "low-till" methods combine till and no-till methods. For example, some approaches may use shallow cultivation (i.e. using a disc harrow) but no plowing or use strip tillage.
Background
Tillage is the agricultural preparation of soil by mechanical agitation, typically removing weeds established in the previous season. Tilling can create a flat seed bed or one that has formed areas, such as rows or raised beds, to enhance the growth of desired plants. It is an ancient technique with clear evidence of its use since at least 3000 B.C.
No-till farming is not equivalent to conservation tillage or strip tillage. Conservation tillage is a group of practices that reduce the amount of tillage needed. No-till and strip tillage are both forms of conservation tillage. No-till is the practice of never tilling a field. Tilling every other year is called rotational tillage.
The effects of tillage can include soil compaction; loss of organic matter; degradation of soil aggregates; death or disruption of soil microbes and other organisms including mycorrhizae, arthropods, and earthworms; and soil erosion where topsoil is washed or blown away.
Origin
The practice of no-till farming is a combination of different ideas developed over time; many of the techniques and principles used in no-till farming are a continuation of traditional market gardening found in various regions, such as France. A formalized opposition to plowing started in the 1940s with Edward H. Faulkner, author of Plowman's Folly. In that book, however, Faulkner only criticizes the deeper moldboard plow and its action, not surface tillage. It was not until the development after WWII of powerful herbicides such as paraquat that various researchers and farmers started to try out the idea. The first adopters of no-till include Klingman (North Carolina), Edward Faulkner, L. A. Porter (New Zealand), Harry and Lawrence Young (Herndon, Kentucky), and the Instituto de Pesquisas Agropecuarias Meridional (1971 in Brazil) with Herbert Bartz.
Adoption across the world
Land under no-till farming has increased across the world. In 1999, about was under no-till farming worldwide, which increased to in 2003 and to in 2009.
Australia
Per figures from the Australian Bureau of Statistics (ABS) Agricultural Resource Management Survey, the percentage of Australian agricultural land under no-till farming methods was 26% in 2000–01, which more than doubled to 57% in 2007–08. As at 30 June 2017, 79% (or 16 million hectares) of the crop land cultivated received no cultivation. Similarly, 70% (or 2 million hectares) of the 3 million hectares of pasture land cultivated received no cultivation apart from sowing.
South America
South America had the highest adoption of No-till farming in the world, which in 2014 constituted 47% of the total global area under no-till farming.
The countries with highest adoption are Argentina (80%), Brazil (50%), Paraguay (90%), and Uruguay (82%).
In Argentina the usage of no-till resulted in reduction of soil erosion losses by 80%, cost reductions by more than 50% and increased farm incomes.
In Brazil the usage of no-till resulted in reduction of soil erosion losses by 97%, higher farm productivity and income increase by 57% five years after the starting of no-till farming.
In Paraguay, net farm incomes increased by 77% after adoption of no-till farming.
United States
No-till farming is used in the United States and the area managed in this way continues to grow. This growth is supported by a decrease in costs. No-till management results in fewer passes with equipment, and the crop residue prevents evaporation of rainfall and increases water infiltration into the soil.
In 2017, no-till farming was being used in about 21% of the cultivated cropland in the US. By 2023, farmland following strict no-tillage principles comprised roughly 30% of the cropland in the U.S.
Benefits and issues
Profit, economics, yield
Some studies have found that no-till farming can be more profitable in some cases.
In some cases it may reduce labour, fuel, irrigation and machinery costs. No-till can increase yield because of higher water infiltration and storage capacity, and less erosion. Another possible benefit is that because of the higher water content, instead of leaving a field fallow it can make economic sense to plant another crop instead.
A problem with no-till farming is that the soil warms and dries more slowly in spring, which may delay planting; harvest can thus occur later than in a conventionally tilled field. The slower warming occurs because crop residue is lighter in color than the soil exposed by conventional tillage, and so absorbs less solar energy. This can be managed by using row cleaners on a planter.
Another problem with no-till farming is that if production is negatively impacted by the implemented process, the practice's profitability may decrease, especially under increasing fuel prices and high labor costs. Conversely, as the prices of fuel and labor continue to rise, it may become more practical for farms to turn toward a no-till operation, which requires fewer field passes. In spring, poorly draining clay soil may produce less in a cold and wet year.
The economic and ecological benefits of implementing no-till practices can take sixteen to nineteen years to materialize. The first decade of no-till implementation often shows decreased revenue; implementation periods of more than ten years usually show a profit gain rather than a decrease in profitability.
Costs and management
No-till farming requires some different skills from those of conventional agriculture. A combination of techniques, equipment, pesticides, crop rotation, fertilization, and irrigation have to be used for local conditions.
Equipment
On some crops, like continuous no-till corn, the thickness of the residue on the field's surface can become problematic without proper preparation and equipment. No-till farming requires specialized seeding equipment, such as a heavier seed drill, to penetrate the residue. Ploughing requires more powerful tractors, so tractors can be smaller under no-tillage. Costs can be offset by selling ploughs and tractors, but farmers often keep their old equipment while trying out no-till farming, which results in a higher investment in equipment.
Increased herbicide use
One of the purposes of tilling is to remove weeds. With no-till farming, residue from the previous year's crops lie on the surface of the field, which can cause different, greater, or more frequent disease or weed problems compared to tillage farming. Faster growing weeds can be reduced by increased competition with eventual growth of perennials, shrubs and trees. Herbicides such as glyphosate are commonly used in place of tillage for seedbed preparation, which leads to more herbicide use in comparison to conventional tillage. Alternatives include winter cover crops, soil solarization, or burning.
The use of herbicides is not strictly necessary, as demonstrated in natural farming, permaculture, and other practices related to sustainable agriculture.
The use of cover crops to help control weeds also increases organic residue in the soil (and nutrients, when using legumes). Cover crops then need to be killed so that the newly planted crops can get enough light, water, nutrients, etc. This can be done by rollers, crimpers, choppers and other ways. The residue is then planted through, and left as a mulch. Cover crops typically must be crimped when they enter the flowering stage.
Fertilizer
One of the most common yield reducers is nitrogen being immobilized in the crop residue, which can take a few months to several years to decompose, depending on the crop's C to N ratio and the local environment. Fertilizer needs to be applied at a higher rate. An innovative solution to this problem is to integrate animal husbandry in various ways to aid in decomposition. After a transition period (4–5 years for Kansas, USA) the soil may build up in organic matter. Nutrients in the organic matter are eventually released into the soil.
Environmental policy
A legislative bill of the 117th Congress, H.R.2508, also known as the NO EMITS Act, was introduced by Representative Rodney Davis of Illinois in 2021 to amend the Food Security Act of 1985. Davis is a member of the House Committee on Agriculture. The bill proposes ways of offsetting agriculture-related emissions, such as implementing minimal tillage or no tillage. H.R.2508 is currently under referral to the House Committee on Agriculture, and is backed by two other representatives from strongly agricultural states, Rep. Eric A. Crawford of Arkansas and Rep. Don Bacon of Nebraska. The bill proposes incentive programs that would provide financial and mechanical assistance to farmers and agricultural operations that transition their production processes, as well as contacts to lower risk for producers. Funding has also been proposed for Conservation Innovation Trials.
Farmers within the U.S. are encouraged through subsidies and other government programs to meet a defined level of tillage conservation. Such programs include the Environmental Quality Incentives Program (EQIP) and the Conservation Stewardship Program (CSP). EQIP is a voluntary program that attempts to help farmers and other participants adopt conservation practices without suffering financially from doing so; its efforts aim to reduce contamination from the agricultural industry and to improve soil health. CSP attempts to assist those looking to implement conservation efforts in their practices by suggesting what might be done for their circumstances and needs.
Environmental
Greenhouse gases
No-till farming has been claimed to increase soil organic matter, and thus increase carbon sequestration. While many studies report soil organic carbon increases in no-till systems, others conclude that these effects may not be observed in all systems, depending on factors such as climate and topsoil carbon content. A 2020 study demonstrated that the combination of no-till and cover cropping could be an effective approach to climate change mitigation by sequestering more carbon than either practice alone, suggesting that the two practices have a synergistic effect in carbon capture.
There is debate over whether the increased sequestration sometimes detected is actually occurring or is due to flawed testing methods or other factors. A 2014 study claimed that certain no-till systems may sequester less carbon than conventional tillage systems, saying that the “no-till subsurface layer is often losing more soil organic carbon stock over time than is gained in the surface layer.” The study also highlighted the need for a uniform definition of soil organic carbon sequestration among researchers. The study concludes, "Additional investments in soil organic carbon (SOC) research is needed to understand better the agricultural management practices that are most likely to sequester SOC or at least retain more net SOC stocks."
No-till farming reduces nitrous oxide (N2O) emissions by 40-70%, depending on rotation. Nitrous oxide is a potent greenhouse gas, 300 times stronger than , and stays in the atmosphere for 120 years.
Soil and desertification
No-till farming improves aggregates and reduces erosion. Soil erosion might be reduced almost to soil production rates.
Research from over 19 years of tillage studies at the United States Department of Agriculture Agricultural Research Service found that no-till farming makes soil less erodible than ploughed soil in areas of the Great Plains. The first inch of no-till soil contains more aggregates and is two to seven times less vulnerable than that of ploughed soil. More organic matter in this layer is thought to help hold soil particles together.
As per the Food and Agriculture Organization (FAO) of the United Nations, no-till farming can stop desertification by maintaining soil organic matter and reducing wind and water erosion.
No ploughing also means less airborne dust.
Water
No-till farming improves water retention: crop residues help water from natural precipitation and irrigation infiltrate the soil, and they limit evaporation, conserving water. Tilling, by contrast, increases evaporative water loss by around 1/3 to 3/4 inch (0.85 to 1.9 cm) per pass.
Gully formation can cause soil erosion in some crops, such as soybeans with no-tillage, although models of other crops under no-tillage show less erosion than conventional tillage. Grass waterways can be a solution. Any gullies that form in fields not being tilled get deeper each year instead of being smoothed out by regular plowing.
A problem in some fields is water saturation in soils. Switching to no-till farming may increase drainage, because the soil under continuous no-till has a higher water infiltration rate.
Biota and wildlife
No-tilled fields often have more annelids, invertebrates and wildlife such as deer mice.
Albedo
Tillage lowers the albedo of croplands. The potential for global cooling as a result of increased albedo in no-till croplands is similar in magnitude to other biogeochemical carbon sequestration processes.
| Technology | Soil and soil management | null |
674776 | https://en.wikipedia.org/wiki/Pelvic%20floor | Pelvic floor | The pelvic floor or pelvic diaphragm is an anatomical location in the human body which has an important role in urinary and anal continence, sexual function, and support of the pelvic organs. The pelvic floor includes muscles (both skeletal and smooth), ligaments and fascia, and separates the pelvic cavity above from the perineum below. It is formed by the levator ani muscle and coccygeus muscle, and associated connective tissue.
The pelvic floor has two hiatuses (gaps): (anteriorly) the urogenital hiatus through which urethra and vagina pass, and (posteriorly) the rectal hiatus through which the anal canal passes.
Structure
Definition
Some sources do not consider "pelvic floor" and "pelvic diaphragm" to be identical, with the "diaphragm" consisting of only the levator ani and coccygeus, while the "floor" also includes the perineal membrane and deep perineal pouch. However, other sources include the fascia as part of the diaphragm. In practice, the two terms are often used interchangeably.
Relations
The pelvic cavity of the true pelvis has the pelvic floor as its inferior boundary (and the pelvic brim as its superior boundary). The perineum has the pelvic floor as its superior boundary.
Posteriorly, the pelvic floor extends into the anal triangle.
Function
It is important in providing support for pelvic viscera (organs), e.g. the bladder, intestines, the uterus (in females), and in maintenance of continence as part of the urinary and anal sphincters. It facilitates birth by resisting the descent of the presenting part, causing the fetus to rotate forwards to navigate through the pelvic girdle. It helps maintain optimal intra-abdominal pressure.
Clinical significance
The pelvic floor is subject to clinically relevant changes that can result in:
Anterior vaginal wall prolapse
Cystocele (bladder into vagina)
Urethrocele (urethra into vagina)
Cystourethrocele (both bladder and urethra)
Posterior vaginal wall prolapse
Enterocele (small intestine into vagina)
Rectocele (rectum into vagina)
Apical vaginal prolapse
Uterine prolapse (uterus into vagina)
Vaginal vault prolapse (roof of the vagina), after hysterectomy
Pelvic floor dysfunction can result after treatment for gynecological cancers.
Damage to the pelvic floor not only contributes to urinary incontinence but can lead to pelvic organ prolapse. Pelvic organ prolapse occurs in women when pelvic organs (e.g. the vagina, bladder, rectum, or uterus) protrude into or outside of the vagina. The causes of pelvic organ prolapse are not unlike those that also contribute to urinary incontinence. These include inappropriate (asymmetrical, excessive, insufficient) muscle tone and asymmetries caused by trauma to the pelvis. Age, pregnancy, family history, and hormonal status all contribute to the development of pelvic organ prolapse. The vagina is suspended by attachments to the perineum, pelvic side wall and sacrum via attachments that include collagen, elastin, and smooth muscle. Surgery can be performed to repair pelvic floor muscles. The pelvic floor muscles can be strengthened with Kegel exercises.
Disorders of the posterior pelvic floor include rectal prolapse, rectocele, perineal hernia, and a number of functional disorders including anismus. Constipation due to any of these disorders is called "functional constipation" and is identifiable by clinical diagnostic criteria.
Pelvic floor exercise (PFE), also known as Kegel exercises, may improve the tone and function of the pelvic floor muscles, which is of particular benefit for women (and less commonly men) who experience stress urinary incontinence. However, compliance with PFE programs is often poor; PFE is generally ineffective for urinary incontinence unless performed with biofeedback and trained supervision, and in severe cases it may have no benefit. Pelvic floor muscle tone may be estimated using a perineometer, which measures the pressure within the vagina. Medication may also be used to improve continence. In severe cases, surgery may be used to repair or even to reconstruct the pelvic floor. One surgery that interrupts the pelvic floor musculature in males is a radical prostatectomy. With the removal of the prostate, many males experience urinary incontinence after the operation; pelvic floor exercises may be used to counteract this both before and after the operation. Pre-operative pelvic floor exercising significantly decreases the prevalence of urinary incontinence after radical prostatectomy. Prostatitis and prostatectomies are two contributors to erectile dysfunction; following a radical prostatectomy, studies show that erectile dysfunction is improved by pelvic floor muscle training under the supervision of physical therapists certified in pelvic floor rehabilitation.
Perineology or pelviperineology is a specialty dealing with the functional troubles of the three axes (urological, gynecological and coloproctological) of the pelvic floor.
Additional images
| Biology and health sciences | Human anatomy | Health |
675130 | https://en.wikipedia.org/wiki/Molecular%20physics | Molecular physics | Molecular physics is the study of the physical properties of molecules and molecular dynamics. The field overlaps significantly with physical chemistry, chemical physics, and quantum chemistry. It is often considered as a sub-field of atomic, molecular, and optical physics. Research groups studying molecular physics are typically designated as one of these other fields. Molecular physics addresses phenomena due to both molecular structure and individual atomic processes within molecules. Like atomic physics, it relies on a combination of classical and quantum mechanics to describe interactions between electromagnetic radiation and matter. Experiments in the field often rely heavily on techniques borrowed from atomic physics, such as spectroscopy and scattering.
Molecular structure
In a molecule, both the electrons and nuclei experience similar-scale forces from the Coulomb interaction. However, the nuclei remain at nearly fixed locations in the molecule while the electrons move significantly. This picture of a molecule is based on the idea that nucleons are much heavier than electrons, so will move much less in response to the same force. Neutron scattering experiments on molecules have been used to verify this description.
Molecular energy levels and spectra
When atoms join into molecules, their inner electrons remain bound to their original nucleus while the outer valence electrons are distributed around the molecule. The charge distribution of these valence electrons determines the electronic energy level of a molecule, and can be described by molecular orbital theory, which closely follows the atomic orbital theory used for single atoms. Assuming that the momenta of the electrons are on the order of ħ/a (where ħ is the reduced Planck constant and a is the average internuclear distance within a molecule, ~ 1 Å), the magnitude of the energy spacing for electronic states can be estimated at a few electron volts. This is the case for most low-lying molecular energy states, and corresponds to transitions in the visible and ultraviolet regions of the electromagnetic spectrum.
In addition to the electronic energy levels shared with atoms, molecules have additional quantized energy levels corresponding to vibrational and rotational states. Vibrational energy levels refer to motion of the nuclei about their equilibrium positions in the molecule. The approximate energy spacing of these levels can be estimated by treating each nucleus as a quantum harmonic oscillator in the potential produced by the molecule, and comparing its associated frequency to that of an electron experiencing the same potential. The result is an energy spacing about 100× smaller than that for electronic levels. In agreement with this estimate, vibrational spectra show transitions in the near infrared (about ). Finally, rotational energy states describe semi-rigid rotation of the entire molecule and produce transition wavelengths in the far infrared and microwave regions (about 100-10,000 μm in wavelength). These are the smallest energy spacings, and their size can be understood by comparing the energy of a diatomic molecule with internuclear spacing ~ 1 Å to the energy of a valence electron (estimated above as ~ ħ/a).
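These order-of-magnitude scales can be checked with a short calculation. The sketch below is illustrative only: it takes ħ²/(m_e·a²) as the electronic energy scale for a ≈ 1 Å, and uses the proton mass as an assumed stand-in for the nuclear mass M, so the factors √(m_e/M) and m_e/M give the vibrational and rotational scales.

```python
import math

hbar = 1.054_571_8e-34   # J s
m_e  = 9.109_383_7e-31   # kg, electron mass
m_p  = 1.672_621_9e-27   # kg, proton mass (assumed stand-in for the nuclear mass M)
eV   = 1.602_176_6e-19   # J per electron volt
a    = 1e-10             # m, typical internuclear distance (~1 angstrom)

E_elec = hbar**2 / (m_e * a**2)          # electronic scale: momentum ~ hbar/a
E_vib  = math.sqrt(m_e / m_p) * E_elec   # vibrational: suppressed by sqrt(m_e/M)
E_rot  = (m_e / m_p) * E_elec            # rotational: suppressed by m_e/M

for name, E in [("electronic", E_elec), ("vibrational", E_vib), ("rotational", E_rot)]:
    print(f"{name:11s} ~ {E / eV:.3g} eV")
# Prints roughly 7.6 eV, 0.18 eV and 0.004 eV: the electron-volt / infrared /
# far-infrared hierarchy described in the text.
```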
Actual molecular spectra also show transitions which simultaneously couple electronic, vibrational, and rotational states. For example, transitions involving both rotational and vibrational states are often referred to as rotational-vibrational or rovibrational transitions. Vibronic transitions combine electronic and vibrational transitions, and rovibronic transitions combine electronic, rotational, and vibrational transitions. Due to the very different frequencies associated with each type of transition, the wavelengths associated with these mixed transitions vary across the electromagnetic spectrum.
Experiments
In general, the goals of molecular physics experiments are to characterize shape and size, electric and magnetic properties, internal energy levels, and ionization and dissociation energies for molecules. In terms of shape and size, rotational spectra and vibrational spectra allow for the determination of molecular moments of inertia, which allows for calculations of internuclear distances in molecules. X-ray diffraction allows determination of internuclear spacing directly, especially for molecules containing heavy elements. All branches of spectroscopy contribute to determination of molecular energy levels due to the wide range of applicable energies (ultraviolet to microwave regimes).
Current research
Within atomic, molecular, and optical physics, there are numerous studies using molecules to verify fundamental constants and probe for physics beyond the Standard Model. Certain molecular structures are predicted to be sensitive to new physics phenomena, such as parity and time-reversal violation. Molecules are also considered a potential future platform for trapped ion quantum computing, as their more complex energy level structure could facilitate higher efficiency encoding of quantum information than individual atoms. From a chemical physics perspective, intramolecular vibrational energy redistribution experiments use vibrational spectra to determine how energy is redistributed between different quantum states of a vibrationally excited molecule.
| Physical sciences | Molecular physics | Physics |
675231 | https://en.wikipedia.org/wiki/Line%20graph | Line graph | In the mathematical discipline of graph theory, the line graph of an undirected graph is another graph that represents the adjacencies between edges of . is constructed in the following way: for each edge in , make a vertex in ; for every two edges in that have a vertex in common, make an edge between their corresponding vertices in .
The name line graph comes from a paper by although both and used the construction before this. Other terms used for the line graph include the covering graph, the derivative, the edge-to-vertex dual, the conjugate, the representative graph, and the θ-obrazom, as well as the edge graph, the interchange graph, the adjoint graph, and the derived graph.
proved that with one exceptional case the structure of a connected graph can be recovered completely from its line graph. Many other properties of line graphs follow by translating the properties of the underlying graph from vertices into edges, and by Whitney's theorem the same translation can also be done in the other direction. Line graphs are claw-free, and the line graphs of bipartite graphs are perfect. Line graphs are characterized by nine forbidden subgraphs and can be recognized in linear time.
Various extensions of the concept of a line graph have been studied, including line graphs of line graphs, line graphs of multigraphs, line graphs of hypergraphs, and line graphs of weighted graphs.
Formal definition
Given a graph , its line graph is a graph such that
each vertex of represents an edge of ; and
two vertices of are adjacent if and only if their corresponding edges share a common endpoint ("are incident") in .
That is, it is the intersection graph of the edges of , representing each edge by the set of its two endpoints.
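A direct transcription of this definition into code is straightforward. The sketch below is illustrative (the helper name and the edge-list input format are choices made here, not from the source); it builds L(G) by pairing every two edges of G that share an endpoint:

```python
from itertools import combinations

def line_graph(edges):
    """Return the edge list of L(G) for a simple undirected graph G.

    Each edge of G (stored as a frozenset of its two endpoints) becomes a
    vertex of L(G); two such vertices are adjacent exactly when the
    corresponding edges of G share an endpoint."""
    g_edges = [frozenset(e) for e in edges]
    return [(e, f) for e, f in combinations(g_edges, 2) if e & f]

# A small graph consistent with the example described in the Example
# section below (the exact edge set is an assumption, as the figure
# itself is not reproduced in the text):
G = [(1, 2), (1, 3), (1, 4), (3, 4)]
for e, f in line_graph(G):
    print(sorted(e), "--", sorted(f))
```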
Example
The following figures show a graph (left, with blue vertices) and its line graph (right, with green vertices). Each vertex of the line graph is shown labeled with the pair of endpoints of the corresponding edge in the original graph. For instance, the green vertex on the right labeled 1,3 corresponds to the edge on the left between the blue vertices 1 and 3. Green vertex 1,3 is adjacent to three other green vertices: 1,4 and 1,2 (corresponding to edges sharing the endpoint 1 in the blue graph) and 4,3 (corresponding to an edge sharing the endpoint 3 in the blue graph).
Properties
Translated properties of the underlying graph
Properties of a graph that depend only on adjacency between edges may be translated into equivalent properties in that depend on adjacency between vertices. For instance, a matching in is a set of edges no two of which are adjacent, and corresponds to a set of vertices in no two of which are adjacent, that is, an independent set.
Thus,
The line graph of a connected graph is connected. If is connected, it contains a path connecting any two of its edges, which translates into a path in containing any two of the vertices of . However, a graph that has some isolated vertices, and is therefore disconnected, may nevertheless have a connected line graph.
A line graph has an articulation point if and only if the underlying graph has a bridge for which neither endpoint has degree one.
For a graph with vertices and edges, the number of vertices of the line graph is , and the number of edges of is half the sum of the squares of the degrees of the vertices in , minus (see the sketch after this list).
An independent set in corresponds to a matching in . In particular, a maximum independent set in corresponds to maximum matching in . Since maximum matchings may be found in polynomial time, so may the maximum independent sets of line graphs, despite the hardness of the maximum independent set problem for more general families of graphs. Similarly, a rainbow-independent set in corresponds to a rainbow matching in .
The edge chromatic number of a graph is equal to the vertex chromatic number of its line graph .
The line graph of an edge-transitive graph is vertex-transitive. This property can be used to generate families of graphs that (like the Petersen graph) are vertex-transitive but are not Cayley graphs: if is an edge-transitive graph that has at least five vertices, is not bipartite, and has odd vertex degrees, then is a vertex-transitive non-Cayley graph.
If a graph has an Euler cycle, that is, if is connected and has an even number of edges at each vertex, then the line graph of is Hamiltonian. However, not all Hamiltonian cycles in line graphs come from Euler cycles in this way; for instance, the line graph of a Hamiltonian graph is itself Hamiltonian, regardless of whether is also Eulerian.
If two simple graphs are isomorphic then their line graphs are also isomorphic. The Whitney graph isomorphism theorem provides a converse to this for all but one pair of connected graphs.
In the context of complex network theory, the line graph of a random network preserves many of the properties of the network such as the small-world property (the existence of short paths between all pairs of vertices) and the shape of its degree distribution. observe that any method for finding vertex clusters in a complex network can be applied to the line graph and used to cluster its edges instead.
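As a quick check of the vertex and edge counts stated in the list above (a sketch only; the four-edge input graph is the same assumed example used earlier), the identity |E(L(G))| = ½·Σ deg(v)² − |E(G)| can be verified against a brute-force construction:

```python
from collections import Counter
from itertools import combinations

def line_graph_edge_count(edges):
    """Predicted |E(L(G))|: half the sum of squared degrees, minus |E(G)|."""
    deg = Counter(v for e in edges for v in e)
    return sum(d * d for d in deg.values()) // 2 - len(edges)

def brute_force_edge_count(edges):
    """Count pairs of edges of G that share an endpoint, i.e. edges of L(G)."""
    return sum(1 for e, f in combinations(edges, 2) if set(e) & set(f))

G = [(1, 2), (1, 3), (1, 4), (3, 4)]  # assumed example graph
print(len(G))                         # |V(L(G))| = |E(G)| = 4
print(line_graph_edge_count(G))       # 5
print(brute_force_edge_count(G))      # 5, matching the formula
```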
Whitney isomorphism theorem
If the line graphs of two connected graphs are isomorphic, then the underlying graphs are isomorphic, except in the case of the triangle graph and the claw , which have isomorphic line graphs but are not themselves isomorphic.
As well as and , there are some other exceptional small graphs with the property that their line graph has a higher degree of symmetry than the graph itself. For instance, the diamond graph (two triangles sharing an edge) has four graph automorphisms but its line graph has eight. In the illustration of the diamond graph shown, rotating the graph by 90 degrees is not a symmetry of the graph, but is a symmetry of its line graph. However, all such exceptional cases have at most four vertices. A strengthened version of the Whitney isomorphism theorem states that, for connected graphs with more than four vertices, there is a one-to-one correspondence between isomorphisms of the graphs and isomorphisms of their line graphs.
Analogues of the Whitney isomorphism theorem have been proven for the line graphs of multigraphs, but are more complicated in this case.
Strongly regular and perfect line graphs
The line graph of the complete graph is also known as the triangular graph, the Johnson graph , or the complement of the Kneser graph . Triangular graphs are characterized by their spectra, except for . They may also be characterized (again with the exception of ) as the strongly regular graphs with parameters . The three strongly regular graphs with the same parameters and spectrum as are the Chang graphs, which may be obtained by graph switching from .
The line graph of a bipartite graph is perfect (see Kőnig's theorem), but need not be bipartite as the example of the claw graph shows. The line graphs of bipartite graphs form one of the key building blocks of perfect graphs, used in the proof of the strong perfect graph theorem. A special case of these graphs are the rook's graphs, line graphs of complete bipartite graphs. Like the line graphs of complete graphs, they can be characterized with one exception by their numbers of vertices, numbers of edges, and number of shared neighbors for adjacent and non-adjacent points. The one exceptional case is , which shares its parameters with the Shrikhande graph. When both sides of the bipartition have the same number of vertices, these graphs are again strongly regular. It has been shown that, except for , , and , all connected strongly regular graphs can be made non-strongly regular within two line graph transformations. The extension to disconnected graphs would require that the graph is not a disjoint union of .
More generally, a graph is said to be a line perfect graph if is a perfect graph. The line perfect graphs are exactly the graphs that do not contain a simple cycle of odd length greater than three. Equivalently, a graph is line perfect if and only if each of its biconnected components is either bipartite or of the form (the tetrahedron) or (a book of one or more triangles all sharing a common edge). Every line perfect graph is itself perfect.
Other related graph families
All line graphs are claw-free graphs, graphs without an induced subgraph in the form of a three-leaf tree. As with claw-free graphs more generally, every connected line graph with an even number of edges has a perfect matching; equivalently, this means that if the underlying graph has an even number of edges, its edges can be partitioned into two-edge paths.
The line graphs of trees are exactly the claw-free block graphs. These graphs have been used to solve a problem in extremal graph theory, of constructing a graph with a given number of edges and vertices whose largest tree induced as a subgraph is as small as possible.
All eigenvalues of the adjacency matrix A of a line graph are at least −2. The reason for this is that A can be written as A = J^T J − 2I, where J is the signless incidence matrix of the pre-line graph and I is the identity. In particular, A + 2I is the Gramian matrix of a system of vectors: all graphs with this property have been called generalized line graphs.
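As an illustrative numerical check (not part of the article), the identity above can be verified on a small example; the sketch below builds the line graph adjacency matrix of the 4-cycle from its incidence matrix and confirms that no eigenvalue falls below −2:

```python
import numpy as np

# Edges of the 4-cycle C4; its line graph is again C4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, m = 4, len(edges)

# Unsigned vertex-edge incidence matrix J.
J = np.zeros((n, m))
for j, (u, v) in enumerate(edges):
    J[u, j] = J[v, j] = 1

# A(L(G)) = J^T J - 2I: diagonal entries of J^T J are 2 (two endpoints
# per edge), off-diagonal entries count shared endpoints (0 or 1).
A_line = J.T @ J - 2 * np.eye(m)
print(np.linalg.eigvalsh(A_line))  # [-2.  0.  0.  2.] -- all >= -2
```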
Characterization and recognition
Clique partition
For an arbitrary graph G, and an arbitrary vertex v in G, the set of edges incident to v corresponds to a clique in the line graph L(G). The cliques formed in this way partition the edges of L(G). Each vertex of L(G) belongs to exactly two of them (the two cliques corresponding to the two endpoints of the corresponding edge in G).
The existence of such a partition into cliques can be used to characterize the line graphs: a graph L is the line graph of some other graph or multigraph if and only if it is possible to find a collection of cliques in L (allowing some of the cliques to be single vertices) that partition the edges of L, such that each vertex of L belongs to exactly two of the cliques. It is the line graph of a graph (rather than a multigraph) if this set of cliques satisfies the additional condition that no two vertices of L are both in the same two cliques. Given such a family of cliques, the underlying graph G for which L is the line graph can be recovered by making one vertex in G for each clique, and an edge in G for each vertex in L, with its endpoints being the two cliques containing that vertex. By the strong version of Whitney's isomorphism theorem, if the underlying graph G has more than four vertices, there can be only one partition of this type.
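A minimal sketch of this reconstruction (function and variable names are hypothetical, not a standard library API), assuming a clique family in which every vertex of L appears exactly twice:

```python
def root_graph(clique_family):
    """Recover the root graph G from a clique partition of a line graph L.

    clique_family: list of cliques (sets of vertices of L), partitioning
    the edges of L, with each vertex of L in exactly two cliques.
    Cliques become vertices of G; each L-vertex becomes an edge of G.
    """
    membership = {}
    for i, clique in enumerate(clique_family):
        for v in clique:
            membership.setdefault(v, []).append(i)
    edges = []
    for v, cliques in membership.items():
        assert len(cliques) == 2, "each vertex of L must lie in exactly two cliques"
        edges.append(tuple(cliques))
    return list(range(len(clique_family))), edges

# L(K3) is the triangle on "vertices" ab, bc, ca; its three 2-cliques
# recover a triangle, i.e., K3 itself (as Whitney's theorem allows).
print(root_graph([{"ab", "bc"}, {"bc", "ca"}, {"ca", "ab"}]))
```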
For example, this characterization can be used to show that the following graph is not a line graph:
In this example, the edges going upward, to the left, and to the right from the central degree-four vertex do not have any cliques in common. Therefore, any partition of the graph's edges into cliques would have to have at least one clique for each of these three edges, and these three cliques would all intersect in that central vertex, violating the requirement that each vertex appear in exactly two cliques. Thus, the graph shown is not a line graph.
Forbidden subgraphs
Another characterization of line graphs was proven by Beineke (and reported earlier by him without proof). He showed that there are nine minimal graphs that are not line graphs, such that any graph that is not a line graph has one of these nine graphs as an induced subgraph. That is, a graph is a line graph if and only if no subset of its vertices induces one of these nine graphs. In the example above, the four topmost vertices induce a claw (that is, a complete bipartite graph K1,3), shown on the top left of the illustration of forbidden subgraphs. Therefore, by Beineke's characterization, this example cannot be a line graph. For graphs with minimum degree at least 5, only the six subgraphs in the left and right columns of the figure are needed in the characterization.
Algorithms
Linear-time algorithms have been described for recognizing line graphs and reconstructing their original graphs, and these methods have been generalized to directed graphs. An efficient data structure has also been described for maintaining a dynamic graph, subject to vertex insertions and deletions, and maintaining a representation of the input as a line graph (when it exists) in time proportional to the number of changed edges at each step.
The earlier recognition algorithms are based on characterizations of line graphs involving odd triangles (triangles in the line graph with the property that there exists another vertex adjacent to an odd number of triangle vertices). However, the dynamic algorithm uses only Whitney's isomorphism theorem. It is complicated by the need to recognize deletions that cause the remaining graph to become a line graph, but when specialized to the static recognition problem only insertions need to be performed, and the algorithm performs the following steps:
Construct the input graph L by adding vertices one at a time, at each step choosing a vertex to add that is adjacent to at least one previously-added vertex. While adding vertices to L, maintain a graph G for which L = L(G); if the algorithm ever fails to find an appropriate graph G, then the input is not a line graph and the algorithm terminates.
When adding a vertex v to a graph L for which the underlying graph G has four or fewer vertices, it might be the case that the line graph representation is not unique. But in this case, the augmented graph is small enough that a representation of it as a line graph can be found by a brute force search in constant time.
When adding a vertex v to a larger graph L that equals the line graph of another graph G, let S be the subgraph of G formed by the edges that correspond to the neighbors of v in L. Check that S has a vertex cover consisting of one vertex or two non-adjacent vertices. If there are two vertices in the cover, augment G by adding an edge (corresponding to v) that connects these two vertices. If there is only one vertex in the cover, then add a new vertex to G, adjacent to this vertex.
Each step either takes constant time, or involves finding a vertex cover of constant size within a graph S whose size is proportional to the number of neighbors of v. Thus, the total time for the whole algorithm is proportional to the sum of the numbers of neighbors of all vertices, which (by the handshaking lemma) is proportional to the number of input edges.
Iterating the line graph operator
Consider the sequence of graphs G, L(G), L(L(G)), L(L(L(G))), … obtained by iterating the line graph operation.
It has been shown that, when G is a finite connected graph, only four behaviors are possible for this sequence:
If G is a cycle graph, then L(G) and each subsequent graph in this sequence are isomorphic to G itself. Cycle graphs are the only connected graphs for which L(G) is isomorphic to G.
If G is the claw K1,3, then L(G) and all subsequent graphs in the sequence are triangles.
If G is a path graph, then each subsequent graph in the sequence is a shorter path, until eventually the sequence terminates with an empty graph.
In all remaining cases, the sizes of the graphs in this sequence eventually increase without bound.
If G is not connected, this classification applies separately to each component of G.
For connected graphs that are not paths, all sufficiently high numbers of iteration of the line graph operation produce graphs that are Hamiltonian.
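To make the iteration concrete, here is a small sketch (not from the article) that represents a graph by its edge set and applies the line graph operator repeatedly; for a connected graph other than a cycle, a path, or the claw, the number of edges grows without bound:

```python
from itertools import combinations

def line_graph(edges):
    """Edges as frozensets of endpoints; vertices of L(G) are the edges
    of G, adjacent whenever the corresponding edges share an endpoint."""
    return [frozenset([e, f]) for e, f in combinations(edges, 2) if e & f]

# A tree with one degree-3 vertex (so neither a path nor a cycle).
g = [frozenset(p) for p in [(0, 1), (1, 2), (2, 3), (1, 4)]]
for step in range(4):
    print(step, len(g))  # edge counts 4, 4, 5, 8, then growing without bound
    g = line_graph(g)
```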
Generalizations
Medial graphs and convex polyhedra
When a planar graph G has maximum vertex degree three, its line graph is planar, and every planar embedding of G can be extended to an embedding of L(G). However, there exist planar graphs with higher degree whose line graphs are nonplanar. These include, for example, the 5-star K1,5, the gem graph formed by adding two non-crossing diagonals within a regular pentagon, and all convex polyhedra with a vertex of degree four or more.
An alternative construction, the medial graph, coincides with the line graph for planar graphs with maximum degree three, but is always planar. It has the same vertices as the line graph, but potentially fewer edges: two vertices of the medial graph are adjacent if and only if the corresponding two edges are consecutive on some face of the planar embedding. The medial graph of the dual graph of a plane graph is the same as the medial graph of the original plane graph.
For regular polyhedra or simple polyhedra, the medial graph operation can be represented geometrically by the operation of cutting off each vertex of the polyhedron by a plane through the midpoints of all its incident edges. This operation is known variously as the second truncation, degenerate truncation, or rectification.
Total graphs
The total graph T(G) of a graph G has as its vertices the elements (vertices or edges) of G, and has an edge between two elements whenever they are either incident or adjacent. The total graph may also be obtained by subdividing each edge of G and then taking the square of the subdivided graph.
Multigraphs
The concept of the line graph of G may naturally be extended to the case where G is a multigraph. In this case, the characterizations of these graphs can be simplified: the characterization in terms of clique partitions no longer needs to prevent two vertices from belonging to the same two cliques, and the characterization by forbidden graphs has seven forbidden graphs instead of nine.
However, for multigraphs, there are larger numbers of pairs of non-isomorphic graphs that have the same line graphs. For instance, a complete bipartite graph K1,n has the same line graph as the dipole graph and the Shannon multigraph with the same number of edges. Nevertheless, analogues to Whitney's isomorphism theorem can still be derived in this case.
Line digraphs
It is also possible to generalize line graphs to directed graphs. If G is a directed graph, its directed line graph or line digraph has one vertex for each edge of G. Two vertices representing directed edges from u to v and from w to x in G are connected by an edge from uv to wx in the line digraph when v = w. That is, each edge in the line digraph of G represents a length-two directed path in G. The de Bruijn graphs may be formed by repeating this process of forming directed line graphs, starting from a complete directed graph.
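The following sketch (helper names are illustrative, not from the article) builds a line digraph from an arc list; iterating it from the complete directed graph with loops on a two-symbol alphabet yields the binary de Bruijn graphs, under the common convention that loops are included in the starting graph:

```python
from itertools import product

def line_digraph(arcs):
    """Vertices of the line digraph are the arcs (u, v) of G, with an arc
    (u, v) -> (w, x) whenever v == w (a length-two directed path in G)."""
    return [((u, v), (w, x)) for (u, v) in arcs for (w, x) in arcs if v == w]

arcs = list(product([0, 1], repeat=2))  # complete digraph with loops on {0, 1}
for _ in range(2):
    arcs = line_digraph(arcs)
# 8 vertices and 16 arcs: the binary de Bruijn graph on 3-symbol words.
print(len({v for a in arcs for v in a}), len(arcs))
```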
Weighted line graphs
In a line graph L(G), each vertex of degree k in the original graph G creates k(k − 1)/2 edges in the line graph. For many types of analysis this means high-degree nodes in G are over-represented in the line graph L(G). For instance, consider a random walk on the vertices of the original graph G. This will pass along some edge e with some frequency f. On the other hand, this edge e is mapped to a unique vertex, say v, in the line graph L(G). If we now perform the same type of random walk on the vertices of the line graph, the frequency with which v is visited can be completely different from f. If our edge e in G was connected to nodes of high degree, it will be traversed more frequently in the line graph L(G). Put another way, the Whitney graph isomorphism theorem guarantees that the line graph almost always encodes the topology of the original graph G faithfully, but it does not guarantee that dynamics on these two graphs have a simple relationship. One solution is to construct a weighted line graph, that is, a line graph with weighted edges. There are several natural ways to do this. For instance, if edges e1 and e2 in the graph G are incident at a vertex v with degree k, then in the line graph L(G) the edge connecting the two vertices e1 and e2 can be given weight 1/(k − 1). In this way every edge in G (provided neither end is connected to a vertex of degree 1) will have strength 2 in the line graph, corresponding to the two ends that the edge has in G. It is straightforward to extend this definition of a weighted line graph to cases where the original graph G was directed or even weighted. The principle in all cases is to ensure the line graph reflects the dynamics as well as the topology of the original graph G.
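A short sketch of one such weighting (helper names hypothetical), implementing the 1/(k − 1) rule described above for an undirected simple graph:

```python
from collections import defaultdict
from itertools import combinations

def weighted_line_graph(edges):
    """Each pair of edges meeting at a vertex of degree k gets weight
    1/(k - 1), so an edge of G whose endpoints both have degree >= 2
    contributes total strength 2 in the line graph."""
    incident = defaultdict(list)
    for e in edges:
        for v in e:
            incident[v].append(e)
    weights = defaultdict(float)
    for v, inc in incident.items():
        k = len(inc)
        if k >= 2:
            for e1, e2 in combinations(inc, 2):
                weights[frozenset([e1, e2])] += 1.0 / (k - 1)
    return dict(weights)

edges = [frozenset(p) for p in [(0, 1), (1, 2), (1, 3), (2, 3)]]
for pair, w in weighted_line_graph(edges).items():
    print(sorted(sorted(e) for e in pair), round(w, 3))
```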
Line graphs of hypergraphs
The edges of a hypergraph may form an arbitrary family of sets, so the line graph of a hypergraph is the same as the intersection graph of the sets from the family.
Disjointness graph
The disjointness graph of G, denoted D(G), is constructed in the following way: for each edge in G, make a vertex in D(G); for every two edges in G that do not have a vertex in common, make an edge between their corresponding vertices in D(G). In other words, D(G) is the complement graph of L(G). A clique in D(G) corresponds to an independent set in L(G), and vice versa.
| Mathematics | Graph theory | null |
676328 | https://en.wikipedia.org/wiki/Graph%20homomorphism | Graph homomorphism | In the mathematical field of graph theory, a graph homomorphism is a mapping between two graphs that respects their structure. More concretely, it is a function between the vertex sets of two graphs that maps adjacent vertices to adjacent vertices.
Homomorphisms generalize various notions of graph colorings and allow the expression of an important class of constraint satisfaction problems, such as certain scheduling or frequency assignment problems.
The fact that homomorphisms can be composed leads to rich algebraic structures: a preorder on graphs, a distributive lattice, and a category (one for undirected graphs and one for directed graphs).
The computational complexity of finding a homomorphism between given graphs is prohibitive in general, but a lot is known about special cases that are solvable in polynomial time. Boundaries between tractable and intractable cases have been an active area of research.
Definitions
In this article, unless stated otherwise, graphs are finite, undirected graphs with loops allowed, but multiple edges (parallel edges) disallowed.
A graph homomorphism f from a graph G to a graph H, written f : G → H, is a function from the vertex set of G to the vertex set of H that preserves edges. Formally, {u, v} ∈ E(G) implies {f(u), f(v)} ∈ E(H), for all pairs of vertices u, v in G.
If there exists any homomorphism from G to H, then G is said to be homomorphic to H or H-colorable. This is often denoted as just G → H.
The above definition is extended to directed graphs. Then, for a homomorphism f : G → H, (f(u),f(v)) is an arc (directed edge) of H whenever (u,v) is an arc of G.
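As a quick illustration of the definition (not part of the article), a candidate map can be checked for edge preservation directly; the helper below is a hypothetical sketch for undirected graphs:

```python
def is_homomorphism(f, g_edges, h_edges):
    """f: dict mapping vertices of G to vertices of H."""
    h = {frozenset(e) for e in h_edges}
    return all(frozenset((f[u], f[v])) in h for u, v in g_edges)

# The 4-cycle folds onto a single edge K2 (it is 2-colorable):
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_homomorphism({0: 0, 1: 1, 2: 0, 3: 1}, c4, [(0, 1)]))  # True
print(is_homomorphism({0: 0, 1: 1, 2: 1, 3: 0}, c4, [(0, 1)]))  # False
```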
There is an injective homomorphism from G to H (i.e., one that maps distinct vertices in G to distinct vertices in H) if and only if G is isomorphic to a subgraph of H.
If a homomorphism f : G → H is a bijection, and its inverse function is also a graph homomorphism, then f is a graph isomorphism.
Covering maps are a special kind of homomorphism that mirrors the definition and many properties of covering maps in topology.
They are defined as surjective homomorphisms (i.e., something maps to each vertex) that are also locally bijective, that is, a bijection on the neighbourhood of each vertex.
An example is the bipartite double cover, formed from a graph by splitting each vertex v into v0 and v1 and replacing each edge u,v with edges u0,v1 and v0,u1. The function mapping v0 and v1 in the cover to v in the original graph is a homomorphism and a covering map.
Graph homeomorphism is a different notion, not related directly to homomorphisms. Roughly speaking, it requires injectivity, but allows mapping edges to paths (not just to edges). Graph minors are a still more relaxed notion.
Cores and retracts
Two graphs G and H are homomorphically equivalent if
G → H and H → G. The maps are not necessarily surjective nor injective. For instance, the complete bipartite graphs K2,2 and K3,3 are homomorphically equivalent: each map can be defined as taking the left (resp. right) half of the domain graph and mapping to just one vertex in the left (resp. right) half of the image graph.
A retraction is a homomorphism r from a graph G to a subgraph H of G such that r(v) = v for each vertex v of H.
In this case the subgraph H is called a retract of G.
A core is a graph with no homomorphism to any proper subgraph. Equivalently, a core can be defined as a graph that does not retract to any proper subgraph.
Every graph G is homomorphically equivalent to a unique core (up to isomorphism), called the core of G. Notably, this is not true in general for infinite graphs.
However, the same definitions apply to directed graphs and a directed graph is also equivalent to a unique core.
Every graph and every directed graph contains its core as a retract and as an induced subgraph.
For example, all complete graphs Kn and all odd cycles (cycle graphs of odd length) are cores.
Every 3-colorable graph G that contains a triangle (that is, has the complete graph K3 as a subgraph) is homomorphically equivalent to K3. This is because, on one hand, a 3-coloring of G is the same as a homomorphism G → K3, as explained below. On the other hand, every subgraph of G trivially admits a homomorphism into G, implying K3 → G. This also means that K3 is the core of any such graph G. Similarly, every bipartite graph that has at least one edge is equivalent to K2.
Connection to colorings
A k-coloring, for some integer k, is an assignment of one of k colors to each vertex of a graph G such that the endpoints of each edge get different colors. The k-colorings of G correspond exactly to homomorphisms from G to the complete graph Kk. Indeed, the vertices of Kk correspond to the k colors, and two colors are adjacent as vertices of Kk if and only if they are different. Hence a function defines a homomorphism to Kk if and only if it maps adjacent vertices of G to different colors (i.e., it is a k-coloring). In particular, G is k-colorable if and only if it is Kk-colorable.
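A brute-force sketch of this correspondence (illustrative only, feasible for tiny graphs): enumerating all maps into the color set and keeping those that send adjacent vertices to different colors, which is exactly the homomorphism condition for Kk:

```python
from itertools import product

def colorings(vertices, edges, k):
    """Yield proper k-colorings of G, i.e. homomorphisms G -> K_k."""
    for assignment in product(range(k), repeat=len(vertices)):
        f = dict(zip(vertices, assignment))
        # Distinct colors are exactly the adjacent vertex pairs of K_k.
        if all(f[u] != f[v] for u, v in edges):
            yield f

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(sum(1 for _ in colorings(range(5), c5, 3)))  # 30 proper 3-colorings
print(sum(1 for _ in colorings(range(5), c5, 2)))  # 0: odd cycle, not bipartite
```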
If there are two homomorphisms G → H and H → Kk, then their composition G → Kk is also a homomorphism. In other words, if a graph H can be colored with k colors, and there is a homomorphism from G to H, then G can also be k-colored. Therefore, G → H implies χ(G) ≤ χ(H), where χ denotes the chromatic number of a graph (the least k for which it is k-colorable).
Variants
General homomorphisms can also be thought of as a kind of coloring: if the vertices of a fixed graph H are the available colors and edges of H describe which colors are compatible, then an H-coloring of G is an assignment of colors to vertices of G such that adjacent vertices get compatible colors.
Many notions of graph coloring fit into this pattern and can be expressed as graph homomorphisms into different families of graphs.
Circular colorings can be defined using homomorphisms into circular complete graphs, refining the usual notion of colorings.
Fractional and b-fold coloring can be defined using homomorphisms into Kneser graphs.
T-colorings correspond to homomorphisms into certain infinite graphs.
An oriented coloring of a directed graph is a homomorphism into any oriented graph.
An L(2,1)-coloring is a homomorphism into the complement of the path graph that is locally injective, meaning it is required to be injective on the neighbourhood of every vertex.
Orientations without long paths
Another interesting connection concerns orientations of graphs.
An orientation of an undirected graph G is any directed graph obtained by choosing one of the two possible orientations for each edge.
An example of an orientation of the complete graph Kk is the transitive tournament on k vertices, with vertices 1, 2, …, k and arcs from i to j whenever i < j.
A homomorphism between orientations of graphs G and H yields a homomorphism between the undirected graphs G and H, simply by disregarding the orientations.
On the other hand, given a homomorphism G → H between undirected graphs, any orientation of H can be pulled back to an orientation of G so that the orientation of G has a homomorphism to the orientation of H.
Therefore, a graph G is k-colorable (has a homomorphism to Kk) if and only if some orientation of G has a homomorphism to the transitive tournament on k vertices.
A folklore theorem states that for all k, a directed graph G has a homomorphism to the transitive tournament on k vertices if and only if it admits no homomorphism from the directed path on k + 1 vertices.
Here the directed path on n vertices is the directed graph with vertices 1, 2, …, n and edges from i to i + 1, for i = 1, 2, …, n − 1.
Therefore, a graph is k-colorable if and only if it has an orientation that admits no homomorphism from the directed path on k + 1 vertices.
This statement can be strengthened slightly to say that a graph is k-colorable if and only if some orientation contains no directed path of length k (no path on k + 1 vertices as a subgraph).
This is the Gallai–Hasse–Roy–Vitaver theorem.
Connection to constraint satisfaction problems
Examples
Some scheduling problems can be modeled as a question about finding graph homomorphisms. As an example, one might want to assign workshop courses to time slots in a calendar so that two courses attended by the same student are not too close to each other in time. The courses form a graph G, with an edge between any two courses that are attended by some common student. The time slots form a graph H, with an edge between any two slots that are distant enough in time. For instance, if one wants a cyclical, weekly schedule, such that each student gets their workshop courses on non-consecutive days, then H would be the complement graph of C7. A graph homomorphism from G to H is then a schedule assigning courses to time slots, as specified. To add a requirement saying that, e.g., no single student has courses on both Friday and Monday, it suffices to remove the corresponding edge from H.
A simple frequency allocation problem can be specified as follows: a number of transmitters in a wireless network must choose a frequency channel on which they will transmit data. To avoid interference, transmitters that are geographically close should use channels with frequencies that are far apart. If this condition is approximated with a single threshold to define 'geographically close' and 'far apart', then a valid channel choice again corresponds to a graph homomorphism. It should go from the graph of transmitters G, with edges between pairs that are geographically close, to the graph of channels H, with edges between channels that are far apart. While this model is rather simplified, it does admit some flexibility: transmitter pairs that are not close but could interfere because of geographical features can be added to the edges of G. Those that do not communicate at the same time can be removed from it. Similarly, channel pairs that are far apart but exhibit harmonic interference can be removed from the edge set of H.
In each case, these simplified models display many of the issues that have to be handled in practice. Constraint satisfaction problems, which generalize graph homomorphism problems, can express various additional types of conditions (such as individual preferences, or bounds on the number of coinciding assignments). This allows the models to be made more realistic and practical.
Formal view
Graphs and directed graphs can be viewed as a special case of the far more general notion called relational structures (defined as a set with a tuple of relations on it). Directed graphs are structures with a single binary relation (adjacency) on the domain (the vertex set). Under this view, homomorphisms of such structures are exactly graph homomorphisms.
In general, the question of finding a homomorphism from one relational structure to another is a constraint satisfaction problem (CSP).
The case of graphs gives a concrete first step that helps to understand more complicated CSPs.
Many algorithmic methods for finding graph homomorphisms, like backtracking, constraint propagation and local search, apply to all CSPs.
For graphs G and H, the question of whether G has a homomorphism to H corresponds to a CSP instance with only one kind of constraint, as follows. The variables are the vertices of G and the domain for each variable is the vertex set of H. An evaluation is a function that assigns to each variable an element of the domain, so a function f from V(G) to V(H). Each edge or arc (u,v) of G then corresponds to the constraint ((u,v), E(H)). This is a constraint expressing that the evaluation should map the arc (u,v) to a pair (f(u),f(v)) that is in the relation E(H), that is, to an arc of H. A solution to the CSP is an evaluation that respects all constraints, so it is exactly a homomorphism from G to H.
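A hedged sketch of this CSP view (not a production solver, and names are illustrative): simple backtracking over the vertices of G, checking each partial assignment against the adjacency relation of H:

```python
def find_homomorphism(g_vertices, g_edges, h_edges):
    """Return a homomorphism G -> H as a dict, or None if none exists.
    H is given by its edge list; isolated vertices of H are ignored."""
    adj = set()
    for u, v in h_edges:
        adj.add((u, v))
        adj.add((v, u))
    g_vertices = list(g_vertices)
    h_vertices = {x for e in h_edges for x in e}

    def extend(f, i):
        if i == len(g_vertices):
            return dict(f)
        v = g_vertices[i]
        for image in h_vertices:
            f[v] = image
            # Each edge of G is a constraint; check those fully assigned.
            if all((f[a], f[b]) in adj for a, b in g_edges if a in f and b in f):
                result = extend(f, i + 1)
                if result is not None:
                    return result
            del f[v]
        return None

    return extend({}, 0)

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(find_homomorphism(range(5), c5, [(0, 1)]))                  # None
print(find_homomorphism(range(5), c5, [(0, 1), (1, 2), (0, 2)]))  # a 3-coloring
```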
Structure of homomorphisms
Compositions of homomorphisms are homomorphisms.
In particular, the relation → on graphs is transitive (and reflexive, trivially), so it is a preorder on graphs.
Let the equivalence class of a graph G under homomorphic equivalence be [G].
The equivalence class can also be represented by the unique core in [G].
The relation → is a partial order on those equivalence classes; it defines a poset.
Let G < H denote that there is a homomorphism from G to H, but no homomorphism from H to G.
The relation → is a dense order, meaning that for all (undirected) graphs G, H such that G < H, there is a graph K such that G < K < H (this holds except for the trivial cases G = K0 or K1).
For example, between any two complete graphs (except K0, K1, K2) there are infinitely many circular complete graphs, corresponding to rational numbers between natural numbers.
The poset of equivalence classes of graphs under homomorphisms is a distributive lattice, with the join of [G] and [H] defined as (the equivalence class of) the disjoint union [G ∪ H], and the meet of [G] and [H] defined as the tensor product [G × H] (the choice of graphs G and H representing the equivalence classes [G] and [H] does not matter).
The join-irreducible elements of this lattice are exactly connected graphs. This can be shown using the fact that a homomorphism maps a connected graph into one connected component of the target graph.
The meet-irreducible elements of this lattice are exactly the multiplicative graphs. These are the graphs K such that a product G × H has a homomorphism to K only when one of G or H also does. Identifying multiplicative graphs lies at the heart of Hedetniemi's conjecture.
Graph homomorphisms also form a category, with graphs as objects and homomorphisms as arrows.
The initial object is the empty graph, while the terminal object is the graph with one vertex and one loop at that vertex.
The tensor product of graphs is the category-theoretic product and
the exponential graph is the exponential object for this category.
Since these two operations are always defined, the category of graphs is a cartesian closed category.
For the same reason, the lattice of equivalence classes of graphs under homomorphisms is in fact a Heyting algebra.
For directed graphs the same definitions apply. In particular → is a partial order on equivalence classes of directed graphs. It is distinct from the order → on equivalence classes of undirected graphs, but contains it as a suborder. This is because every undirected graph can be thought of as a directed graph where every arc (u,v) appears together with its inverse arc (v,u), and this does not change the definition of homomorphism. The order → for directed graphs is again a distributive lattice and a Heyting algebra, with join and meet operations defined as before. However, it is not dense. There is also a category with directed graphs as objects and homomorphisms as arrows, which is again a cartesian closed category.
Incomparable graphs
There are many incomparable graphs with respect to the homomorphism preorder, that is, pairs of graphs such that neither admits a homomorphism into the other.
One way to construct them is to consider the odd girth of a graph G, the length of its shortest odd-length cycle.
The odd girth is, equivalently, the smallest odd number g for which there exists a homomorphism from the cycle graph on g vertices to G. For this reason, if G → H, then the odd girth of G is greater than or equal to the odd girth of H.
On the other hand, if G → H, then the chromatic number of G is less than or equal to the chromatic number of H.
Therefore, if G has strictly larger odd girth than H and strictly larger chromatic number than H, then G and H are incomparable.
For example, the Grötzsch graph is 4-chromatic and triangle-free (it has girth 4 and odd girth 5), so it is incomparable to the triangle graph K3.
Examples of graphs with arbitrarily large values of odd girth and chromatic number are Kneser graphs and generalized Mycielskians.
A sequence of such graphs, with simultaneously increasing values of both parameters, gives infinitely many incomparable graphs (an antichain in the homomorphism preorder).
Other properties, such as density of the homomorphism preorder, can be proved using such families.
Constructions of graphs with large values of chromatic number and girth, not just odd girth, are also possible, but more complicated (see Girth and graph coloring).
Among directed graphs, it is much easier to find incomparable pairs. For example, consider the directed cycle graphs on n vertices, with vertices 1, 2, …, n and edges from i to i + 1 (for i = 1, 2, …, n − 1) and from n to 1.
There is a homomorphism from the directed cycle on n vertices to the directed cycle on k vertices (for n, k ≥ 3) if and only if n is a multiple of k.
In particular, directed cycle graphs on a prime number of vertices are all incomparable.
Computational complexity
In the graph homomorphism problem, an instance is a pair of graphs (G,H) and a solution is a homomorphism from G to H. The general decision problem, asking whether there is any solution, is NP-complete. However, limiting allowed instances gives rise to a variety of different problems, some of which are much easier to solve. Methods that apply when restraining the left side G are very different than for the right side H, but in each case a dichotomy (a sharp boundary between easy and hard cases) is known or conjectured.
Homomorphisms to a fixed graph
The homomorphism problem with a fixed graph H on the right side of each instance is also called the H-coloring problem. When H is the complete graph Kk, this is the graph k-coloring problem, which is solvable in polynomial time for k = 0, 1, 2, and NP-complete otherwise.
In particular, K2-colorability of a graph G is equivalent to G being bipartite, which can be tested in linear time.
More generally, whenever H is a bipartite graph, H-colorability is equivalent to K2-colorability (or K0 / K1-colorability when H is empty/edgeless), hence equally easy to decide.
Pavol Hell and Jaroslav Nešetřil proved that, for undirected graphs, no other case is tractable:
Hell–Nešetřil theorem (1990): The H-coloring problem is in P when H is bipartite and NP-complete otherwise.
This is also known as the dichotomy theorem for (undirected) graph homomorphisms, since it divides H-coloring problems into NP-complete or P problems, with no intermediate cases.
For directed graphs, the situation is more complicated and in fact equivalent to the much more general question of characterizing the complexity of constraint satisfaction problems.
It turns out that H-coloring problems for directed graphs are just as general and as diverse as CSPs with any other kinds of constraints. Formally, a (finite) constraint language (or template) Γ is a finite domain and a finite set of relations over this domain. CSP(Γ) is the constraint satisfaction problem where instances are only allowed to use constraints in Γ.
Theorem (Feder, Vardi 1998): For every constraint language Γ, the problem CSP(Γ) is equivalent under polynomial-time reductions to some H-coloring problem, for some directed graph H.
Intuitively, this means that every algorithmic technique or complexity result that applies to H-coloring problems for directed graphs H applies just as well to general CSPs. In particular, one can ask whether the Hell–Nešetřil theorem can be extended to directed graphs. By the above theorem, this is equivalent to the Feder–Vardi conjecture (aka CSP conjecture, dichotomy conjecture) on CSP dichotomy, which states that for every constraint language Γ, CSP(Γ) is NP-complete or in P. This conjecture was proved in 2017 independently by Dmitry Zhuk and Andrei Bulatov, leading to the following corollary:
Corollary (Bulatov 2017; Zhuk 2017): The H-coloring problem on directed graphs, for a fixed H, is either in P or NP-complete.
Homomorphisms from a fixed family of graphs
The homomorphism problem with a single fixed graph G on the left side of input instances can be solved by brute force in time |V(H)|^O(|V(G)|), so polynomial in the size of the input graph H. In other words, the problem is trivially in P for graphs G of bounded size. The interesting question is then what other properties of G, besides size, make polynomial algorithms possible.
The crucial property turns out to be treewidth, a measure of how tree-like the graph is. For a graph G of treewidth at most k and a graph H, the homomorphism problem can be solved in time |V(H)|^O(k) with a standard dynamic programming approach. In fact, it is enough to assume that the core of G has treewidth at most k. This holds even if the core is not known.
The exponent in the |V(H)|^O(k)-time algorithm cannot be lowered significantly: no algorithm with running time |V(H)|^o(tw(G)/log tw(G)) exists, assuming the exponential time hypothesis (ETH), even if the inputs are restricted to any class of graphs of unbounded treewidth.
The ETH is an unproven assumption similar to P ≠ NP, but stronger.
Under the same assumption, there are also essentially no other properties that can be used to get polynomial time algorithms. This is formalized as follows:
Theorem (Grohe): For a computable class of graphs C, the homomorphism problem for instances (G, H) with G in C is in P if and only if the graphs in C have cores of bounded treewidth (assuming ETH).
One can ask whether the problem is at least solvable in a time arbitrarily highly dependent on G, but with a fixed polynomial dependency on the size of H.
The answer is again positive if we limit G to a class of graphs with cores of bounded treewidth, and negative for every other class.
In the language of parameterized complexity, this formally states that the homomorphism problem, parameterized by the size (number of edges) of G, exhibits a dichotomy: it is fixed-parameter tractable if the graphs in the class have cores of bounded treewidth, and W[1]-complete otherwise.
The same statements hold more generally for constraint satisfaction problems (or for relational structures, in other words). The only assumption needed is that constraints can involve only a bounded number of variables (all relations are of some bounded arity, 2 in the case of graphs). The relevant parameter is then the treewidth of the primal constraint graph.
| Mathematics | Graph theory | null |
676393 | https://en.wikipedia.org/wiki/Dock | Dock | The word dock () in American English refers to one or a group of human-made structures that are involved in the handling of boats or ships (usually on or near a shore). In British English, the term is not used the same way as in American English; it is used to mean the area of water that is next to or around a wharf or quay. The exact meaning varies among different variants of the English language.
"Dock" may also refer to a dockyard (also known as a shipyard) where the loading, unloading, building, or repairing of ships occurs.
History
The earliest known docks were those discovered in Wadi al-Jarf, an ancient Egyptian harbor of Pharaoh Khufu, dating from c. 2500 BC, located on the Red Sea coast. Archaeologists also discovered anchors and storage jars near the site.
A dock from Lothal in India dates from 2400 BC and was located away from the main current to avoid deposition of silt. Modern oceanographers have observed that the ancient Harappans must have possessed great knowledge relating to tides in order to build such a dock on the ever-shifting course of the Sabarmati, as well as exemplary hydrography and maritime engineering. This is the earliest known dock found in the world equipped to berth and service ships.
It is speculated that Lothal engineers studied tidal movements and their effects on brick-built structures, since the walls are of kiln-burnt bricks. This knowledge also enabled them to select Lothal's location in the first place, as the Gulf of Khambhat has the highest tidal amplitude and ships can be sluiced through flow tides in the river estuary. The engineers built a trapezoidal structure, with north–south arms of average 21.8 metres (71.5 ft), and east–west arms of 37 metres (121 ft).
British English
In British English, a dock is an enclosed area of water used for loading, unloading, building or repairing ships. Such a dock may be created by building enclosing harbour walls into an existing natural water space, or by excavation within what would otherwise be dry land.
There are specific types of dock structures where the water level is controlled:
A wet dock or impounded dock is a variant in which the water is impounded either by dock gates or by a lock, thus allowing ships to remain afloat at low tide in places with high tidal ranges. The level of water in the dock is maintained despite the rising and falling of the tide. This makes transfer of cargo easier. It works like a lock which controls the water level and allows passage of ships. The world's first enclosed wet dock with lock gates to maintain a constant water level irrespective of tidal conditions was the Howland Great Dock on the River Thames, built in 1703. The dock was merely a haven surrounded by trees, with no unloading facilities. The world's first commercial enclosed wet dock, with quays and unloading warehouses, was the Old Dock at Liverpool, built in 1715, which held up to 100 ships. The dock reduced ship waiting times, giving quick turnarounds and greatly improving the throughput of cargo.
A drydock is another variant, also with dock gates, which can be emptied of water to allow investigation and maintenance of the underwater parts of ships.
A floating dry dock (sometimes just floating dock) is a submersible structure which lifts ships out of the water to allow dry docking where no land-based facilities are available.
Where the water level is not controlled berths may be:
Floating, where there is always sufficient water to float the ship.
NAABSA (Not Always Afloat But Safely Aground) where ships settle on the bottom at low tide. Ships using NAABSA facilities have to be designed for them.
A dockyard (or shipyard) consists of one or more docks, usually with other structures.
American English
In American English, dock is technically synonymous with pier or wharf—any human-made structure in the water intended for people to be on. However, in modern use, pier is generally used to refer to structures originally intended for industrial use, such as seafood processing or shipping, and more recently for cruise ships, and dock is used for almost everything else, often with a qualifier, such as ferry dock, swimming dock, ore dock and others. At the same time, pier is also commonly used to refer to wooden or metal structures that extend into the ocean from beaches and are used, for the most part, to accommodate fishing in the ocean without using a boat.
In American English, the term for the water area between piers is slip.
In parts of both the US and Canada
In the cottage country of Canada and the United States, a dock is a wooden platform built over water, with one end secured to the shore. The platform is used for the boarding and offloading of small boats.
| Technology | Coastal infrastructure | null |
676502 | https://en.wikipedia.org/wiki/Rogue%20wave | Rogue wave | Rogue waves (also known as freak waves or killer waves) are large and unpredictable surface waves that can be extremely dangerous to ships and isolated structures such as lighthouses. They are distinct from tsunamis, which are long wavelength waves, often almost unnoticeable in deep waters and are caused by the displacement of water due to other phenomena (such as earthquakes). A rogue wave at the shore is sometimes called a sneaker wave.
In oceanography, rogue waves are more precisely defined as waves whose height is more than twice the significant wave height (Hs or SWH), which is itself defined as the mean of the largest third of waves in a wave record. Rogue waves do not appear to have a single distinct cause but occur where physical factors such as high winds and strong currents cause waves to merge to create a single large wave. Recent research suggests sea state crest-trough correlation leading to linear superposition may be a dominant factor in predicting the frequency of rogue waves.
Among other causes, studies of nonlinear waves such as the Peregrine soliton, and waves modeled by the nonlinear Schrödinger equation (NLS), suggest that modulational instability can create an unusual sea state where a "normal" wave begins to draw energy from other nearby waves, and briefly becomes very large. Such phenomena are not limited to water and are also studied in liquid helium, nonlinear optics, and microwave cavities. A 2012 study reported that in addition to the Peregrine soliton reaching up to about three times the height of the surrounding sea, a hierarchy of higher order wave solutions could also exist having progressively larger sizes and demonstrated the creation of a "super rogue wave" (a breather around five times higher than surrounding waves) in a water-wave tank.
A 2012 study supported the existence of oceanic rogue holes, the inverse of rogue waves, where the depth of the hole can reach more than twice the significant wave height. Although it is often claimed that rogue holes have never been observed in nature despite replication in wave tank experiments, there is a rogue hole recording from an oil platform in the North Sea, revealed in Kharif et al. The same source also reveals a recording of what is known as the 'Three Sisters'.
Background
Rogue waves are waves in open water that are much larger than surrounding waves. More precisely, rogue waves have a height which is more than twice the significant wave height (Hs or SWH). They can be caused when currents or winds cause waves to travel at different speeds, and the waves merge to create a single large wave; or when nonlinear effects cause energy to move between waves to create a single extremely large wave.
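As a small numeric illustration of these definitions (synthetic data, not a real wave record), the significant wave height and the twice-SWH rogue criterion can be computed directly:

```python
import numpy as np

rng = np.random.default_rng(0)
heights = rng.rayleigh(scale=2.0, size=1000)  # synthetic wave heights (m)

# SWH: mean of the largest third of wave heights in the record.
largest_third = np.sort(heights)[-len(heights) // 3:]
swh = largest_third.mean()

# Rogue criterion used above: height more than twice the SWH.
rogues = heights[heights > 2 * swh]
print(f"SWH = {swh:.2f} m, candidate rogue waves: {len(rogues)}")
```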
Once considered mythical and lacking hard evidence, rogue waves are now proven to exist and are known to be natural ocean phenomena. Eyewitness accounts from mariners and damage inflicted on ships have long suggested they occur. Still, the first scientific evidence of their existence came with the recording of a rogue wave by the Gorm platform in the central North Sea in 1984, when a stand-out wave was detected in a relatively low sea state. However, what caught the attention of the scientific community was the digital measurement of a rogue wave at the Draupner platform in the North Sea on January 1, 1995; called the "Draupner wave", it had a recorded maximum wave height of 25.6 metres and peak elevation of 18.5 metres. During that event, minor damage was inflicted on the platform far above sea level, confirming the accuracy of the wave-height reading made by a downwards-pointing laser sensor.
The existence of rogue waves has since been confirmed by video and photographs, satellite imagery, radar of the ocean surface, stereo wave imaging systems, pressure transducers on the sea-floor, and oceanographic research vessels. In February 2000, a British oceanographic research vessel, the RRS Discovery, sailing in the Rockall Trough west of Scotland, encountered the largest waves ever recorded by any scientific instruments in the open ocean, with an SWH of 18.5 metres and individual waves up to 29.1 metres. In 2004, scientists using three weeks of radar images from European Space Agency satellites found ten rogue waves, each 25 metres or higher.
A rogue wave is a natural ocean phenomenon that is not caused by land movement, only lasts briefly, occurs in a limited location, and most often happens far out at sea. Rogue waves are considered rare, but potentially very dangerous, since they can involve the spontaneous formation of massive waves far beyond the usual expectations of ship designers, and can overwhelm the usual capabilities of ocean-going vessels which are not designed for such encounters. Rogue waves are, therefore, distinct from tsunamis. Tsunamis are caused by a massive displacement of water, often resulting from sudden movements of the ocean floor, after which they propagate at high speed over a wide area. They are nearly unnoticeable in deep water and only become dangerous as they approach the shoreline and the ocean floor becomes shallower; therefore, tsunamis do not present a threat to shipping at sea (e.g., the only ships lost in the 2004 Asian tsunami were in port.). These are also different from the wave known as a "hundred-year wave", which is a purely statistical description of a particularly high wave with a 1% chance to occur in any given year in a particular body of water.
Rogue waves have now been proven to cause the sudden loss of some ocean-going vessels. Well-documented instances include the freighter MS München, lost in 1978. Rogue waves have been implicated in the loss of other vessels, including the Ocean Ranger, a semisubmersible mobile offshore drilling unit that sank in Canadian waters on 15 February 1982. In 2007, the United States' National Oceanic and Atmospheric Administration (NOAA) compiled a catalogue of more than 50 historical incidents probably associated with rogue waves.
History of rogue wave knowledge
Early reports
In 1826, French scientist and naval officer Jules Dumont d'Urville reported waves as high as 33 metres in the Indian Ocean, with three colleagues as witnesses, yet he was publicly ridiculed by fellow scientist François Arago. In that era, the thought was widely held that no wave could exceed 9 metres. Author Susan Casey wrote that much of that disbelief came because there were very few people who had seen a rogue wave and survived; until the advent of steel double-hulled ships of the 20th century, "people who encountered rogue waves generally weren't coming back to tell people about it."
Pre-1995 research
Unusual waves have been studied scientifically for many years (for example, John Scott Russell's wave of translation, an 1834 study of a soliton wave). Still, these were not linked conceptually to sailors' stories of encounters with giant rogue ocean waves, as the latter were believed to be scientifically implausible.
Since the 19th century, oceanographers, meteorologists, engineers, and ship designers have used a statistical model known as the Gaussian function (or Gaussian Sea or standard linear model) to predict wave height, on the assumption that wave heights in any given sea are tightly grouped around a central value equal to the average of the largest third, known as the significant wave height (SWH). In a storm sea with an SWH of 12 metres, the model suggests hardly ever would a wave higher than 15 metres occur. It suggests one of 30 metres could indeed happen, but only once in 10,000 years. This basic assumption was well accepted, though acknowledged to be an approximation. Using a Gaussian form to model waves has been the sole basis of virtually every text on that topic for the past 100 years.
The first known scientific article on "freak waves" was written by Professor Laurence Draper in 1964. In that paper, he documented the efforts of the National Institute of Oceanography in the early 1960s to record wave height, including the highest wave recorded at that time. Draper also described freak wave holes.
Research on cross-swell waves and their contribution to rogue wave studies
Before the Draupner wave was recorded in 1995, early research had already made significant strides in understanding extreme wave interactions. In 1979, Dik Ludikhuize and Henk Jan Verhagen at TU Delft successfully generated cross-swell waves in a wave basin. Although only monochromatic waves could be produced at the time, their findings, reported in 1981, showed that individual wave heights could be added together even when exceeding breaker criteria. This phenomenon provided early evidence that waves could grow significantly larger than anticipated by conventional theories of wave breaking.
This work highlighted that in cases of crossing waves, wave steepness could increase beyond usual limits. Although the waves studied were not as extreme as rogue waves, the research provided an understanding of how multidirectional wave interactions could lead to extreme wave heights - a key concept in the formation of rogue waves. The crossing wave phenomenon studied in the Delft Laboratory therefore had direct relevance to the unpredictable rogue waves encountered at sea.
Research published in 2024 by TU Delft and other institutions has subsequently demonstrated that waves coming from multiple directions can grow up to four times steeper than previously imagined.
The 1995 Draupner wave
The Draupner wave was the first rogue wave to be detected by a measuring instrument. The wave was recorded in 1995 at Unit E of the Draupner platform, a gas pipeline support complex located in the North Sea southwest of the southern tip of Norway.
At 15:24 UTC on 1 January 1995, the device recorded a rogue wave with a maximum wave height of 25.6 metres. Peak elevation above still water level was 18.5 metres. The reading was confirmed by the other sensors. In the area, the SWH at the time was about 12 metres, so the Draupner wave was more than twice as tall and steep as its neighbors, with characteristics that fell outside any known wave model. The wave caused enormous interest in the scientific community.
Subsequent research
Following the evidence of the Draupner wave, research in the area became widespread.
The first scientific study to comprehensively prove that freak waves exist, which are clearly outside the range of Gaussian waves, was published in 1997. Some research confirms that observed wave height distribution, in general, follows well the Rayleigh distribution. Still, in shallow waters during high energy events, extremely high waves are rarer than this particular model predicts. From about 1997, most leading authors acknowledged the existence of rogue waves with the caveat that wave models could not replicate rogue waves.
Statoil researchers presented a paper in 2000, collating evidence that freak waves were not the rare realizations of a typical or slightly non-Gaussian sea surface population (classical extreme waves) but were the typical realizations of a rare and strongly non-Gaussian sea surface population of waves (freak extreme waves). Leading researchers from around the world attended the first Rogue Waves 2000 workshop, held in Brest in November 2000.
In 2000, the British oceanographic vessel RRS Discovery recorded a 29-metre wave off the coast of Scotland near Rockall. This was a scientific research vessel fitted with high-quality instruments. Subsequent analysis determined that under severe gale-force conditions, a ship-borne wave recorder measured individual waves up to 29.1 metres from crest to trough, and a maximum SWH of 18.5 metres. These were some of the largest waves recorded by scientific instruments up to that time. The authors noted that modern wave prediction models are known to significantly under-predict extreme sea states for large values of significant wave height (Hs). The analysis of this event took a number of years and noted that "none of the state-of-the-art weather forecasts and wave models (the information upon which all ships, oil rigs, fisheries, and passenger boats rely) had predicted these behemoths." In simple terms, a scientific model (and also ship design method) to describe the waves encountered did not exist. This finding was widely reported in the press, which reported that "according to all of the theoretical models at the time under this particular set of weather conditions, waves of this size should not have existed".
In 2004, the ESA MaxWave project identified more than 10 individual giant waves above 25 metres in height during a short survey period of three weeks in a limited area of the South Atlantic. By 2007, it was further proven via satellite radar studies that waves of such extreme crest-to-trough heights occur far more frequently than previously thought. Rogue waves are now known to occur in all of the world's oceans many times each day.
Rogue waves are now accepted as a common phenomenon. Professor Akhmediev of the Australian National University has stated that 10 rogue waves exist in the world's oceans at any moment. Some researchers have speculated that roughly three of every 10,000 waves on the oceans achieve rogue status, yet in certain spotssuch as coastal inlets and river mouthsthese extreme waves can make up three of every 1,000 waves, because wave energy can be focused.
Rogue waves may also occur in lakes. A phenomenon known as the "Three Sisters" is said to occur in Lake Superior when a series of three large waves forms. The second wave hits the ship's deck before the first wave clears. The third incoming wave adds to the two accumulated backwashes and suddenly overloads the ship deck with large amounts of water. The phenomenon is one of various theorized causes of the sinking of the SS Edmund Fitzgerald on Lake Superior in November 1975.
A 2012 study reported that in addition to the Peregrine soliton reaching up to about 3 times the height of the surrounding sea, a hierarchy of higher-order wave solutions could also exist having progressively larger sizes, and demonstrated the creation of a "super rogue wave" (a breather around 5 times higher than surrounding waves) in a water-wave tank. Also in 2012, researchers at the Australian National University proved the existence of "rogue wave holes", an inverted profile of a rogue wave. Their research created rogue wave holes on the water surface in a water-wave tank. In maritime folklore, stories of rogue holes are as common as stories of rogue waves. They had followed from theoretical analysis but had never been proven experimentally.
"Rogue wave" has become a near-universal term used by scientists to describe isolated, large-amplitude waves that occur more frequently than expected for normal, Gaussian-distributed, statistical events. Rogue waves appear ubiquitous and are not limited to the oceans. They appear in other contexts and have recently been reported in liquid helium, nonlinear optics, and microwave cavities. Marine researchers universally now accept that these waves belong to a specific kind of sea wave, not considered by conventional models for sea wind waves. A 2015 paper studied the wave behavior around a rogue wave, including optical and the Draupner wave, and concluded, "rogue events do not necessarily appear without warning but are often preceded by a short phase of relative order".
In 2019, researchers succeeded in producing a wave with similar characteristics to the Draupner wave (steepness and breaking), and proportionately greater height, using multiple wavetrains meeting at an angle of 120°. Previous research had strongly suggested that the wave resulted from an interaction between waves from different directions ("crossing seas"). Their research also highlighted that wave-breaking behavior was not necessarily as expected. If waves met at an angle less than about 60°, then the top of the wave "broke" sideways and downwards (a "plunging breaker"). Still, from about 60° and greater, the wave began to break vertically upwards, creating a peak that did not reduce the wave height as usual but instead increased it (a "vertical jet"). They also showed that the steepness of rogue waves could be reproduced in this manner. Lastly, they observed that optical instruments such as the laser used for the Draupner wave might be somewhat confused by the spray at the top of the wave if it broke, and this could lead to appreciable uncertainty in the measured wave height. They concluded, "... the onset and type of wave breaking play a significant role and differ significantly for crossing and noncrossing waves. Crucially, breaking becomes less crest-amplitude limiting for sufficiently large crossing angles and involves the formation of near-vertical jets".
Extreme rogue wave events
On 17 November 2020, a buoy moored on Amphitrite Bank in the Pacific Ocean off Ucluelet, Vancouver Island, British Columbia, Canada, recorded a lone 17.6-metre wave among surrounding waves about 6 metres in height. The wave exceeded the surrounding significant wave heights by a factor of 2.93. When the wave's detection was revealed to the public in February 2022, one scientific paper and many news outlets christened the event as "the most extreme rogue wave event ever recorded" and a "once-in-a-millennium" event, claiming that at about three times the height of the waves around it, the Ucluelet wave set a record as the most extreme rogue wave ever recorded at the time in terms of its height in proportion to surrounding waves, and that a wave three times the height of those around it was estimated to occur on average only once every 1,300 years worldwide.
The Ucluelet event generated controversy. Analysis of scientific papers dealing with rogue wave events since 2005 revealed the claims for the record-setting nature and rarity of the wave to be incorrect. The paper Oceanic rogue waves by Dysthe, Krogstad and Muller reports on an event in the Black Sea in 2004 which was far more extreme than the Ucluelet wave: the Datawell Waverider buoy reported a wave 3.91 times the significant wave height, as detailed in the paper. Thorough inspection of the buoy after the recording revealed no malfunction. The authors of the paper that reported the Black Sea event assessed the wave as "anomalous" and suggested several theories on how such an extreme wave may have arisen. The Black Sea event differs in the fact that it, unlike the Ucluelet wave, was recorded with a high-precision instrument. The Oceanic rogue waves paper also reports even more extreme waves from a different source, but these were possibly overestimated, as assessed by the data's own authors. The Black Sea wave occurred in relatively calm weather.
Furthermore, a paper by I. Nikolkina and I. Didenkulova also documents waves more extreme than the Ucluelet wave. In it, they infer that in 2006 a wave appeared in the Pacific Ocean off the Port of Coos Bay, Oregon, in a significant wave height of . The ratio is 5.38, almost twice that of the Ucluelet wave. The same paper reveals another incident as marginally more extreme than the Ucluelet event, and also assesses a report of an wave in a significant wave height of , though the authors cast doubt on that claim. A paper written by Craig B. Smith in 2007 reported on an incident in the North Atlantic in which the submarine Grouper was hit by a 30-meter wave in calm seas.
Causes
Because the phenomenon of rogue waves is still a matter of active research, clearly stating what the most common causes are or whether they vary from place to place is premature. The areas of highest predictable risk appear to be where a strong current runs counter to the primary direction of travel of the waves; the area near Cape Agulhas off the southern tip of Africa is one such area. The warm Agulhas Current runs to the southwest, while the dominant winds are westerlies, but since this thesis does not explain the existence of all waves that have been detected, several different mechanisms are likely, with localized variation. Suggested mechanisms for freak waves include:
Diffractive focusing: According to this hypothesis, coast shape or seabed shape directs several small waves to meet in phase. Their crest heights combine to create a freak wave.
Focusing by currents: Waves from one current are driven into an opposing current. This results in a shortening of wavelength, causing shoaling (i.e., an increase in wave height), and oncoming wave trains compress together into a rogue wave. This happens off the South African coast, where the Agulhas Current is countered by westerlies.
Nonlinear effects (modulational instability): A rogue wave may occur by natural, nonlinear processes from a random background of smaller waves. In such a case, it is hypothesized, an unusual, unstable wave type may form, which "sucks" energy from other waves, growing to a near-vertical monster itself, before becoming too unstable and collapsing shortly thereafter. One simple model for this is a wave equation known as the nonlinear Schrödinger equation (NLS), in which a normal and perfectly accountable (by the standard linear model) wave begins to "soak" energy from the waves immediately fore and aft, reducing them to minor ripples compared to other waves. The NLS can be used in deep-water conditions. In shallow water, waves are described by the Korteweg–de Vries equation or the Boussinesq equation; these equations also have nonlinear contributions and admit solitary-wave solutions. The terms soliton (a type of self-reinforcing wave) and breather (a wave in which energy concentrates in a localized and oscillatory fashion) are used for some of these waves, including the well-studied Peregrine soliton (a dimensionless sketch of this equation and its Peregrine solution follows this list). Studies show that nonlinear effects could arise in bodies of water: a small-scale rogue wave consistent with the NLS equation (the Peregrine soliton) was produced in a laboratory water-wave tank in 2011.
Normal part of the wave spectrum: Some studies argue that many waves classified as rogue waves (with the sole condition that they exceed twice the SWH) are not freaks but just rare, random samples of the wave-height distribution, and are, as such, statistically expected to occur at a rate of about one rogue wave every 28 hours (a back-of-envelope version of this estimate also follows this list). This is commonly discussed as the question "Freak Waves: Rare Realizations of a Typical Population Or Typical Realizations of a Rare Population?" According to this hypothesis, most real-world encounters with huge waves can be explained by linear wave theory (or weakly nonlinear modifications thereof), without the need for special mechanisms like the modulational instability. Recent studies analyzing billions of wave measurements by wave buoys demonstrate that rogue wave occurrence rates in the ocean can be explained by linear theory when the finite spectral bandwidth of the wave spectrum is taken into account. However, whether weakly nonlinear dynamics can explain even the largest rogue waves (such as those exceeding three times the significant wave height, which would be exceedingly rare in linear theory) is not yet known. This has also led to criticism questioning whether defining rogue waves using only their relative height is meaningful in practice.
Constructive interference of elementary waves: Rogue waves can result from the constructive interference (dispersive and directional focusing) of elementary three-dimensional waves, enhanced by nonlinear effects.
Wind wave interactions: While wind alone is unlikely to generate a rogue wave, its effect combined with other mechanisms may provide a fuller explanation of freak-wave phenomena. As the wind blows over the ocean, energy is transferred to the sea surface. When strong winds from a storm blow against the direction of an ocean current, the forces might be strong enough to generate rogue waves randomly. Theories of instability mechanisms for the generation and growth of wind waves – though not of the causes of rogue waves – are provided by Phillips and Miles.
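The statistical baseline invoked in the "normal part of the wave spectrum" hypothesis can be made concrete with a standard back-of-envelope estimate, assuming a narrow-banded linear sea in which individual wave heights H follow a Rayleigh distribution (an illustrative textbook assumption, not the specific model of any study cited here). The exceedance probability is

P(H > α·H_s) = exp(−2α²),

so for the usual rogue-wave threshold α = 2 this gives exp(−8) ≈ 3.4 × 10⁻⁴, i.e. roughly one wave in 3,000. Converting that rate into a waiting time such as the "one every 28 hours" quoted above requires an assumed mean wave period and counting convention, which vary between studies and are one reason published rates differ; finite-bandwidth corrections of the kind mentioned above further modify this narrow-band figure.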
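For the nonlinear mechanism, a minimal mathematical sketch may help; the normalization below is one standard dimensionless form of the focusing NLS, given for illustration rather than as the equation used in any particular study cited in this article:

i ∂ψ/∂t + (1/2) ∂²ψ/∂x² + |ψ|² ψ = 0.

Its uniform plane-wave solution ψ₀ = e^{it} is unstable to long-wavelength perturbations (the modulational, or Benjamin–Feir, instability). The Peregrine soliton is the rational breather solution

ψ(x, t) = e^{it} [ 1 − 4(1 + 2it) / (1 + 4x² + 4t²) ],

which grows out of the uniform background, reaches exactly three times the background amplitude at x = t = 0, and then subsides back into it, behavior often summarized as a wave that "appears from nowhere and disappears without a trace".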
The spatiotemporal focusing seen in the NLS equation can also occur when the non-linearity is removed. In this case, focusing is primarily due to different waves coming into phase rather than any energy-transfer processes. Further analysis of rogue waves using a fully nonlinear model by R. H. Gibbs (2005) brings this mode into question, as it is shown that a typical wave group focuses in such a way as to produce a significant wall of water at the cost of a reduced height.
A rogue wave, and the deep trough commonly seen before and after it, may last only for some minutes before either breaking or reducing in size again. Rather than occurring in isolation, a rogue wave may also be part of a wave packet consisting of a few rogue waves; such rogue wave groups have been observed in nature.
Research efforts
A number of research programmes focused on rogue waves are currently underway or have concluded, including:
In the course of Project MaxWave, researchers from the GKSS Research Centre, using data collected by ESA satellites, identified a large number of radar signatures that have been portrayed as evidence for rogue waves. Further research is underway to develop better methods of translating the radar echoes into sea surface elevation, but at present this technique is not proven.
The Australian National University, working in collaboration with Hamburg University of Technology and the University of Turin, has been conducting experiments in nonlinear dynamics to try to explain rogue or killer waves. The "Lego Pirate" video has been widely used and quoted to describe what they call "super rogue waves", which their research suggests can be up to five times bigger than the other waves around them.
The European Space Agency continues to do research into rogue waves by radar satellite.
United States Naval Research Laboratory, the science arm of the Navy and Marine Corps, published results of its modelling work in 2015.
Massachusetts Institute of Technology (MIT)'s research in this field is ongoing. Two researchers there, partially supported by the Naval Engineering Education Consortium (NEEC), considered the problem of short-term prediction of rare, extreme water waves and developed and published research on a predictive tool with a forecast horizon of about 25 wave periods. This tool can give ships and their crews a two- to three-minute warning of a potentially catastrophic impact, allowing the crew some time to shut down essential operations on a ship (or offshore platform). The authors cite landing on an aircraft carrier as a prime example.
The University of Colorado and the University of Stellenbosch
Kyoto University
Swinburne University of Technology in Australia recently published work on the probabilities of rogue waves.
The University of Oxford Department of Engineering Science published a comprehensive review of the science of rogue waves in 2014. In 2019, a team from the Universities of Oxford and Edinburgh recreated the Draupner wave in a laboratory.
University of Western Australia
Tallinn University of Technology in Estonia
Extreme Seas Project, funded by the EU.
At Umeå University in Sweden, a research group in August 2006 showed that normal stochastic wind-driven waves can suddenly give rise to monster waves. The nonlinear evolution of the instabilities was investigated by means of direct simulations of the time-dependent system of nonlinear equations.
The Great Lakes Environmental Research Laboratory did research in 2002 which dispelled the long-held contention that rogue waves were rare occurrences.
The University of Oslo has conducted research into crossing sea state and rogue wave probability during the Prestige accident; nonlinear wind-waves, their modification by tidal currents, and application to Norwegian coastal waters; general analysis of realistic ocean waves; modelling of currents and waves for sea structures and extreme wave events; rapid computations of steep surface waves in three dimensions, and comparison with experiments; and very large internal waves in the ocean.
The National Oceanography Centre in the United Kingdom
Scripps Institute of Oceanography in the United States
Ritmare project in Italy.
University of Copenhagen and University of Victoria
Other media
Researchers at UCLA observed rogue-wave phenomena in microstructured optical fibers near the threshold of soliton supercontinuum generation and characterized the initial conditions for generating rogue waves in any medium. Research in optics has pointed out the role played by a Peregrine soliton that may explain those waves that appear and disappear without leaving a trace.
Rogue waves in other media appear to be ubiquitous and have also been reported in liquid helium, in quantum mechanics, in nonlinear optics, in microwave cavities, in Bose–Einstein condensate, in heat and diffusion, and in finance.
Reported encounters
Many of these encounters are reported only in the media and are not examples of open-ocean rogue waves. In popular culture, an endangering huge wave is often loosely called a "rogue wave", even though it has not been established that the reported event was a rogue wave in the scientific sense – i.e. one with characteristics very different from the surrounding waves in that sea state, and with a very low probability of occurrence.
This section lists a limited selection of notable incidents.
19th century
Eagle Island lighthouse (1861) – Water broke the glass of the structure's east tower and flooded it, implying a wave that surmounted the cliff and overwhelmed the tower.
Flannan Isles Lighthouse (1900) – Three lighthouse keepers vanished after a storm that resulted in wave-damaged equipment being found above sea level.
20th century
SS Kronprinz Wilhelm, September 18, 1901 – The most modern German ocean liner of its time (winner of the Blue Riband) was damaged on its maiden voyage from Cherbourg to New York by a huge wave. The wave struck the ship head-on.
RMS Lusitania (1910) – On the night of 10 January 1910, a wave struck the ship over the bow, damaging the forecastle deck and smashing the bridge windows.
Voyage of the James Caird (1916) – Sir Ernest Shackleton encountered a wave he termed "gigantic" while piloting a lifeboat from Elephant Island to South Georgia.
USS Memphis, August 29, 1916 – An armored cruiser, formerly known as the USS Tennessee, wrecked while stationed in the harbor of Santo Domingo by a succession of three waves, the largest estimated at 70 feet; 43 men were killed or lost.
RMS Homeric (1924) – Hit by a wave while sailing through a hurricane off the East Coast of the United States, injuring seven people, smashing numerous windows and portholes, carrying away one of the lifeboats, and snapping chairs and other fittings from their fastenings.
USS Ramapo (1933) – Triangulated at .
(1942) – Broadsided by a wave and listed briefly about 52° before slowly righting.
SS Michelangelo (1966) – A hole was torn in the superstructure, heavy glass above the waterline was smashed by the wave, and three people died.
(1975) – Lost on Lake Superior. A Coast Guard report blamed water entry through the hatches, which gradually filled the hold, or errors in navigation or charting causing damage from running onto shoals. However, another nearby ship, the , was hit at a similar time by two rogue waves and possibly a third, and this appeared to coincide with the sinking around 10 minutes later.
(1978) – Lost at sea, leaving only scattered wreckage and signs of sudden damage, including extreme forces above the water line. Although more than one wave was probably involved, a freak wave remains the most likely cause of the sinking.
Esso Languedoc (1980) – A wave washed across the deck from the stern of the French supertanker near Durban, South Africa.
Fastnet Lighthouse (1985) – Struck by a wave.
Draupner wave (North Sea, 1995) – The first rogue wave confirmed with scientific evidence, it had a maximum height of .
Queen Elizabeth 2 (1995) – Encountered a wave in the North Atlantic, during Hurricane Luis. The master said it "came out of the darkness" and "looked like the White Cliffs of Dover." Newspaper reports at the time described the cruise liner as attempting to "surf" the near-vertical wave in order not to be sunk.
21st century
U.S. Naval Research Laboratory ocean-floor pressure sensors detected a freak wave caused by Hurricane Ivan in the Gulf of Mexico, 2004. The wave was around high from peak to trough, and around long. Their computer models also indicated that waves may have exceeded in the eyewall.
Aleutian Ballad (Bering Sea, 2005) – Footage of what is identified as an wave appears in an episode of Deadliest Catch. The wave strikes the ship at night and cripples the vessel, causing the boat to tip for a short period onto its side. This is one of the few video recordings of what might be a rogue wave.
In 2006, researchers from the U.S. Naval Institute theorized that rogue waves may be responsible for the unexplained loss of low-flying aircraft, such as U.S. Coast Guard helicopters, during search-and-rescue missions.
MS Louis Majesty (Mediterranean Sea, March 2010) was struck by three successive waves while crossing the Gulf of Lion on a Mediterranean cruise between Cartagena and Marseille. Two passengers were killed by flying glass when the second and third waves shattered a lounge window. The waves, which struck without warning, were all abnormally high with respect to the sea swell at the time of the incident.
On 28 December 2011, the Sea Shepherd vessel MV Brigitte Bardot was damaged by an 11 m (36.1 ft) rogue wave while pursuing the Japanese whaling fleet off the western coast of Australia. The main hull was cracked, and the port-side pontoon was held together by straps. The MV Brigitte Bardot was escorted back to Fremantle by the SSCS flagship, MV Steve Irwin, arriving at Fremantle Harbour on 5 January 2012. Both ships were followed by the ICR security vessel MV Shōnan Maru 2 at a distance of 5 nautical miles (9 km).
In 2019, Hurricane Dorian's extratropical remnant generated a rogue wave off the coast of Newfoundland.
In 2022, the Viking cruise ship Viking Polaris was hit by a rogue wave on its way to Ushuaia, Argentina. One person died, four more were injured, and the ship's scheduled route to Antarctica was canceled.
Quantifying the impact of rogue waves on ships
The loss of the in 1978 provided some of the first physical evidence of the existence of rogue waves. München was a state-of-the-art cargo ship with multiple watertight compartments and an expert crew. She was lost with all crew, and the wreck has never been found. The only evidence found was the starboard lifeboat, recovered from floating wreckage some time later. The lifeboats hung from forward and aft blocks above the waterline. The pins had been bent back from forward to aft, indicating that the lifeboat hanging below them had been struck by a wave that had run from fore to aft of the ship and torn the lifeboat away. To exert such force, the wave must have been considerably higher than . At the time of the inquiry, the existence of rogue waves was considered so statistically unlikely as to be near impossible. Consequently, the Maritime Court investigation concluded that the severe weather had somehow created an "unusual event" that had led to the sinking of the München.
In 1980, the MV Derbyshire was lost during Typhoon Orchid south of Japan, along with all of her crew. The Derbyshire was an ore-bulk oil combination carrier built in 1976. At 91,655 gross register tons, she remains the largest British ship ever lost at sea. The wreck was found in June 1994. The survey team deployed a remotely operated vehicle to photograph the wreck. A private report published in 1998 prompted the British government to reopen a formal investigation into the sinking. The investigation included a comprehensive survey by the Woods Hole Oceanographic Institution, which took 135,774 pictures of the wreck during two surveys. The formal forensic investigation concluded that the ship sank because of structural failure and absolved the crew of any responsibility. Most notably, the report determined the detailed sequence of events that led to the structural failure of the vessel. A third comprehensive analysis was subsequently done by Douglas Faulkner, professor of marine architecture and ocean engineering at the University of Glasgow. His 2001 report linked the loss of the Derbyshire with the emerging science on freak waves, concluding that the Derbyshire was almost certainly destroyed by a rogue wave.
Work by sailor and author Craig B. Smith in 2007 confirmed prior forensic work by Faulkner in 1998 and determined that the Derbyshire was exposed to a hydrostatic pressure of a "static head" of water of about with a resultant static pressure of . This is in effect of seawater (possibly a super rogue wave) flowing over the vessel. The deck cargo hatches on the Derbyshire were determined to be the key point of failure when the rogue wave washed over the ship. The design of the hatches only allowed for a static pressure less than of water or , meaning that the typhoon load on the hatches was more than 10 times the design load. The forensic structural analysis of the wreck of the Derbyshire is now widely regarded as irrefutable.
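The "static head" reasoning in this analysis is simple hydrostatics, and a rough illustration shows the scale involved (the 20 m head used below is an assumed round number for the sketch, not a figure from Smith's report): a column of seawater of height h exerts a pressure

p = ρ g h ≈ 1025 kg/m³ × 9.81 m/s² × 20 m ≈ 200 kPa,

equivalent to roughly 20 t/m² of static load. As noted above, the typhoon loading on the Derbyshire's hatch covers was estimated at more than ten times their design load.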
In addition, fast-moving waves are now known to also exert extremely high dynamic pressure. Plunging or breaking waves can cause short-lived impulse pressure spikes called Gifle peaks, which can reach pressures of (or more) for milliseconds, sufficient to cause brittle fracture of mild steel. Evidence of failure by this mechanism was also found on the Derbyshire. Smith documented scenarios where hydrodynamic pressure up to or over 500 metric tonnes/m² could occur.
In 2004, an extreme wave was recorded impacting the Alderney Breakwater, Alderney, in the Channel Islands. This breakwater is exposed to the Atlantic Ocean. The peak pressure recorded by a shore-mounted transducer was . This pressure far exceeds almost any design criteria for modern ships, and this wave would have destroyed almost any merchant vessel.
Design standards
In November 1997, the International Maritime Organization (IMO) adopted new rules covering survivability and structural requirements for bulk carriers of and upwards. The bulkhead and double bottom must be strong enough to allow the ship to survive flooding in hold one unless loading is restricted.
Rogue waves present considerable danger for several reasons: they are rare, unpredictable, may appear suddenly or without warning, and can impact with tremendous force. A wave in the usual "linear" model would have a breaking force of . Although modern ships are typically designed to tolerate a breaking wave of 15 t/m², a rogue wave can dwarf both of these figures with a breaking force far exceeding 100 t/m². Smith presented calculations using the International Association of Classification Societies (IACS) Common Structural Rules for a typical bulk carrier.
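For orientation, the customary unit here converts directly to SI (assuming t/m² denotes metric tonnes-force per square meter, which is how such figures are usually intended): 1 t/m² ≈ 9.81 kPa, so the 15 t/m² design figure corresponds to roughly 150 kPa, and the 100 t/m² rogue-wave figure to roughly 1 MPa.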
Peter Challenor, a scientist from the National Oceanography Centre in the United Kingdom, was quoted in Casey's book in 2010 as saying: "We don't have that random messy theory for nonlinear waves. At all." He added, "People have been working actively on this for the past 50 years at least. We don't even have the start of a theory."
In 2006, Smith proposed that IACS Recommendation 34, pertaining to standard wave data, be modified so that the minimum design wave height be increased to . He presented analysis showing that sufficient evidence exists to conclude that high waves can be experienced in the 25-year lifetime of oceangoing vessels, and that high waves are less likely, but not out of the question. Therefore, a design criterion based on high waves seems inadequate when the risk of losing crew and cargo is considered. Smith also proposed that the dynamic force of wave impacts be included in the structural analysis.
The Norwegian offshore standards now consider extreme, severe wave conditions and require that a 10,000-year wave does not endanger a ship's integrity. W. Rosenthal noted that, as of 2005, rogue waves were not explicitly accounted for in classification societies' rules for ship design. As an example, DNV GL, one of the world's largest international certification bodies and classification societies, with main expertise in technical assessment, advisory, and risk management, publishes its Structure Design Load Principles, which remain largely based on the significant wave height and, as of January 2016, still did not include any allowance for rogue waves.
The U.S. Navy historically took the design position that the largest wave likely to be encountered was . Smith observed in 2007 that the Navy now believes that larger waves can occur, and the possibility of extreme waves that are steeper (i.e. do not have longer wavelengths) is now recognized. The Navy has not had to make any fundamental changes in ship design as a result of new knowledge of waves greater than 21.4 m, because its ships are built to higher standards than required.
Each of the more than 50 classification societies worldwide has its own rules. However, most new ships are built to the standards of the 12 members of the International Association of Classification Societies, which in 2006 implemented two sets of common structural rules: one for oil tankers and one for bulk carriers. These were later harmonised into a single set of rules.
| Physical sciences | Oceanography | Earth science |
676555 | https://en.wikipedia.org/wiki/Grand%20tourer | Grand tourer | A grand tourer (GT) is a type of car that is designed for high speed and long-distance driving with performance and luxury. The most common format is a front-engine, rear-wheel-drive two-door coupé with either a two-seat or a 2+2 arrangement. Grand tourers are often the coupé derivative of luxury saloons or sedans. Some models, such as the Ferrari 250 GT, Jaguar E-Type, and Aston Martin DB5, are considered classic examples of gran turismo cars.
The term is a near-calque from the Italian language phrase gran turismo, which became popular in the English language in the 1950s, evolving from fast touring cars and streamlined closed sports cars during the 1930s.
Origin in Europe
The grand touring car concept originated in Europe in the early 1950s, especially with the 1951 introduction of the Lancia Aurelia B20 GT, and its history features such luminaries of Italian automotive history as Vittorio Jano, Enzo Ferrari and Johnny Lurani. Motorsport became important in the evolution of the grand touring concept, and grand touring entries remain important in endurance sports-car racing. The grand touring definition implies material differences in performance, speed, comfort, and amenities between elite cars and those of ordinary motorists.
In the post-war United States, manufacturers were less inclined to adopt the "ethos of the GT car", preferring to build cars "suited to their long, straight, smooth roads and labor-saving lifestyles" with wide availability of powerful straight-six and V8 engines in all price ranges, like the 1955–1965 Chrysler 300. Despite this, the United States, enjoying early post-war economic expansion, became the largest market for European grand-touring cars, supplying transportation for movie stars, celebrities and the jet set; notably the Mercedes-Benz 300 SL (imported by Max Hoffman), the Jaguar XK120, and the Ferrari berlinettas (imported by Luigi Chinetti). Classic grand-touring cars, especially those from the post-war era, have since become valuable cars among wealthy collectors. Within ten years, grand touring cars found success penetrating the new American personal luxury car market.
Characteristics
The terms grand tourer, gran turismo, grande routière, and GT are among the most misused terms in motoring. The grand touring designation generally "means motoring at speed, in style, safety, and comfort". "Purists define gran turismo as the enjoyment, excitement and comfort of open-road touring."
According to Sam Dawson, news editor of Classic Cars, "the ideal is of a car with the ability to cross a continent at speed and in comfort yet provide driving thrills when demanded" and it should exhibit the following:
The engines "should be able to cope with cruising comfortably at the upper limits on all continental roads without drawbacks or loss of usable power".
"Ideally, the GT car should have been devised by its progenitors as a Grand Tourer, with all associated considerations in mind."
"It should be able to transport at least two comfortably with their luggage and have room to spare — probably in the form of a two plus two seating arrangement."
The design, both "inside and out, should be geared toward complete control by the driver".
Its "chassis and suspension provide suitable handling and roadholding on all routes" during travels.
Grand tourers emphasize comfort and handling over straight-out high performance or ascetic, spartan accommodations. In comparison, sports cars (also a "much abused and confused term") are typically more "crude" compared to "sophisticated Grand Touring machinery". However, the popularity of using GT for marketing purposes has meant that it has become a "much misused term, eventually signifying no more than a slightly tuned version of a family car with trendy wheels and a go-faster stripe on the side".
Historically, most GTs have been front-engined with rear-wheel drive, offering more cabin space than mid-mounted engine layouts. Softer suspensions, greater storage, and more luxurious appointments add to their appeal.
GT abbreviation in marketing
The GT abbreviation—and variations thereof—are often used as model names. However, some cars with GT in the model name are not actually grand touring cars.
Among the many variations of GT are:
GTA: Gran turismo alleggerita - the Italian word for 'lightweight'. GTAm indicates a modified version. GTA is also sometimes used for automatic transmission models.
GTB: Gran turismo berlinetta
GTC: Various uses including gran turismo compressore for supercharged engines, gran turismo cabriolet, gran turismo compact, gran turismo crossover and gran turismo corsa - the Italian word for 'racing'.
GTD: "Gran turismo diesel"
GT/E: "Gran turismo Einspritzung" - the German word for 'fuel injection'
GTE: "Grand touring estate"
GTi or GTI: "Grand touring injection", mostly used for hot hatches following the introduction of the Volkswagen Golf GTi
GTO: "Gran turismo omologato" - the Italian word for 'homologation'
GTR or GT-R: "Gran turismo and racing"
GTS: sometimes "Gran turismo spider" for convertible models. However, GTS has also been used for saloons and other body styles.
GT-T: "Gran turismo turbo"
GTV: "Gran turismo veloce" - the Italian word for 'fast'
GTX: "Grand tourisme extreme"
HGT: "High gran turismo"
World Championships and other GT racing series
Current World Championship
FIA World Endurance Championship – In operation since 2012, the current auto racing World Championship for both Sports prototypes and GT cars, organised by the Automobile Club de l'Ouest (ACO) and sanctioned by the Fédération Internationale de l'Automobile (FIA). The championship is considered a revival of the defunct World Sportscar Championship, which ended in 1992. In the 2012–2016 seasons, a World Cup for GT Manufacturers was awarded to the winning manufacturer of GT cars; it was promoted to the World GT Manufacturers' Championship title from the 2017 season, and the title was discontinued after the 2022 season.
Former World Championships
World Sportscar Championship – The first sports car racing world series, run from 1953 to 1992. Originally contested only by Sports prototypes, the series expanded to GT cars during the 1954 season, starting at the 1954 Mille Miglia, but from the 1985 season until its end it was again restricted to Sports prototypes only. In the 1962–1984 seasons (except the 1982 season), titles were awarded to manufacturers of GT cars alongside the manufacturers of Sports prototypes.
FIA GT1 World Championship – A short-lived series for GT cars run from 2010 to 2012, created by Stéphane Ratel Organisation (SRO) in an attempt to promote the FIA GT Championship to World Championship status.
Other GT racing series
Several past and present motor racing series have used "GT" in their name. These include:
LM GTE 1999–2023: A set of regulations for modified road cars, which is used for the 24 Hours of Le Mans race and several related racing series. LM GTE was originally called 'GT class' and was also known as GT2 class from 2005 to 2010. Also known as GTLM in the United States
GT World Challenge Europe 2013–present: A racing series for Group GT3 cars. It was launched as the FIA GT Series, which replaced the FIA GT Championship (1997–2009) and the FIA GT1 World Championship (2010–2012).
GT4 European Series 2007–present: A European amateur racing series with the least powerful class of GT cars.
IMSA GT3 Cup Challenge 2005–present: A North American racing series for Porsche 911 GT3 Cup cars.
FIA GT3 European Championship 2006–2012: A European amateur racing series for Group GT3 cars.
FIA R-GT: As part of its structure, the Group R regulations have a provision for GT cars, known as R-GT.
There have also been several classes of racing cars called GT. The Group GT3 regulations for modified road cars have been used for various racing series worldwide since 2006. The Group GT1 regulations were used for the fastest category of sports car racing from 1994 to 2001.
Examples of grand tourers
The inclusion of "grand tourer", "gran turismo", "GT" or similar in the model name does not necessarily mean that the car is a grand tourer since several manufacturers have used the terms for the marketing of cars that are not grand tourers.
Evolution of the gran turismo car
Grand touring car design evolved from vintage and pre-World War II fast touring cars and streamlined closed sports cars.
Italy developed the first gran turismo cars. The small, lightweight, and aerodynamic coupés, named the "berlinetta", originated in the 1930s. A contemporary French concept, known as "grande routière", emphasized style, elegance, luxury, and gentlemanly transcontinental touring; the grande routières were often larger cars than the Italian gran turismos. Italian designers saw that, compared to a traditional open two-seat sports car, the increase in weight and frontal area of an enclosed cabin for the driver and mechanic could be offset by the benefits of streamlining to reduce drag. Independent carrozzerie (coachbuilders) provided light and flexible fabric coachwork for powerful short-wheelbase fast-touring chassis by manufacturers such as Alfa Romeo. Later, Carrozzeria Touring of Milan pioneered sophisticated superleggera (super-lightweight) aluminum bodywork, allowing even more aerodynamic forms. The additional comfort of an enclosed cabin was beneficial for the Mille Miglia road race held in Italy's often wintry north.
1929 Alfa Romeo 6C 1750 GT
The first car to be named "gran turismo" was the 1929 Alfa Romeo 6C 1750 Gran Turismo, a sporting dual-purpose road/race chassis and engine specification that was available with a wide variety of body styles or carrozzeria. The influential Weymann fabric-bodied berlinetta version by Carrozzeria Touring, "an early example of what we generally perceive to be a GT car", was the winner of the Vetture Chiuse category at the 1931 Mille Miglia. An improved and supercharged version, the 6C 1750 GTC Gran Turismo Compressore, won the Vetture a Guida Interna category of the 1932 Mille Miglia. The Alfa Romeo 6C 1750 was designed by Vittorio Jano, who would later be instrumental in the design of the 1951 Lancia Aurelia B20 GT.
1935 Fiat 508 Balilla S berlinetta
From the basic Fiat 508 Balilla touring chassis came the SIATA and Fiat aerodynamic gran turismo-style Berlinetta Mille Miglias of 1933 and 1935. Siata was a Turin, Italy-based Fiat tuner, typical of a popular class of Italian artisan manufacturers of small gran turismo, sports and racing cars—usually Fiat based—that came to be known in the 1970s as Etceterini, such as Nardi, Abarth, Ermini and, in 1946, Cisitalia. The Fiat and SIATA berlinettas, influenced by the successful Alfa Romeo 6C GT/GTC coupés, competed in the Mille Miglia endurance race and were significant among the Weymann and Superleggera enclosed sporting cars appearing in the 1930s. They featured tuned Fiat engines and chassis, and bespoke carrozzeria, in common with the landmark post-war Cisitalia 202 SC, and are among the first small-displacement gran turismos.
1947 Cisitalia 202 SC
The first recognised motor race specifically for gran turismo cars was the 1949 Coppa Inter-Europa held at Monza. It was initially hoped by Italian motor industry observers that the small and struggling Italian sports and racing car manufacturer, Cisitalia, would find in the 1949 Coppa Inter-Europa regulations (initially called Turismo Veloce or Fast Touring) a category for its Cisitalia Tipo 202 SC—the road-going production coupé version of Cisitalia's single-seat D46 racing car and two-seat 202 open sports car. However, the Fiat-based 1100 cc four-cylinder Cisitalia was no match on the race track for Ferrari's new hand-built 2000 cc V12, and Ferrari dominated, taking the first three places. An 1100 cc class was hurriedly created, but not in time to save Cisitalia's business fortunes—the company's bankrupt owner Piero Dusio had already decamped to Argentina. The Cisitalia 202 SC gained considerable fame for the outstanding design of its Pinin Farina coachwork, and is credited with greatly influencing the style of subsequent berlinetta or fastback gran turismo coupés. A Cisitalia 202 "GT" is exhibited at the Museum of Modern Art in New York City.
1947 Maserati A6 1500
The Maserati A6 1500 won the 1500 cc class at the 1949 Coppa Inter-Europa. It was driven by Franco Bordoni, a former fighter ace of the Regia Aeronautica who had debuted as a pilota da corsa at the 1949 Mille Miglia. The A6 1500 was the first road-going production car offered by the Maserati factory, featuring a tubular chassis with independent front suspension and coil springs, its 1500 cc six-cylinder engine being derived from the Maserati brothers' pre-war voiturette racing engines. The A6 1500 wore an elegant two-door fastback coupé body, also by Pinin Farina.
1949 Ferrari 166 Inter
Enzo Ferrari, whose Scuderia Ferrari had been the racing division of Alfa Romeo from 1929 until 1938, parted ways with Alfa Romeo in 1939. Enzo Ferrari's first car (itself an Etceterini), the Fiat-based Auto Avio Costruzioni 815 racing sports car, debuted at the 1940 Mille Miglia. Two were produced. The first car constructed in Ferrari's name, the V12 125 S, also a racing sports car, debuted in 1947 at the Piacenza racing circuit. Again, only two were produced, but they rapidly evolved into the 159 and 166 models, including the 1949 Ferrari 166 Inter, a road-going berlinetta coupé with coachwork by Carrozzeria Touring and other coachbuilders.
The Ferrari 166 'Inter' S coupé model won the 1949 Coppa Inter-Europa motor race. Regulations stipulated body form and dimensions but did not at this time specify a minimum production quantity. The car was driven by Bruno Sterzi, and is recognized as the first Ferrari gran turismo.
After that race, the national governing body of Italian motorsport, CSAI (Commissione Sportiva Automobilistica Italiana), officially introduced a new class, called Gran Turismo Internazionale, for cars with production over thirty units per year, thereby ruling out Ferrari's hand-built berlinettas.
1951 Ferrari 212 Export
Ferrari's response for the new Italian Gran Turismo Internazionale championship in 1951 was the road/race Ferrari 212. Twenty-seven short-wheelbase competition versions called Export, some with increasingly popular gran turismo-style berlinetta coupé coachwork, were produced for enthusiasts (Ferrari called the first example the 212 MM), while the road version was called Inter. The Ferrari 212 Export featured long-range fuel tanks, high-compression pistons and triple Weber 32 DCF carburettors; power was 170 bhp from the 2600 cc Gioacchino Colombo-designed 'short-block' V12 engine, evolved from the earlier Ferrari 166 (2000 cc) and 195 (2300 cc). All versions came with the standard Ferrari five-speed non-synchromesh gearbox and hydraulic drum brakes. All 1951 Ferraris shared a double-tube frame chassis design evolved from the 166. Double-wishbone front suspension with a transverse leaf spring, and a live rear axle with semi-elliptic leaf springs and radius rods, were employed. The Ferrari 212 Export (212 MM) gran turismo berlinetta (chassis No. 0070M) debuted with a first-place overall finish at the April 1951 Coppa Inter-Europa, driven by Luigi Villoresi, and in June (chassis No. 0092E) was first in the gran turismo category at the Coppa della Toscana, driven by the Milanese Ferrari concessionaire and proprietor of Scuderia Guastalla, Franco Cornacchia. The 212 Export continued to serve Ferrari well in the Sports and GT categories until replaced by the 225 S, and although it would later be overshadowed by the internationally famous 250 GT, the 212 Export was an important model in the successful line of Colombo-engined V12 GT cars that made Ferrari legendary.
1951 Lancia Aurelia B20 GT
1951 saw the stunning debut of Lancia's Aurelia B20 GT.
Lancia had begun production in 1950 of their technically advanced Aurelia saloon; the design had been overseen by Vittorio Jano. At the 1951 Turin Motor Show, the Pinin Farina-bodied gran turismo B20 coupé version was unveiled to an enthusiastic motoring public. Here, finally, according to historians Jonathan Wood and Sam Dawson, was a fully realized production GT car, representing the starting point of the definitive grand tourer:
This outwardly conventional saloon bristled with innovation and ingenuity, in which the masterly hand of Vittorio Jano is apparent. In the B20 are elements of the Cisitalia of 1947, coupés which Pinin undertook on a 6C Alfa Romeo and Maserati in 1948, along with the Fiat 1100 S coupé with its rear accommodation for children. The original Aurelia had been under-powered and, in 1951, the V6 was enlarged to 1991 cc, which was also extended to the coupé, though in 75 rather than 70 bhp form as the B20 was developed as a sporting model in its own right. In addition the B20 had a shorter wheelbase and a higher rear axle ratio, making it a 100 mph car. Lancia chose the Gran Turismo name for its new model and the suggestion could only have come from Vittorio Jano himself, for had he not been responsible for the original 1750 Alfa Romeo of the same name back in 1929?
Four semi-ufficiali works B20 GTs, together with a number of privateer entrants, were sent to the Mille Miglia in April 1951, where the factory Bracco / Maglioli car finished second overall, behind only a Ferrari sports racer of twice the engine capacity. Lancia Aurelias swept the GT 2.0 liter division. In June 1951, Bracco was partnered with the "father of GT racing" himself, Johnny Lurani, to race a B20 GT at Le Mans, where they were victorious in the 2.0 liter sportscar division, placing a very creditable 12th overall. A 1–2 finish at the famous Coppa d'Oro delle Dolomiti, among other victories including the 6 Ore di Pescara, rounded out an astonishing debut racing season for this ground-breaking car, which won its division in the Italian GT Championship for Umberto Castiglioni in 1951. Lancia B20 GTs would go on to win the over 2.0 liter Italian GT Championship in 1953, 1954 and 1955 with the B20-2500.
1952 Fiat 8V "Otto Vu" Zagato
A surprise to the international press, who were not expecting a gran turismo berlinetta from Italy's largest manufacturer of everyday standard touring models, the Fiat 8V "Otto Vu" was unveiled at the Geneva Salon in March 1952 to international acclaim. Although not raced by the factory, the Otto Vu was raced by a number of private owners. Vincenzo Auricchio and Piero Bozzinio raced to fifth in the gran turismo category of the 1952 Mille Miglia, and Ovidio Capelli placed third in the GT 2000 cc class at the Coppa della Toscana in June with a special race-spec lightweight Zagato coupé; the GT category overall at this event was won by Franco Cornacchia's Ferrari 212 Export (see above). Capelli and the 8V Zagato topped this accomplishment by winning the GT category of the Pescara 12 Hours in August, ahead of two Lancias. The new Fiat 8V garnered sufficient competition points over the season to become the national two-liter GT Champion (a feat it repeated every year until 1959).
Elio Zagato, the coachbuilder's son, was successful in competition with the Otto Vu in 1954 and 1955, attracting further customer interest and leading Zagato to eventually develop two different GT racing versions. Upon his passing in 2009, Elio Zagato was described as a leading figure of Italian GT racing and design:
Elio Zagato, who has died aged 88, was one of the leading figures of Italian Gran Turismo (GT) racing and car-body design. In the 1950s, driving a Zagato-bodied Fiat 8V, Elio emerged as the consummate gentleman racer in Italian GT championship events. Zagato, his father's firm, provided the lithe, lightweight aluminium bodies for many of the Lancias, Alfa Romeos, Abarths and Maseratis that dominated these meetings. Elio won 82 races out of the 150 he entered, and won four of the five championships he entered. Working with the chief stylist Ercole Spada, Zagato produced some of the most beautiful GT designs of the era; spare and muscular cars such as the Aston Martin DB4GTZ, the Alfa Romeo Junior TZ and SZ, and the Lancia Flaminia Sport. These were minimalist shapes bereft of superfluous trim that introduced phrases such as "double bubble" roof to the car body design language: twin shallow domes, devised by Elio, to give extra head room and strengthen the roof. For lightness, Zagato pioneered the use of Perspex and of aerodynamics, with trademark forms such as the split or stub tail. Indeed, Elio would take prototypes out on the autostrada covered in wool tufts in order to test air flow over the body.
The 8V Otto Vu earned its name courtesy of its high-performance V8 engine (Ford having already trademarked "V8").
1954 Mercedes-Benz 300SL
The German automotive industry was devastated by the second World War, but in the post-war period a small number of firms brought it to prominence again. The emergence of the classic Porsche 356 is covered in the accompanying sports car article. In 1957 author John Stanford wrote: "The post-war Mercedes sports cars are in a way even more remarkable than those of Porsche. The firm was particularly badly hit by the war and it was several years before anything but a nominal production of cars could be undertaken. In 1951 appeared the "300", a luxurious and fast touring car with a single-camshaft six-cylinder engine of 2996 c.c. and chassis derived from the pre-war cars with swing-axle rear suspension. The "300S" was a three-carburetor edition, but in 1952 great interest was aroused by the almost invincible performance in sports-car racing of a team of prototype cars of extremely advanced and interesting design. By 1954 these had undergone sufficient development to be placed on the market as the "300SL", one of the costliest and most desirable cars of our time. The conventional chassis has been abandoned in favor of a complex structure of welded tubes, although the coil spring suspension is retained, and exceptionally large brakes are fitted, inboard at the rear. The engine is sharply inclined to the near-side in the interests of a low bonnet-line, and with Bosch fuel injection produces 240 b.h.p. at 6,000 r.p.m. Claimed maximum speed is in excess of 160 m.p.h. and although the car is by no means small, dry weight has been kept to 23 cwt. The depth of the multi-tubular frame prevents the use of conventional side-hinged doors and these cars are fitted with the roof-hinged "gull-wing" doors which characterize an exceedingly handsome and practical car. An open touring version is available. In competition the "300SL" has become a powerful contender, and abetted by the success of the Grand Prix cars [and "300 SLR"] has captured a substantial portion of the export market."
1956 Ferrari 250 GT
1953 saw the first serious attempt to series-produce the Ferrari motor car, with two models of the Type 250 Europa being produced. The cars were an evolution of the previous models, available with either the Colombo or Lampredi version of the 250 V12 engine, coil-spring front suspension, an improved sports gearbox (four speeds) with Porsche synchromesh, large drum brakes and luxurious outfitting. A few appeared in motorsports but did not initially threaten the international Mercedes-Benz 300 SL and Porsche 356 competition.
After its 1956 debut, the 250 GT "went from strength to strength". Powered by the Colombo 250 engine, it produced up to 240 b.h.p. at 7,000 r.p.m. A short-wheelbase (SWB) version of the 250 chassis was employed for improved handling and road-holding in corners, and top speed was up to 157 m.p.h. In 1957 Gendebien finished third overall in the Mille Miglia and won the "index of performance". Alfonso de Portago won the Tour de France and GT races at Montlhéry and Castelfusano in a lightweight Carrozzeria Scaglietti 250 GT. Gendebien became a gran turismo specialist in 250 GTs when he wasn't driving sports racing Ferrari Testa Rossas ("Red Heads", for their red engine covers), achieving success in both the Giro di Sicilia and the Tour de France.
In 1958, sports racing Testa Rossas swept the Manufacturer's Championship, and in 1959 the T.R. engine was adapted to the 250 GT. The spark plugs were relocated and each cylinder now had a separate intake port. Larger Weber twin-choke carburetors were employed in a triple configuration (sports racing T.R.s employed six) and some special customer cars had three four-choke Webers (one choke per cylinder). Dry-sump lubrication was employed, and the camshaft valve timing was only slightly less than the full-race Testa Rossas. G.T. power was up to 267 b.h.p. at 7,000 r.p.m. (240 b.h.p at 6,800 rpm for road versions). Experiments were conducted with Dunlop disc brakes, which were adopted in 1960, along with an even shorter wheelbase for competizione versions.
In 1962, the definitive competition gran turismo was unveiled: the 250 GTO. A full Testa Rossa engine was employed (albeit with black crinkle-finish engine covers) with six twin-choke Webers. Power was up to 300 b.h.p. at 7,400 r.p.m., and with a lightweight 2000 lb body and chassis the car was an immediate winner.
In November 2016, it was reported that a 1962 Ferrari 250 GTO was being offered for public sale; normally, brokers negotiate deals between extremely wealthy collectors "behind closed doors". GTOs had previously been auctioned in 1990 and 2014. The 2017 sale was expected to reach US$56 million, with the particular GTO concerned (the second of just thirty-six ever made) thus set to become the world's most expensive car.
Impact of racing
The Italian Mille Miglia thousand-mile race, held from 1927 to 1957, was central to the evolution of the gran turismo concept. The event was one of the most important on the Italian motor-sport calendar and could attract up to five million spectators. Winning drivers such as Tazio Nuvolari, Rudolf Caracciola, and Stirling Moss; and manufacturers such as Alfa Romeo, BMW, Ferrari and Porsche would become household names.
According to Enzo Ferrari:
In my opinion, the Mille Miglia was an epoch-making event, which told a wonderful story. The Mille Miglia created our cars and the Italian car industry. The Mille Miglia permitted the birth of GT, or grand touring cars, which are now sold all over the world. The Mille Miglia proved that by racing over open roads for 1,000 miles, there were great technical lessons to be learned by the petrol and oil companies and by brake, clutch, transmission, electrical and lighting component manufacturers, fully justifying the old adage that motor racing improves the breed.
The Mille Miglia is still celebrated today as one of the world's premier historic racing events.
A closed sports coupé almost prevailed at Le Mans in 1938, when a Carrozzeria Touring-bodied Alfa Romeo 8C 2900B, driven by Raymond Sommer and Clemente Biondetti, led the famous 24-hour race from the third lap until early Sunday afternoon, retiring only due to engine problems.
Johnny Lurani was impressed by the dominant performance at the 1940 Mille Miglia of a Carrozzeria Touring-bodied BMW 328 coupé, which won the event at over 100 mph average speed, driven by Fritz Huschke von Hanstein and Walter Bäumer:
The BMW team included a splendid aerodynamic Berlinetta, wind tunnel designed by German specialists, that was extremely fast at 135 mph... I couldn't believe the speeds these BMWs were capable of.
1937–1948 CSAI
Italy's national governing body of motorsport was the Commissione Sportiva Automobilistica Italiana (CSAI). Count Giovanni Lurani Cernuschi (popularly known as Johnny Lurani) was a key commissioner. He was also a senior member of the world governing body, the Fédération Internationale de l'Automobile (FIA).
Lurani was instrumental in designing the regulations for the Italian 1937 Turismo Nazionale championship, whereby production vehicles approved by the CSAI were raced with the original chassis and engine layout as specified in the factory catalog and available for customers to buy; engines could be tuned and bored out, but the bodywork had to conform to regulations. The CSAI was concerned that FIA (known as the AIACR at the time) 'Annexe C' Sports cars were becoming little more than thinly disguised two-seat Grand Prix racers, far removed from the cars ordinary motorists could purchase from the manufacturers' catalogs.
The CSAI was shut down by the Italian Fascist government under Mussolini at the end of 1937, and replaced with a new organization called FASI. The Italian Fascists, as in Nazi Germany, sought control of motor racing as an important vehicle for national prestige and propaganda. FASI replaced Turismo Nazionale with the less strictly regulated Sports Nazionale championship, which ran in 1938 and 1939.
Postwar, the CSAI was re-established and in 1947 Italian national championships were held for both Sports Internazionale (FIA Annexe C sports cars) and Sports Nazionale. Sports Nazionale was abolished in 1948, creating the opportunity for a new category in 1949.
1949 Coppa Inter-Europa
The first race specifically for grand touring motor cars (at the time the regulations, designed by Johnny Lurani, were actually called "turismo veloce", or 'fast touring') was the 1949 Coppa Inter-Europa, held over three hours on 29 May at the 6.3-kilometer Autodromo Nazionale di Monza (Italy). It was won by a limited-production, V12-engined Ferrari 166 Inter, originally known as the "sport", with a coupé body built by Carrozzeria Touring of Milan using the Superleggera system.
After this race, the governing body CSAI officially introduced a new category, called Gran Turismo Internazionale, for 1950. The regulations were drawn up by Johnny Lurani and fellow Italian motor racing journalist and organizer Corrado Filippini, requiring for qualification a production of thirty cars per year, thereby ruling out, for the time being, Ferrari's hand-built berlinettas. Nonetheless, Ferrari 166s (including the upgraded MM, Mille Miglia, version) were produced and raced in sports car categories as both open barchettas and closed berlinettas, including winning the 1950 Mille Miglia outright.
1950 Mille Miglia
On the third weekend of April 1950, the annual Mille Miglia, one thousand miles from Brescia to Rome and back over closed public roads, included a Gran Turismo Internazionale category for the first time: twenty-four GT cars were entered, including the Alfa Romeo 6C 2500 SS Touring coupé, the Cisitalia 202B berlinetta and the Fiat 1100 S coupé. The field was rounded out by a solitary Fiat-based Siata Daina. Alfa Romeo took first place in the Gran Turismo Internazionale category (a creditable tenth overall) and also second place in the category, followed by three Cisitalias. The overall race-winning Ferrari 195 S was also a gran turismo-style coupé, but ran in the over-2,000 sports car class: in fact a special 166MM/195S Berlinetta Le Mans, chassis No. 0026MM, famously driven by Giannino Marzotto in a double-breasted suit, "a fitting advertisement for his family's textile business".
1950 Coppa Inter-Europa
The 1950 Coppa Inter-Europa at Monza was held in March. Separate races were held for sports cars, and for gran turismo cars in four classes: 750, 1100, 1500, and over 1500.
Ferrari entered, and won, the Sports car 2000 class with a Ferrari 166 MM berlinetta, while an Alfa Romeo Sperimentale (over 2000 class) won the sports car race overall.
The gran turismo race was contested by Lancia Aprilia, Cisitalia 202B, Stanguellini GT 1100, Fiat 500, Alfa Romeo 2500 and Fiat Zagato. The overall winner was WWII fighter ace Franco Bordoni's Maserati A6 1500.
1950 Targa Florio
The annual Targa Florio in Sicily was held the first weekend of April, and featured a Gran Turismo Internazionale category for the first time, in two classes: 1500 and over 1500. Contested by Lancia Aprilia, Cisitalia 202, Fiat 1100, Maserati A6, and even a solitary British Bristol 400 (based on the successful pre-war BMW 328), the Gran Turismo Internazionale category was won by Argentinian driver Adolfo Schwelm Cruz in an Alfa Romeo 6C 2500 SS.
Schwelm Cruz and Alfa Romeo repeated their successes of the 1950 Targa Florio and Mille Miglia by winning the gran turismo category at the Coppa della Toscana in June. An Alfa Romeo 6C 2500, driven by Salvatore Amendola, was also victorious in the gran turismo category of the Coppa d'Oro delle Dolomiti in July, run through the Dolomite Mountains, starting and finishing in the town of Cortina d'Ampezzo. An Alfa Romeo 6C 2500 took the gran turismo honours again at the Giro delle Calabrie in August. The Alfa Romeo 6C 2500 was based on a pre-war design and is considered by some to be the last of the classic Alfa Romeos.
1951 Campionato Gran Turismo Internazionale
For 1951, the CSAI organized an Italian national championship for the Gran Turismo Internazionale category in four classes: 750, 1,500, 2,000, and over 2,000 cc. Interest was attracted from manufacturers such as Alfa Romeo, Lancia, Maserati, Ferrari, Fiat and SIATA. The championship was held over ten events, including all the classic long-distance road races (the Giro di Sicilia, the Mille Miglia, the Coppa della Toscana, the Giro dell'Umbria, the Coppa d' Oro delle Dolomiti, the Giro delle Calabrie and the Stella Alpina) as well as three circuit races (the Coppa Inter-Europa at Monza, the Circuito di Caracalla night-race in Rome, and the 6 Ore di Pescara).
1954 FIA Appendix J
Prior to 1954, internationally agreed motor-sport regulations existed only for racing cars and sports cars (FIA Appendix C). After a testy initial period, the FIA introduced for the 1954 motor racing calendar new "Appendix J" regulations covering production touring cars, tuned special touring cars, gran turismo cars, and production sports cars. This was the first officially sanctioned international recognition of the gran turismo category.
The 1954 gran turismo regulations stipulated cars for personal transport with closed bodywork built by the manufacturer of the chassis, although open bodies and special coachwork were admissible if listed in the official catalog of the chassis manufacturer and if the weight of the car was at least the same as that of the closed standard model. Minimum production was 100 cars during 12 months, and cars needed to have only two seats.
Gran turismo categories (under 1500 and over 1500) were first included in the World Sportscar Championship in round three of the 1954 season, at the 1954 Mille Miglia (the first-placed GT car being the Lancia Aurelia B20 GT of Serafini and Mancini). GT entries would become a regular feature alongside their sports car brethren at international races from this time forward: GT cars raced in world championship rounds at the Targa Florio from 1955, the Nürburgring from 1956, Sebring from 1957, the 24 Hours of Le Mans from 1959, and Buenos Aires from 1960 (from which year every round of the world championship included GT cars). In 1960 and 1961 an FIA Coupé de Grand Tourisme (Grand Touring Cup) was awarded.
From the 1962 to 1984 seasons (except the 1982 season), World Sportscar Championship titles were awarded to manufacturers of GT cars alongside those awarded to manufacturers of sports prototypes.
The FIA grand touring category came to be known as "Group 3", and is defined in the 1961 Appendix J (English) regulation as: "Vehicles built in small series for customers who are looking for better performance and/or maximum comfort and are not particularly concerned about economy. Such cars shall conform to a model defined in a catalog and be offered to the customers by the regular Sales Department of the manufacturer."
1962–1965 International Championship for GT Manufacturers
In 1962 the FIA, seeking to reduce the speeds attained in sports car racing following the disastrous accident at Le Mans in 1955, shifted its focus from the Appendix C sports cars to the production-based GT cars of Appendix J. The previous World Sportscar Championship title was discontinued, being replaced by the International Championship for GT Manufacturers, won by the Ferrari 250 GTO in 1962, 1963 and 1964.
Cobra Ferrari wars
The period 1963–1965 is famous for the "Cobra Ferrari wars", a rivalry between the American former racing driver and Le Mans winner Carroll Shelby (Le Mans 1959, Aston Martin DBR1/300) and Enzo Ferrari, whose 250 GTs were the dominant grand touring cars of the time. Shelby retired from driving due to a heart condition and returned to California from Europe in 1959 with the idea of marrying the AC Ace sports car chassis with Ford's small-block V-8 engine: the resulting Shelby AC Cobra was a sales success.
Shelby, like Enzo Ferrari, sold road cars to support his racing team, and like Ferrari's cars the Cobra was a success on the track, at least on the short circuits common in the United States. On the longer tracks prevalent in Europe, however, the Cobra's crude aerodynamics could not compete with the sleek 180 mph Ferrari 250 GTOs: even fitted with a removable roof, the Cobra's top speed was 150 mph. At the 1963 24 Hours of Le Mans, a Cobra placed seventh; Ferraris placed first to sixth. Shelby team engineer Pete Brock hand-designed a Kamm-tailed aerodynamic body for the Cobra, creating the Shelby Daytona coupe, and a showdown with Ferrari was set.
In testing, the Shelby Daytona coupe attained a top speed of 196 mph, and went on to win the GT class at the 1964 24 Hours of Le Mans. Shelby had beaten Ferrari on the biggest stage; however, the fast and reliable Ferrari 250 GTOs were again victorious in the 1964 International Championship for GT Manufacturers. The championship was controversial: Enzo Ferrari, with only a narrow points lead over Shelby, attempted to have the radical new mid-engined Ferrari 250 LM homologated for the final championship round at Monza in Italy. When the FIA turned Ferrari down, Ferrari withdrew. The race organizer, the Automobile Club d'Italia, fearing a financial disaster from the withdrawal of the famous Italian team, canceled the event, and Ferrari was crowned world champion. In the aftermath, Ferrari declared he would never race GTs again, and for 1965 the rivalry with Ferrari was taken up by Ford Motor Company and the Ford GT40, also mid-engined, in the sports car divisions.
In 1965, with Shelby's race team now dedicated to the GT40, the Daytona coupes were entrusted to Alan Mann Racing in the United Kingdom, and easily won the GT world championship. From 1966, the FIA returned its world championship focus to the sports car division, but GT entries remained an important feature of international sports car racing.
British grand tourers 1946–1963
While Italy was the home of the gran turismo, of all the other European nations that took the concept up, it was Britain that was most enthusiastic.
1946 Healey Elliott
Before Donald Healey turned to production of the small, light and inexpensive Austin-Healey 100 sports car in 1952, he had brought to market a fast and aerodynamic 2-liter Riley-powered Healey Elliott closed saloon (named for the coach-builder). Claimed to be the fastest closed car of its day, only 101 were made before production was given over to the successful new sports car.
1947 Bristol 400–406
Immediately following the Second World War, H. J. Aldington, pre-war Frazer Nash manufacturer and BMW importer, sought out BMW's badly bombed Munich factory and there discovered the special-bodied open BMW 328, duly returning with it to Britain with a view to building Frazer Nash-BMWs with the aid of key former BMW personnel. The Bristol Aeroplane Company, looking to enter the car sector, acquired a majority shareholding. There were government concerns about using German engineers, and in the end, only Fritz Fiedler was involved as consultant to Bristol's own engineers. By the time the new car debuted at the 1947 Geneva Motor Show, it was known simply as the Bristol 400.
The Bristol 400 was essentially a BMW 327 two-door coupe, hand-built to aircraft industry standards, mounted on a BMW 326 chassis and powered by the legendary 2-liter BMW 328 engine. It was fast, capable of 90 mph, but expensive. The 1948 401 featured an improved aerodynamic body in the lightweight Touring Superleggera fashion; and the 1953 403 boasted improved suspension, brakes, and gearbox, while power was boosted from 85 to 100 bhp. The 1954 short-chassis 404 had a completely new body, and top speed was up to 110 mph. The 1958 406 was the last of the BMW-powered versions and was produced until 1961, after which it was superseded by a range of automatic-transmission, Chrysler V8-powered Bristols, with the engines rebuilt by Bristol engineers and fitted with high-lift camshafts and mechanical lifters.
1953 Aston Martin DB2
David Brown purchased the Aston Martin concern in 1947, and the company was effectively reborn for the post-war era. Unlike the Bristol, the Aston Martin DB2, which debuted at the 1949 Motor Show (as a prototype Le Mans racer), was an all-British affair. The 2.6-liter twin overhead camshaft Lagonda engine was designed by W. O. Bentley (Brown having also purchased the Lagonda company). Brown decided on a closed coupé body in the latest Italian tradition, rather than the traditional Aston Martin open two-seater sports car. The 1950 production DB2 was a styling triumph for designer Frank Feeley, and Brown later recalled that many believed the car had been styled in Italy. The 105 bhp DB2 was a genuine 110 mph grand tourer; in 1951 came the more powerful optional 125 bhp "Vantage" version. In its original form, the DB2 was a two-seater; the 1953 DB2/4 added 2+2 seating and a hatchback, and gained a 3-liter engine in 1954. A Mark II version with Tickford coachwork appeared in 1955 (Brown had purchased this company too). The Mark III version, produced from 1957 to 1959, developed 162 bhp and was available with 180 and 195 bhp high-output engine options.
| Technology | Motorized road transport | null |
677191 | https://en.wikipedia.org/wiki/Differential%20%28mathematics%29 | Differential (mathematics) | In mathematics, differential refers to several related notions derived from the early days of calculus, put on a rigorous footing, such as infinitesimal differences and the derivatives of functions.
The term is used in various branches of mathematics such as calculus, differential geometry, algebraic geometry and algebraic topology.
Introduction
The term differential is used nonrigorously in calculus to refer to an infinitesimal ("infinitely small") change in some varying quantity. For example, if x is a variable, then a change in the value of x is often denoted Δx (pronounced delta x). The differential dx represents an infinitely small change in the variable x. The idea of an infinitely small or infinitely slow change is, intuitively, extremely useful, and there are a number of ways to make the notion mathematically precise.
Using calculus, it is possible to relate the infinitely small changes of various variables to each other mathematically using derivatives. If y is a function of x, then the differential dy of y is related to dx by the formula

dy = (dy/dx) dx,

where dy/dx denotes not 'dy divided by dx' as one would intuitively read, but 'the derivative of y with respect to x'. This formula summarizes the idea that the derivative of y with respect to x is the limit of the ratio of differences Δy/Δx as Δx approaches zero.
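As a quick numerical illustration (a sketch of my own, not part of the article), the following Python snippet compares the differential dy = f′(x) dx with the actual change Δy = f(x + dx) − f(x) for f(x) = x². The gap between the two shrinks like dx², which is the sense in which dy is the linear part of the change.

```python
# Compare the differential dy = f'(x) dx with the true change delta_y
# for f(x) = x**2 at x = 3, as dx shrinks.

def f(x):
    return x ** 2

def f_prime(x):
    return 2 * x

x = 3.0
for dx in (0.1, 0.01, 0.001):
    dy = f_prime(x) * dx                  # linear (differential) estimate
    delta_y = f(x + dx) - f(x)            # actual finite change
    print(dx, dy, delta_y, delta_y - dy)  # error shrinks like dx**2
```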
Basic notions
In calculus, the differential represents a change in the linearization of a function.
The total differential is its generalization for functions of multiple variables.
In traditional approaches to calculus, differentials (e.g. dx, dy, dt, etc.) are interpreted as infinitesimals. There are several methods of defining infinitesimals rigorously, but it is sufficient to say that an infinitesimal number is smaller in absolute value than any positive real number, just as an infinitely large number is larger than any real number.
The differential is another name for the Jacobian matrix of partial derivatives of a function from Rn to Rm (especially when this matrix is viewed as a linear map).
More generally, the differential or pushforward refers to the derivative of a map between smooth manifolds and the pushforward operations it defines. The differential is also used to define the dual concept of pullback.
Stochastic calculus provides a notion of stochastic differential and an associated calculus for stochastic processes.
The integrator in a Stieltjes integral is represented as the differential of a function. Formally, the differential appearing under the integral behaves exactly as a differential: thus, the integration by substitution and integration by parts formulae for Stieltjes integral correspond, respectively, to the chain rule and product rule for the differential.
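To make the Jacobian-as-linear-map viewpoint above concrete, here is a small self-contained Python sketch (the example map and helper names are my own): it estimates the Jacobian of a map from R2 to R2 by central differences and checks that, applied to a small displacement, the resulting linear map approximates the actual change in the map's value.

```python
# The differential of F(x, y) = (x*y, x + y) at a point p, realized as the
# 2x2 Jacobian matrix and applied to a small displacement v.

def F(x, y):
    return (x * y, x + y)

def jacobian(p, h=1e-6):
    """Estimate the 2x2 Jacobian of F at p with central differences."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        a, b = list(p), list(p)
        a[j] += h
        b[j] -= h
        Fa, Fb = F(*a), F(*b)
        for i in range(2):
            J[i][j] = (Fa[i] - Fb[i]) / (2 * h)
    return J

p, v = (2.0, 3.0), (0.01, -0.02)
J = jacobian(p)
linear = [J[0][0] * v[0] + J[0][1] * v[1],
          J[1][0] * v[0] + J[1][1] * v[1]]      # dF_p applied to v
actual = [F(p[0] + v[0], p[1] + v[1])[i] - F(*p)[i] for i in range(2)]
print(linear, actual)  # the two agree to first order in v
```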
History and usage
Infinitesimal quantities played a significant role in the development of calculus. Archimedes used them, even though he did not believe that arguments involving infinitesimals were rigorous. Isaac Newton referred to them as fluxions. However, it was Gottfried Leibniz who coined the term differentials for infinitesimal quantities and introduced the notation for them which is still used today.
In Leibniz's notation, if x is a variable quantity, then dx denotes an infinitesimal change in the variable x. Thus, if y is a function of x, then the derivative of y with respect to x is often denoted dy/dx, which would otherwise be denoted (in the notation of Newton or Lagrange) ẏ or y′. The use of differentials in this form attracted much criticism, for instance in the famous pamphlet The Analyst by Bishop Berkeley. Nevertheless, the notation has remained popular because it suggests strongly the idea that the derivative of y at x is its instantaneous rate of change (the slope of the graph's tangent line), which may be obtained by taking the limit of the ratio Δy/Δx as Δx becomes arbitrarily small. Differentials are also compatible with dimensional analysis, where a differential such as dx has the same dimensions as the variable x.
Calculus evolved into a distinct branch of mathematics during the 17th century CE, although there were antecedents going back to antiquity. The presentations of, e.g., Newton and Leibniz were marked by non-rigorous definitions of terms like differential, fluent and "infinitely small". While many of the arguments in Bishop Berkeley's 1734 The Analyst are theological in nature, modern mathematicians acknowledge the validity of his argument against "the Ghosts of departed Quantities"; however, the modern approaches do not have the same technical issues. Despite the lack of rigor, immense progress was made in the 17th and 18th centuries. In the 19th century, Cauchy and others gradually developed the epsilon-delta approach to continuity, limits and derivatives, giving a solid conceptual foundation for calculus.
In the 20th century, several new concepts in, e.g., multivariable calculus and differential geometry seemed to encapsulate the intent of the old terms, especially differential; both differential and infinitesimal are now used with new, more rigorous meanings.
Differentials are also used in the notation for integrals because an integral can be regarded as an infinite sum of infinitesimal quantities: the area under a graph is obtained by subdividing the graph into infinitely thin strips and summing their areas. In an expression such as

∫ f(x) dx,

the integral sign (which is a modified long s) denotes the infinite sum, f(x) denotes the "height" of a thin strip, and the differential dx denotes its infinitely thin width.
Approaches
There are several approaches for making the notion of differentials mathematically precise.
Differentials as linear maps. This approach underlies the definition of the derivative and the exterior derivative in differential geometry.
Differentials as nilpotent elements of commutative rings. This approach is popular in algebraic geometry.
Differentials in smooth models of set theory. This approach is known as synthetic differential geometry or smooth infinitesimal analysis and is closely related to the algebraic geometric approach, except that ideas from topos theory are used to hide the mechanisms by which nilpotent infinitesimals are introduced.
Differentials as infinitesimals in hyperreal number systems, which are extensions of the real numbers that contain invertible infinitesimals and infinitely large numbers. This is the approach of nonstandard analysis pioneered by Abraham Robinson.
These approaches are very different from each other, but they have in common the idea of being quantitative, i.e., saying not just that a differential is infinitely small, but how small it is.
Differentials as linear maps
There is a simple way to make precise sense of differentials, first used on the real line, by regarding them as linear maps. It can be used on R, on Rn, on a Hilbert space, on a Banach space, or more generally, on a topological vector space. The case of the real line is the easiest to explain. This type of differential is also known as a covariant vector or cotangent vector, depending on context.
Differentials as linear maps on R
Suppose f(x) is a real-valued function on R. We can reinterpret the variable x in f(x) as being a function rather than a number, namely the identity map on the real line, which takes a real number p to itself: x(p) = p. Then f(x) is the composite of f with x, whose value at p is f(p). The differential df (which of course depends on f) is then a function whose value at p (usually denoted df_p) is not a number, but a linear map from R to R. Since a linear map from R to R is given by a 1×1 matrix, it is essentially the same thing as a number, but the change in the point of view allows us to think of df_p as an infinitesimal and compare it with the standard infinitesimal dx_p, which is again just the identity map from R to R (a 1×1 matrix with entry 1). The identity map has the property that if ε is very small, then dx_p(ε) is very small, which enables us to regard it as infinitesimal. The differential df_p has the same property, because it is just a multiple of dx_p, and this multiple is the derivative f′(p) by definition. We therefore obtain that df_p = f′(p) dx_p, and hence df = f′ dx. Thus we recover the idea that f′ is the ratio of the differentials df and dx.
This would just be a trick were it not for the fact that:
it captures the idea of the derivative of f at p as the best linear approximation to f at p;
it has many generalizations.
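To make this construction concrete, here is a minimal Python sketch (my own illustration; the helper names are hypothetical) that represents dx_p and df_p literally as linear maps from R to R, with df_p equal to f′(p) times dx_p.

```python
# dx_p is the identity map on R; df_p is the multiple f'(p) * dx_p.

def make_dx(p):
    return lambda h: h                     # the "standard infinitesimal"

def make_df(f_prime, p):
    dx_p = make_dx(p)
    return lambda h: f_prime(p) * dx_p(h)  # df_p = f'(p) dx_p

f_prime = lambda x: 2 * x                  # derivative of f(x) = x**2
p = 3.0
df_p = make_df(f_prime, p)
print(df_p(0.001))                         # linear estimate of f(3.001) - f(3)
print((p + 0.001) ** 2 - p ** 2)           # actual change, for comparison
```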
Differentials as linear maps on Rn
If f is a function from Rn to R, then we say that f is differentiable at p if there is a linear map df_p from Rn to R such that for any ε > 0, there is a neighbourhood N of p such that for x in N,

|f(x) − f(p) − df_p(x − p)| < ε |x − p|.
We can now use the same trick as in the one-dimensional case and think of the expression f(x1, x2, ..., xn) as the composite of f with the standard coordinates x1, x2, ..., xn on Rn (so that xj(p) is the j-th component of p in Rn). Then the differentials (dx1)_p, (dx2)_p, ..., (dxn)_p at a point p form a basis for the vector space of linear maps from Rn to R, and therefore, if f is differentiable at p, we can write df_p as a linear combination of these basis elements:

df_p = Σj Djf(p) (dxj)_p.

The coefficients Djf(p) are (by definition) the partial derivatives of f at p with respect to x1, x2, ..., xn. Hence, if f is differentiable on all of Rn, we can write, more concisely:

df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 + ... + (∂f/∂xn) dxn.

In the one-dimensional case this becomes df = (df/dx) dx, as before.
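A short numerical sketch of the formula above (again my own illustration, with hypothetical helper names): the partial derivatives of a sample function of two variables are estimated by central differences and combined into the total differential df = (∂f/∂x) dx + (∂f/∂y) dy.

```python
# Total differential of f(x, y) = x**2 * y at p, compared with the
# actual change f(p + (dx, dy)) - f(p).

def f(x, y):
    return x ** 2 * y

def partial(func, i, point, h=1e-6):
    """Central-difference estimate of the i-th partial derivative at point."""
    a, b = list(point), list(point)
    a[i] += h
    b[i] -= h
    return (func(*a) - func(*b)) / (2 * h)

p = (2.0, 3.0)
dx, dy = 0.01, -0.02
df = partial(f, 0, p) * dx + partial(f, 1, p) * dy  # total differential
actual = f(p[0] + dx, p[1] + dy) - f(*p)
print(df, actual)  # 0.04 vs roughly 0.0395: close for small (dx, dy)
```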
This idea generalizes straightforwardly to functions from Rn to Rm. Furthermore, it has the decisive advantage over other definitions of the derivative that it is invariant under changes of coordinates. This means that the same idea can be used to define the differential of smooth maps between smooth manifolds.
Aside: Note that the existence of all the partial derivatives of f at x is a necessary condition for the existence of a differential at x. However, it is not a sufficient condition. For counterexamples, see Gateaux derivative.
Differentials as linear maps on a vector space
The same procedure works on a vector space with enough additional structure to reasonably talk about continuity. The most concrete case is a Hilbert space, also known as a complete inner product space, where the inner product and its associated norm define a suitable concept of distance. The same procedure works for a Banach space, also known as a complete normed vector space. However, for a more general topological vector space, some of the details are more abstract because there is no concept of distance.
For the important case of finite dimension, any inner product space is a Hilbert space, any normed vector space is a Banach space, and any topological vector space is complete. As a result, one can define a coordinate system from an arbitrary basis and use the same technique as for Rn.
Differentials as germs of functions
This approach works on any differentiable manifold. If
U and V are open sets containing p
f : U → R is continuous
g : V → R is continuous
then f is equivalent to g at p, denoted f ~ g, if and only if
there is an open set W ⊆ U ∩ V containing p such that f(x) = g(x) for every x in W.
The germ of f at p, denoted [f]_p, is the set of all real continuous functions equivalent to f at p; if f is smooth at p then [f]_p is a smooth germ.
If
U, V, and W are open sets containing p
f : U → R, g : V → R, and h : W → R are smooth functions
r is a real number
then
r·[f]_p = [r·f]_p, [f]_p + [g]_p = [f + g]_p, and [f]_p · [g]_p = [f·g]_p.
This shows that the germs at p form an algebra.
Define Ip to be the set of all smooth germs vanishing at p and Ip² to be the product of ideals Ip·Ip. Then a differential at p (a cotangent vector at p) is an element of Ip/Ip². The differential of a smooth function f at p, denoted df_p, is the equivalence class [f − f(p)]_p in Ip/Ip².
A similar approach is to define differential equivalence of first order in terms of derivatives in an arbitrary coordinate patch.
Then the differential of f at p is the set of all functions differentially equivalent to f − f(p) at p.
Algebraic geometry
In algebraic geometry, differentials and other infinitesimal notions are handled in a very explicit way by accepting that the coordinate ring or structure sheaf of a space may contain nilpotent elements. The simplest example is the ring of dual numbers R[ε], where ε² = 0.
This can be motivated by the algebro-geometric point of view on the derivative of a function f from R to R at a point p. For this, note first that f − f(p) belongs to the ideal Ip of functions on R which vanish at p. If the derivative of f vanishes at p, then f − f(p) belongs to the square Ip² of this ideal. Hence the derivative of f at p may be captured by the equivalence class [f − f(p)] in the quotient space Ip/Ip², and the 1-jet of f (which encodes its value and its first derivative) is the equivalence class of f in the space of all functions modulo Ip². Algebraic geometers regard this equivalence class as the restriction of f to a thickened version of the point p whose coordinate ring is not R (which is the quotient space of functions on R modulo Ip) but R[ε], which is the quotient space of functions on R modulo Ip². Such a thickened point is a simple example of a scheme.
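The dual numbers are easy to make tangible in code. The following Python sketch (my own illustration, not part of the article) implements just enough of R[ε] with ε² = 0 to show that evaluating a polynomial at p + ε yields f(p) plus f′(p)·ε; reading off the ε-coefficient is forward-mode automatic differentiation in miniature.

```python
# Dual numbers a + b*eps with eps**2 = 0; the eps-coefficient of f(p + eps)
# is the derivative f'(p).

class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b            # represents a + b*eps

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._coerce(other)
        return Dual(self.a + o.a, self.b + o.b)

    __radd__ = __add__

    def __mul__(self, other):
        o = self._coerce(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + x                 # f'(x) = 6x + 1

result = f(Dual(2.0, 1.0))               # evaluate at 2 + eps
print(result.a, result.b)                # 14.0 and 13.0: f(2) and f'(2)
```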
Algebraic geometry notions
Differentials are also important in algebraic geometry, and there are several important notions.
Abelian differentials usually mean differential one-forms on an algebraic curve or Riemann surface.
Quadratic differentials (which behave like "squares" of abelian differentials) are also important in the theory of Riemann surfaces.
Kähler differentials provide a general notion of differential in algebraic geometry.
Synthetic differential geometry
Another approach to infinitesimals is the method of synthetic differential geometry or smooth infinitesimal analysis. This is closely related to the algebraic-geometric approach, except that the infinitesimals are more implicit and intuitive. The main idea of this approach is to replace the category of sets with another category of smoothly varying sets which is a topos. In this category, one can define the real numbers, smooth functions, and so on, but the real numbers automatically contain nilpotent infinitesimals, so these do not need to be introduced by hand as in the algebraic geometric approach. However, the logic in this new category is not identical to the familiar logic of the category of sets: in particular, the law of the excluded middle does not hold. This means that set-theoretic mathematical arguments only extend to smooth infinitesimal analysis if they are constructive (e.g., do not use proof by contradiction). Constructivists regard this disadvantage as a positive thing, since it forces one to find constructive arguments wherever they are available.
Nonstandard analysis
The final approach to infinitesimals again involves extending the real numbers, but in a less drastic way. In the nonstandard analysis approach there are no nilpotent infinitesimals, only invertible ones, which may be viewed as the reciprocals of infinitely large numbers. Such extensions of the real numbers may be constructed explicitly using equivalence classes of sequences of real numbers, so that, for example, the sequence (1, 1/2, 1/3, ..., 1/n, ...) represents an infinitesimal. The first-order logic of this new set of hyperreal numbers is the same as the logic for the usual real numbers, but the completeness axiom (which involves second-order logic) does not hold. Nevertheless, this suffices to develop an elementary and quite intuitive approach to calculus using infinitesimals, see transfer principle.
Differential geometry
The notion of a differential motivates several concepts in differential geometry (and differential topology).
The differential (Pushforward) of a map between manifolds.
Differential forms provide a framework which accommodates multiplication and differentiation of differentials.
The exterior derivative is a notion of differentiation of differential forms which generalizes the differential of a function (which is a differential 1-form).
Pullback is, in particular, a geometric name for the chain rule for composing a map between manifolds with a differential form on the target manifold.
Covariant derivatives or differentials provide a general notion for differentiating of vector fields and tensor fields on a manifold, or, more generally, sections of a vector bundle: see Connection (vector bundle). This ultimately leads to the general concept of a connection.
Other meanings
The term differential has also been adopted in homological algebra and algebraic topology, because of the role the exterior derivative plays in de Rham cohomology: in a cochain complex the maps (or coboundary operators) di are often called differentials. Dually, the boundary operators in a chain complex are sometimes called codifferentials.
The properties of the differential also motivate the algebraic notions of a derivation and a differential algebra.
| Mathematics | Basics_2 | null |
12354211 | https://en.wikipedia.org/wiki/Corn%20harvester | Corn harvester | A corn harvester is a machine used on farms to harvest corn. It cuts the stalks about one foot above the ground; the ears are stripped from the stalks as they pass through the header, which ejects the stripped stalks onto the ground. The ears then move through the header to the intake conveyor belt, and from there go up the conveying system through a fan system that separates any remaining stalk material from the ears. The stalk material blows out the fan duct into the field while the ears drop onto another conveyor belt. The ears ride the belt and drop into a large moving bucket.
This method is done with both fresh corn and seed corn.
The first mechanical corn harvester was developed in 1930 by the Gleaner Harvester Combine Corporation of Independence, Missouri. The unit was pulled by a tractor and mounted on its left side.
| Technology | Farm and garden machinery | null |
2380869 | https://en.wikipedia.org/wiki/Ferrite%20%28magnet%29 | Ferrite (magnet) | A ferrite is one of a family of iron oxide-containing magnetic ceramic materials. They are ferrimagnetic, meaning they are attracted by magnetic fields and can be magnetized to become permanent magnets. Unlike many ferromagnetic materials, most ferrites are not electrically conductive, making them useful in applications like magnetic cores for transformers to suppress eddy currents.
Ferrites can be divided into two groups based on their magnetic coercivity, their resistance to being demagnetized:
"Hard" ferrites have high coercivity, so are difficult to demagnetize. They are used to make permanent magnets for applications such as refrigerator magnets, loudspeakers, and small electric motors.
"Soft" ferrites have low coercivity, so they easily change their magnetization and act as conductors of magnetic fields. They are used in the electronics industry to make efficient magnetic cores called ferrite cores for high-frequency inductors, transformers and antennas, and in various microwave components.
Ferrite compounds are extremely low cost, being made mostly of iron oxide, and have excellent corrosion resistance. Yogoro Kato and Takeshi Takei of the Tokyo Institute of Technology synthesized the first ferrite compounds in 1930.
Composition, structure, and properties
Ferrites are usually ferrimagnetic ceramic compounds derived from iron oxides, with either a cubic or a hexagonal crystal structure. Like most other ceramics, ferrites are hard, brittle, and poor conductors of electricity.
They are typically composed of α-iron(III) oxide (e.g. hematite) combined with one or more additional metallic oxides, usually with an approximately stoichiometric formula of MO·Fe2O3. One pattern has M = Fe(II), as in the common mineral magnetite, Fe(II)Fe(III)2O4. Above 585 °C, Fe(II)Fe(III)2O4 transforms into the non-magnetic gamma phase; it is commonly seen as the black iron oxide coating the surface of cast-iron cookware. The other pattern is M·Fe(III)2O3, where M is another metallic element. Common, naturally occurring ferrites (typically members of the spinel group) include those with nickel (NiFe2O4), which occurs as the mineral trevorite; magnesium-containing magnesioferrite (MgFe2O4); cobalt (cobalt ferrite); and manganese (MnFe2O4), which occurs naturally as the mineral jacobsite. Less often, bismuth, strontium, zinc (as found in franklinite), aluminium, yttrium, or barium ferrites are used. In addition, more complex synthetic alloys are often used for specific applications.
Many ferrites adopt the spinel chemical structure with the formula AB2O4, where A and B represent various metal cations, one of which is usually iron (Fe). Spinel ferrites usually adopt a crystal motif consisting of cubic close-packed (fcc) oxides (O) with A cations occupying one eighth of the tetrahedral holes and B cations occupying half of the octahedral holes. An exception exists for γ-Fe2O3, which has a spinel crystalline form and is widely used as a magnetic recording substrate.
However, the structure is often not the ordinary spinel structure but rather the inverse spinel structure: one eighth of the tetrahedral holes are occupied by B cations, one fourth of the octahedral sites are occupied by A cations, and the other one fourth by B cations. It is also possible to have mixed-structure spinel ferrites, described by a degree of inversion giving the fraction of tetrahedral sites occupied by B cations.
The magnetic material known as "ZnFe" has the formula ZnFe2O4, with Fe3+ occupying the octahedral sites and Zn2+ occupying the tetrahedral sites; it is an example of a normal-structure spinel ferrite.
Some ferrites adopt a hexagonal crystal structure, like the barium and strontium ferrites BaFe12O19 and SrFe12O19.
In terms of their magnetic properties, the different ferrites are often classified as "soft", "semi-hard" or "hard", which refers to their low or high magnetic coercivity, as follows.
Soft ferrites
Ferrites that are used in transformer or electromagnetic cores contain nickel, zinc, and/or manganese compounds. Soft ferrites are not suitable to make permanent magnets. They have high magnetic permeability so they conduct magnetic fields and are attracted to magnets, but when the external magnetic field is removed, the remanent magnetization does not tend to persist. This is due to their low coercivity. The low coercivity also means the material's magnetization can easily reverse direction without dissipating much energy (hysteresis losses), while the material's high resistivity prevents eddy currents in the core, another source of energy loss. Because of their comparatively low core losses at high frequencies, they are extensively used in the cores of RF transformers and inductors in applications such as switched-mode power supplies and loopstick antennas used in AM radios.
The most common soft ferrites are:
Manganese-zinc ferrite "MnZn", with the formula MnaZn(1−a)Fe2O4. MnZn ferrites have higher permeability and saturation induction than NiZn.
Nickel-zinc ferrite "NiZn", with the formula NiaZn(1−a)Fe2O4. NiZn ferrites exhibit higher resistivity than MnZn, and are therefore more suitable for frequencies above 1 MHz.
For use at frequencies above 0.5 MHz but below 5 MHz, MnZn ferrites are used; above that, NiZn is the usual choice. The exception is common-mode inductors, where the threshold of choice is about 70 MHz.
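The frequency rule of thumb above can be written as a toy selector (an illustration only, not a design guide; the function name and hard-coded thresholds are my own reading of the text):

```python
# Pick a soft-ferrite family from the operating frequency, following the
# rough thresholds quoted above (5 MHz in general, ~70 MHz for
# common-mode inductors).

def suggest_soft_ferrite(frequency_hz, common_mode_inductor=False):
    threshold_hz = 70e6 if common_mode_inductor else 5e6
    return "MnZn" if frequency_hz < threshold_hz else "NiZn"

print(suggest_soft_ferrite(1e6))                              # MnZn
print(suggest_soft_ferrite(10e6))                             # NiZn
print(suggest_soft_ferrite(30e6, common_mode_inductor=True))  # MnZn
```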
Semi-hard ferrites
Cobalt ferrite (CoFe2O4) is in between soft and hard magnetic materials and is usually classified as a semi-hard material. It is mainly used for its magnetostrictive applications like sensors and actuators, thanks to its high saturation magnetostriction (~200 ppm). Cobalt ferrite also has the benefit of being rare-earth-free, which makes it a good substitute for Terfenol-D.
Moreover, cobalt ferrite's magnetostrictive properties can be tuned by inducing a magnetic uniaxial anisotropy. This can be done by magnetic annealing, magnetic-field-assisted compaction, or reaction under uniaxial pressure. This last solution has the advantage of being very fast (20 min) thanks to the use of spark plasma sintering. The induced magnetic anisotropy in cobalt ferrite is also beneficial for enhancing the magnetoelectric effect in composites.
Hard ferrites
In contrast, permanent ferrite magnets are made of hard ferrites, which have a high coercivity and high remanence after magnetization. Iron oxide and barium carbonate or strontium carbonate are used in manufacturing of hard ferrite magnets. The high coercivity means the materials are very resistant to becoming demagnetized, an essential characteristic for a permanent magnet. They also have high magnetic permeability. These so-called ceramic magnets are cheap, and are widely used in household products such as refrigerator magnets. The maximum magnetic field is about 0.35 tesla and the magnetic field strength is about 30–160 kiloampere turns per meter (400–2000 oersteds). The density of ferrite magnets is about 5 g/cm3.
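As a quick unit check of the quoted field-strength figures (my arithmetic, not from the source), the standard conversion 1 Oe = 1000/(4π) A/m ≈ 79.577 A/m maps 30–160 kA/m to roughly the quoted 400–2000 Oe:

```python
import math

A_PER_M_PER_OERSTED = 1000 / (4 * math.pi)   # about 79.577 A/m per Oe

def ka_per_m_to_oersted(h_ka_per_m):
    return h_ka_per_m * 1000 / A_PER_M_PER_OERSTED

for h in (30, 160):
    print(f"{h} kA/m is about {ka_per_m_to_oersted(h):.0f} Oe")
# prints roughly 377 Oe and 2011 Oe, consistent with the 400-2000 Oe range
```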
The most common hard ferrites are:
Strontium ferrite (SrFe12O19), used in small electric motors, microwave devices, recording media, magneto-optic media, telecommunications, and the electronics industry. Strontium hexaferrite is well known for its high coercivity due to its magnetocrystalline anisotropy. It has been widely used in industrial applications as a permanent magnet and, because it can be powdered and formed easily, it is finding applications in micro- and nano-scale systems such as biomarkers, biodiagnostics and biosensors.
Barium ferrite (BaFe12O19), a common material for permanent magnet applications. Barium ferrites are robust ceramics that are generally stable to moisture and corrosion-resistant. They are used in e.g. loudspeaker magnets and as a medium for magnetic recording, e.g. on magnetic stripe cards.
Production
Ferrites are produced by heating a mixture of the oxides of the constituent metals at high temperatures, as shown in this idealized equation:
Fe2O3 + ZnO → ZnFe2O4
In some cases, the mixture of finely-powdered precursors is pressed into a mold. For barium and strontium ferrites, these metals are typically supplied as their carbonates, BaCO3 or SrCO3. During the heating process, these carbonates undergo calcination:
MCO3 → MO + CO2
After this step, the two oxides combine to give the ferrite. The resulting mixture of oxides undergoes sintering.
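For a sense of the stoichiometry, here is a small mass-balance sketch (my own arithmetic; molar masses are standard values in g/mol) for the idealized zinc ferrite reaction above:

```python
# Precursor masses for the reaction Fe2O3 + ZnO -> ZnFe2O4 (1:1 molar).
M_FE2O3 = 159.69                  # g/mol
M_ZNO = 81.38                     # g/mol
M_ZNFE2O4 = M_FE2O3 + M_ZNO       # no mass is lost in this reaction

def precursor_masses_kg(product_kg):
    """Masses of Fe2O3 and ZnO needed to yield product_kg of ZnFe2O4."""
    moles = product_kg * 1000 / M_ZNFE2O4
    return moles * M_FE2O3 / 1000, moles * M_ZNO / 1000

fe2o3, zno = precursor_masses_kg(1.0)
print(f"1 kg of ZnFe2O4 requires {fe2o3:.3f} kg Fe2O3 and {zno:.3f} kg ZnO")
```

(For the carbonate route, the CO2 given off during calcination means the weighed-out BaCO3 or SrCO3 exceeds the oxide mass that ends up in the ferrite.)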
Processing
Having obtained the ferrite, the cooled product is milled to particles smaller than 2 μm, sufficiently small that each particle consists of a single magnetic domain. Next the powder is pressed into a shape, dried, and re-sintered. The shaping may be performed in an external magnetic field, in order to achieve a preferred orientation of the particles (anisotropy).
Small and geometrically easy shapes may be produced with dry pressing. However, in such a process small particles may agglomerate and lead to poorer magnetic properties compared to the wet pressing process. Direct calcination and sintering without re-milling is possible as well but leads to poor magnetic properties.
Electromagnets are pre-sintered as well (pre-reaction), milled and pressed. However, the sintering takes place in a specific atmosphere, for instance one with an oxygen shortage. The chemical composition and especially the structure vary strongly between the precursor and the sintered product.
To allow efficient stacking of product in the furnace during sintering and prevent parts sticking together, many manufacturers separate ware using ceramic powder separator sheets. These sheets are available in various materials such as alumina, zirconia and magnesia. They are also available in fine, medium and coarse particle sizes. By matching the material and particle size to the ware being sintered, surface damage and contamination can be reduced while maximizing furnace loading.
Uses
Ferrite cores are used in electronic inductors, transformers, and electromagnets where the high electrical resistance of the ferrite leads to very low eddy current losses.
Ferrites are also found as a lump in a computer cable, called a ferrite bead, which helps to prevent high frequency electrical noise (radio frequency interference) from exiting or entering the equipment; these types of ferrites are made with lossy materials to not just block (reflect), but also absorb and dissipate as heat, the unwanted higher-frequency energy.
Early computer memories stored data in the residual magnetic fields of hard ferrite cores, which were assembled into arrays of core memory. Ferrite powders are used in the coatings of magnetic recording tapes.
Ferrite particles are also used as a component of radar-absorbing materials or coatings used in stealth aircraft and in the absorption tiles lining the rooms used for electromagnetic compatibility measurements.
Most common audio magnets, including those used in loudspeakers and electromagnetic instrument pickups, are ferrite magnets. Except for certain "vintage" products, ferrite magnets have largely displaced the more expensive Alnico magnets in these applications. In particular, for hard hexaferrites today the most common uses are still as permanent magnets in refrigerator seal gaskets, microphones and loud speakers, small motors for cordless appliances and in automobile applications.
Ferrite magnets find applications in electric power steering systems and automotive sensors due to their cost-effectiveness and corrosion resistance. Ferrite magnets are known for their high magnetic permeability and low electrical conductivity, making them suitable for high-frequency applications. In electric power steering systems, they provide the necessary magnetic field for efficient motor operation, contributing to the system's overall performance and reliability. Automotive sensors utilize ferrite magnets for accurate detection and measurement of various parameters, such as position, speed, and fluid levels.
Because ceramic ferrite magnets produce weaker magnetic fields than superconducting magnets, they are sometimes used in low-field or open MRI systems. These magnets are favored in certain cases due to their lower cost, stable magnetic field, and ability to function without complex cooling systems.
Ferrite nanoparticles exhibit superparamagnetic properties.
History
Yogoro Kato and Takeshi Takei of the Tokyo Institute of Technology synthesized the first ferrite compounds in 1930. This led to the founding of TDK Corporation in 1935, to manufacture the material.
Barium hexaferrite (BaO•6Fe2O3) was discovered in 1950 at the Philips Natuurkundig Laboratorium (Philips Physics Laboratory). The discovery was somewhat accidental—due to a mistake by an assistant who was supposed to be preparing a sample of hexagonal lanthanum ferrite for a team investigating its use as a semiconductor material. On discovering that it was actually a magnetic material, and confirming its structure by X-ray crystallography, they passed it on to the magnetic research group. Barium hexaferrite has both high coercivity (170 kA/m) and low raw material costs. It was developed as a product by Philips Industries (Netherlands) and from 1952 was marketed under the trade name Ferroxdure. The low price and good performance led to a rapid increase in the use of permanent magnets.
In the 1960s Philips developed strontium hexaferrite (SrO•6Fe2O3), with better properties than barium hexaferrite. Barium and strontium hexaferrite dominate the market due to their low costs. Other materials have since been found with improved properties: BaO•2(FeO)•8(Fe2O3) came in 1980, and Ba2ZnFe18O23 came in 1991.
| Physical sciences | Ceramic compounds | Chemistry |
2381321 | https://en.wikipedia.org/wiki/Shot%20%28pellet%29 | Shot (pellet) | Shot is a collective term for small spheres or pellets, often made of lead. These have been projected from slings since ancient times and were the original projectiles for shotguns and are still fired primarily from shotguns and grenade launchers, while they are less commonly used in riot guns. Shot shells are also available in many handgun calibers in a configuration known as "birdshot", "rat shot", or "snake shot".
Lead shot is also used for a variety of other purposes such as filling cavities with dense material for weight and/or balance. Some versions may be plated with other metals. Lead shot was originally made by pouring molten lead through screens into water, forming what was known as "swan shot", and, later, more economically mass-produced at higher quality using a shot tower. The Bliemeister method has supplanted the shot tower method since the early 1960s.
Manufacture
Producing lead shot from a shot tower was pioneered in the late 18th century by William Watts of Bristol who adapted his house on Redcliffe Hill by adding a three-storey tower and digging a shaft under the house through the caves underneath to achieve the required drop. The process was patented in 1782. The process was later brought above ground through the building of shot towers.
Molten lead would be dropped from the top of the tower. As with most liquids, surface tension makes drops of molten lead become near-spherical as they fall. When the tower is high enough, the lead droplets will solidify during the fall and thus retain their spherical form. Water is usually placed at the bottom of the tower, cooling the lead immediately upon landing.
Roundness of manufactured shot produced from the shot tower process is graded by forcing the newly produced shot to roll accurately down inclined planes. Unround shot will naturally roll to the side, for collection. The unround shot was either re-processed in another attempt to make round shot using the shot tower again, or used for applications which did not require round shot (e.g., split shot for fishing).
The hardness of lead shot is controlled through adding variable amounts of tin, antimony and arsenic, forming alloys. This also affects its melting point. Hardness is also controlled by the rate of cooling that is used in manufacturing lead shot.
The Bliemeister method, named after inventor Louis W. Bliemeister of Los Angeles, California (patented April 11, 1961), is a process for making lead shot in small sizes from about #7 to about #9. In this process, molten lead is dripped from small orifices and falls a short distance into a hot liquid, where it is rolled along an incline and then dropped further. The temperature of the liquid controls the cooling rate of the lead, while the surface tension of the liquid and the inclined surfaces work together to bring the small droplets of lead into highly regular spherical form. The size of the lead shot that is produced is determined by the diameter of the orifice used to drip the lead, and also depends on the specific lead alloy that is used.
The roundness of the lead shot depends on the angle of the inclined surfaces as well as the temperature of the liquid coolant. Various coolants have successfully been used, ranging from diesel fuel to antifreeze and water-soluble oil. After the lead shot cools, it is washed, then dried, and small amounts of graphite are finally added to prevent clumping of the lead shot. Lead shot larger than about #5 tends to clump badly when fed through tubes, even when graphite is used, whereas lead shot smaller than about #6 tends not to clump when fed through tubes when graphite is used.
Lead shot dropped quickly into liquid cooling baths when being produced from molten lead is known as "chilled lead shot", in contrast to "soft lead shot", which is produced when molten lead is not dropped as quickly into a liquid cooling bath. Rapidly chilling lead shot during manufacture makes it harder than it would be if allowed to cool more slowly. Hence, chilled lead shot, being harder and less likely to deform during firing, is preferred by shotgunners for improving shot pattern densities at longer ranges, whereas soft lead shot, being more likely to deform during firing, is preferred for improving shot pattern densities at very close ranges, as the softer, deformed shot scatters more quickly when fired. Soft lead shot is also more readily deformed during the firing process by the effects of chokes.
The manufacture of non-lead shot differs from that of lead, with compression molding used to create some alloys.
Sizes
Shot is available in many sizes for different applications. The size of numbered shot decreases as the number increases. In hunting, some sizes are traditionally used for certain game, or certain shooting situations, although there is overlap and subjective preference. The range at which game is typically encountered and the penetration needed to assure a clean kill must both be considered. Local hunting regulations may also specify a size range for certain game. Shot loses its velocity very quickly due to its low sectional density and ballistic coefficient (see external ballistics). Generally, larger shot carries farther, and does not spread out as much as smaller shot.
Buckshot
Buckshot is shot formed to larger diameters so that it can be used against bigger game such as deer, moose, or caribou. Sizes range in ascending order from size #B (0.17 in, 4.32 mm) to Tri-Ball. It is usually referred to by size, followed by "buck"; e.g., "#000" is referred to as "triple-aught buck" in the United States or "triple-o buck" in other English-speaking countries. Buckshot is traditionally swaged (in high volume production) or cast (in small volume production). The Bliemeister method does not work for shot larger than #5 (0.12 in, 3.05 mm), and works progressively poorly for shot sizes larger than about #6.
Lead shot comparison chart
Below is a chart with diameters per pellet and weight for idealized lead spheres for U.S. Standard Designations with a comparison to English shot sizes.
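The idealized figures such a chart tabulates can be approximated directly, since pellet mass scales with the cube of diameter. The sketch below is my own; it assumes a pure-lead density of 11.34 g/cm³ and the commonly cited U.S. rule of thumb that birdshot diameter in inches is (17 − shot number)/100, both of which are assumptions rather than values taken from the chart.

```python
import math

LEAD_DENSITY_G_PER_CM3 = 11.34          # pure lead; alloys differ slightly

def us_shot_diameter_in(shot_number):
    """Rule of thumb: diameter (inches) = (17 - shot number) / 100."""
    return (17 - shot_number) / 100.0

def pellet_mass_g(diameter_in):
    radius_cm = diameter_in * 2.54 / 2
    volume_cm3 = (4 / 3) * math.pi * radius_cm ** 3
    return LEAD_DENSITY_G_PER_CM3 * volume_cm3

for n in (9, 7.5, 6, 4, 2):
    d = us_shot_diameter_in(n)
    print(f"#{n}: {d:.3f} in, {pellet_mass_g(d):.3f} g per pellet")
```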
Applications outside firearms
When used as a pourable/mouldable weight, lead shot may be left loose, or mixed with a bonding agent such as epoxy to contain and stabilize the pellets after they are poured.
Some applications of lead shot are:
As ballast in various situations, especially where a dense, pourable weight is required. Generally, small shot is best for these applications, as it can be poured more like a liquid. Completely round shot is not required.
Stress testing: Providing variable weights in strength-of-materials stress-testing systems. Shot pours from a hopper into a basket, which is connected to the test item. When the test item fractures, the chute closes and the mass of the lead shot in the basket is used to calculate the fracture stress of the item.
Hydrometers: use a weight made of shot, since the weight has to be poured into a narrow glass vessel.
Split shot, a larger type of lead shot where each pellet is cut part-way through the diameter. This type of shot was formerly commonly used as a line weight in angling. They are no longer solely manufactured from lead but instead are often made from softer materials that can be easily pressed onto the fishing line instead of being closed in a crimp using pliers, as was once common.
The heads of some dead blow hammers are filled with shot to minimize rebound off the struck surface.
Shot belt: some scuba diving weight belts contain pouches filled with lead shot.
Many blackjacks and saps use lead shot as a flexible weight to deliver high energy blows while minimizing damage from sharp impact force (similar to the way it is used in dead blow hammers).
Loudspeaker stands can be filled with lead shot for additional acoustic decoupling, as well as stability.
Model rocketry: to add weight to the nose of the rocket, increasing the stability factor.
Due to its heat capacity and low thermal conductivity at low temperatures, lead shot has been used as a suitable material for a regenerator in Stirling engines and thermoacoustic cryocoolers.
Due to lead's high density, it is used to attenuate radiation, especially X-rays and gamma rays. Lead shot may be enclosed in a vest, blanket, or bag that is placed around a point source for radiation shielding.
Bird lead poisoning
Lead shot-related waterfowl poisonings were first documented in the US in the 1880s; by 1919, the spent lead pellets from waterfowl hunting were positively identified as a major source of deaths of bottom-feeding waterfowl. Once ingested, stomach acids and mechanical action cause the lead to break down and be absorbed into the body and bloodstream, resulting in death. "If a bird swallows only one pellet, it usually survives, although its immune system and fertility are likely to be affected. Even low concentrations of lead have a negative impact on energy storage, which affects the ability to prepare for migration." Upland game birds such as mourning doves, ring-necked pheasants, wild turkey, northern bobwhite quail and chukars can also ingest lead and thus be poisoned when they feed on seeds.
Lead from spent ammunition also impacts scavenging bird species such as vultures, ravens, eagles and other birds of prey. Foraging studies of the endangered Californian condor have shown that avian scavengers consume lead fragments in gut piles left in the field from harvested big game animals, as well as by the consumption of small game, or "pest animal," carcasses that have been shot with lead-core ammo, but not retrieved. Not all lead exposure in these circumstances leads to immediate mortality, but multiple sub-lethal exposures result in secondary poisoning impacts, which eventually lead to death. Among condors around the Grand Canyon, lead poisoning because of eating lead shot is the most frequently diagnosed cause of death.
Restrictions on the use of lead
Alternatives to lead shot are mandated for use by hunters in certain locations or when hunting migratory waterfowl and migratory birds or when hunting within federal waterfowl production areas, wildlife refuges, or some state wildlife management areas. Shot pellets used in waterfowl hunting must be lead-free in the United States, Canada, and in the European Union.
Lead shot is also banned within an eight-county area in California designated as the condor's range. As of 2011, thirty-five states prohibited lead shot use in such specially-specified areas when hunting.
In an effort to protect the condor, the use of projectiles containing lead has been banned for hunting wild boar, deer, elk, pronghorn antelope, coyote, squirrel, and other non-game wildlife in areas of California designated as its habitat range. The bald eagle has similarly been shown to be affected by lead originating from dead or wounded waterfowl; the requirement to protect this species was one of the biggest factors behind laws introduced in 1991 by the United States Fish and Wildlife Service to ban lead shot in migratory waterfowl hunting.
Hunting restrictions have also banned the use of lead shot while hunting migratory waterfowl in at least 29 countries by international agreement, for example the Agreement on the Conservation of African-Eurasian Migratory Waterbirds. Depending on hunting laws, alternatives to lead shot are mandated for use by hunters in some locations when hunting migratory birds, notably waterfowl. In the US, the restrictions are limited to migratory waterfowl, while Canadian restrictions are wider and apply (with some exceptions) to all migratory birds. The hunting of upland migratory birds such as mourning doves was specifically excluded from the 1991 US restrictions as scientific evidence did not support their contribution to the poisoning of bald eagles. In 1985, Denmark banned the use of lead in wetlands covered by the Ramsar Convention, later expanding this restriction to the whole country. The use of lead has been banned for all hunting activities in the Netherlands as of 1992.
The Missouri Department of Conservation introduced regulations in 2007 in some hunting areas requiring the use of non-toxic shot to protect upland birds. Some clay pigeon ranges in the US have banned the use of lead after elevated levels of lead were found in waterfowl, small birds, mammals and frogs in their vicinity.
Non-toxic alternatives to lead shot
Approved alternatives while hunting migratory waterfowl include pellets manufactured from steel, tungsten-iron, tungsten-polymer, tungsten-nickel-iron, and bismuth-tin in place of lead shot. In Canada, the United States, the United Kingdom, and many western European countries (France as of 2006), all shot used for hunting migratory waterfowl must now be non-toxic, and therefore may not contain any lead.
Steel was one of the first widely used lead alternatives that the ammunition industry turned to. But steel is one hundred times harder than lead, with only two-thirds its density, resulting in undesirable ballistic properties compared to lead. Steel shot can be as hard as some barrels, and may therefore damage chokes on older firearms that were designed only for use with softer lead shot. The higher pressures required to compensate for the lower density of steel may exceed the design limits of a barrel.
Within recent years, several companies have created non-toxic shot out of bismuth, tungsten, or other elements or alloys with a density similar to or greater than lead, and with a shot softness that results in ballistic properties that are comparable to lead. These shells provide more consistent patterns and greater range than steel shot. They are also generally safe to use in older shotguns with barrels and chokes not rated for use with steel shot, such as for bismuth and tungsten-polymer (although not tungsten-iron) shot. Unfortunately, all non-lead shot other than steel is far more expensive than lead, which has diminished in its acceptance by hunters.
| Technology | Ammunition | null |
19009006 | https://en.wikipedia.org/wiki/Fur | Fur | Fur is a thick growth of hair that covers the skin of almost all mammals. It consists of a combination of oily guard hair on top and thick underfur beneath. The guard hair keeps moisture from reaching the skin; the underfur acts as an insulating blanket that keeps the animal warm.
The fur of mammals has many uses: protection, sensory purposes, waterproofing, and camouflaging, with the primary usage being thermoregulation. The types of hair include
definitive, which may be shed after reaching a certain length;
vibrissae, which are sensory hairs and are most commonly whiskers;
pelage, which consists of guard hairs, under-fur, and awn hair;
spines, which are a type of stiff guard hair used for defense in, for example, porcupines;
bristles, which are long hairs usually used in visual signals, such as the mane of a lion;
velli, often called "down fur", which insulates newborn mammals; and
wool, which is long, soft, and often curly.
Hair length is negligible in thermoregulation, as some tropical mammals, such as sloths, have the same fur length as some arctic mammals but with less insulation; and, conversely, other tropical mammals with short hair have the same insulating value as arctic mammals. The denseness of fur can increase an animal's insulation value, and arctic mammals especially have dense fur; for example, the muskox has long guard hairs as well as a dense underfur, which together form an airtight coat that allows it to survive extreme cold. Some desert mammals, such as camels, use dense fur to prevent solar heat from reaching their skin, allowing the animal to stay cool; the outer surface of a camel's fur can become far hotter in summer than the skin beneath it. Aquatic mammals, conversely, trap air in their fur to conserve heat by keeping the skin dry.
Mammalian coats are colored for a variety of reasons, the major selective pressures including camouflage, sexual selection, communication, and physiological processes such as temperature regulation. Camouflage is a powerful influence in many mammals, as it helps to conceal individuals from predators or prey. Aposematism, warning off possible predators, is the most likely explanation of the black-and-white pelage of many mammals which are able to defend themselves, such as in the foul-smelling skunk and the powerful and aggressive honey badger. In arctic and subarctic mammals such as the arctic fox (Vulpes lagopus), collared lemming (Dicrostonyx groenlandicus), stoat (Mustela erminea), and snowshoe hare (Lepus americanus), seasonal color change between brown in summer and white in winter is driven largely by camouflage. Differences in female and male coat color may indicate nutrition and hormone levels, important in mate selection. Some arboreal mammals, notably primates and marsupials, have shades of violet, green, or blue skin on parts of their bodies, indicating some distinct advantage in their largely arboreal habitat due to convergent evolution. The green coloration of sloths, however, is the result of a symbiotic relationship with algae. Coat color is sometimes sexually dimorphic, as in many primate species. Coat color may influence the ability to retain heat, depending on how much light is reflected. Mammals with darker colored coats can absorb more heat from solar radiation and stay warmer; some smaller mammals, such as voles, have darker fur in the winter. The white, pigmentless fur of arctic mammals, such as the polar bear, may reflect more solar radiation directly onto the skin.
The term pelage (French, from Middle French poil 'hair', from Old French peil, from Latin pilus) is sometimes used to refer to an animal's complete coat. The term fur is also used to refer to animal pelts that have been processed into leather with their hair still attached. The words fur or furry are also used, more casually, to refer to hair-like growths or formations, particularly when the subject being referred to exhibits a dense coat of fine, soft "hairs". If layered, rather than grown as a single coat, it may consist of short down hairs, long guard hairs, and in some cases, medium awn hairs. Mammals with reduced amounts of fur are often called "naked", as with the naked mole-rat, or "hairless", as with hairless dogs.
An animal with commercially valuable fur is known within the fur industry as a furbearer. The use of fur as clothing or decoration is controversial; animal welfare advocates object to the trapping and killing of wildlife, and the confinement and killing of animals on fur farms.
Composition
The modern mammalian fur arrangement is known to have occurred as far back as docodonts, haramiyidans and eutriconodonts, with specimens of Castorocauda, Megaconus and Spinolestes preserving compound follicles with both guard hair and underfur.
Fur may consist of three layers, each with a different type of hair.
Down hair
Down hair (also known as underfur, undercoat, underhair or ground hair) is the bottomor innerlayer, composed of wavy or curly hairs with no straight portions or sharp points. Down hairs, which are also flat, tend to be the shortest and most numerous in the coat. Thermoregulation is the principal function of the down hair, which insulates a layer of dry air next to the skin.
Awn hair
The awn hair can be thought of as a hybrid, bridging the gap between the distinctly different characteristics of down and guard hairs. Awn hairs begin their growth much like guard hairs, but less than halfway to their full length, awn hairs start to grow thin and wavy like down hair. The proximal part of the awn hair assists in thermoregulation (like the down hair), whereas the distal part can shed water (like the guard hair). The awn hair's thin basal portion does not allow the amount of piloerection that the stiffer guard hairs are capable of. Mammals with well-developed down and guard hairs also usually have large numbers of awn hairs, which may even sometimes be the bulk of the visible coat.
Guard hair
Guard hair (overhair) is the top—or outer—layer of the coat. Guard hairs are longer, generally coarser, and have nearly straight shafts that protrude through the layer of softer down hair. The distal end of the guard hair is the visible layer of most mammal coats. This layer has the most marked pigmentation and gloss, manifesting as coat markings that are adapted for camouflage or display. Guard hair repels water and blocks sunlight, protecting the undercoat and skin in wet or aquatic habitats, and from the sun's ultraviolet radiation. Guard hairs can also reduce the severity of cuts or scratches to the skin. Many mammals, such as the domestic dog and cat, have a pilomotor reflex that raises their guard hairs as part of a threat display when agitated.
Mammals with reduced fur
Hair is one of the defining characteristics of mammals; however, several species or breeds have considerably reduced amounts of fur. These are often called "naked" or "hairless".
Natural selection
Some mammals naturally have reduced amounts of fur. Some semiaquatic or aquatic mammals such as cetaceans, pinnipeds and hippopotamuses have evolved hairlessness, presumably to reduce resistance through water. The naked mole-rat has evolved hairlessness, perhaps as an adaptation to its subterranean lifestyle. Two of the largest extant terrestrial mammals, the elephant and the rhinoceros, are largely hairless. The hairless bat is mostly hairless but does have short bristly hairs around its neck, on its front toes, and around the throat sac, along with fine hairs on the head and tail membrane. Most hairless animals cannot stay in the sun for long periods of time, or in the cold for too long. Marsupials are born hairless and grow their fur later in development.
Humans are the only primate species that have undergone significant hair loss. The hairlessness of humans compared to related species may be due to loss of functionality in the pseudogene KRTHAP1, which helps produce keratin. Although researchers dated the mutation to about 240,000 years ago, both the Altai Neanderthal and Denisovan peoples possessed the loss-of-function mutation, indicating it is much older. Mutations in the gene HR can lead to complete hair loss, though this is not typical in humans.
Artificial selection
At times, when a hairless domesticated animal is discovered, usually owing to a naturally occurring genetic mutation, humans may intentionally inbreed those hairless individuals and, after multiple generations, artificially create hairless breeds. There are several breeds of hairless cats, perhaps the most commonly known being the Sphynx cat. Similarly, there are some breeds of hairless dogs. Other examples of artificially selected hairless animals include the hairless guinea-pig, nude mouse, and the hairless rat.
Use in clothing
Fur has long served as a source of clothing for humans, including Neanderthals. Historically, it was worn for its insulating quality, with aesthetics becoming a factor over time. Pelts were worn in or out, depending on their characteristics and desired use. Today fur and trim used in garments may be dyed bright colors or to mimic exotic animal patterns, or shorn close like velvet. The term "a fur" may connote a coat, wrap, or shawl.
The manufacturing of fur clothing involves obtaining animal pelts where the hair is left on the animal's processed skin. In contrast, making leather involves removing the hair from the hide or pelt and using only the skin.
Fur is also used to make felt. A common felt is made from beaver fur and is used in bowler hats, top hats, and high-end cowboy hats.
Common furbearers used include fox, rabbit, mink, muskrat, leopard, beaver, ermine, otter, sable, jaguar, seal, coyote, chinchilla, raccoon, lemur, and possum.
| Biology and health sciences | Integumentary system | null |
19009110 | https://en.wikipedia.org/wiki/Rain | Rain | Rain is water droplets that have condensed from atmospheric water vapor and then fall under gravity. Rain is a major component of the water cycle and is responsible for depositing most of the fresh water on the Earth. It provides water for hydroelectric power plants, crop irrigation, and suitable conditions for many types of ecosystems.
The major cause of rain production is moisture moving along three-dimensional zones of temperature and moisture contrasts known as weather fronts. If enough moisture and upward motion is present, precipitation falls from convective clouds (those with strong upward vertical motion) such as cumulonimbus (thunder clouds) which can organize into narrow rainbands. In mountainous areas, heavy precipitation is possible where upslope flow is maximized within windward sides of the terrain at elevation which forces moist air to condense and fall out as rainfall along the sides of mountains. On the leeward side of mountains, desert climates can exist due to the dry air caused by downslope flow which causes heating and drying of the air mass. The movement of the monsoon trough, or Intertropical Convergence Zone, brings rainy seasons to savannah climes.
The urban heat island effect leads to increased rainfall, both in amounts and intensity, downwind of cities. Global warming is also causing changes in the precipitation pattern, including wetter conditions across eastern North America and drier conditions in the tropics. Antarctica is the driest continent. The globally averaged annual precipitation over land is , but over the whole Earth, it is much higher at . Climate classification systems such as the Köppen classification system use average annual rainfall to help differentiate between differing climate regimes. Rainfall is measured using rain gauges. Rainfall amounts can be estimated by weather radar.
Formation
Water-saturated air
Air contains water vapor, and the amount of water in a given mass of dry air, known as the mixing ratio, is measured in grams of water per kilogram of dry air (g/kg). The amount of moisture in the air is also commonly reported as relative humidity, which is the percentage of the total water vapor that air can hold at a particular air temperature. How much water vapor a parcel of air can contain before it becomes saturated (100% relative humidity) and forms into a cloud (a group of visible tiny water or ice particles suspended above the Earth's surface) depends on its temperature. Warmer air can contain more water vapor than cooler air before becoming saturated. Therefore, one way to saturate a parcel of air is to cool it. The dew point is the temperature to which a parcel must be cooled in order to become saturated.
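As an illustration of the saturation arithmetic, the dew point can be estimated from temperature and relative humidity with the Magnus approximation; the formula and coefficients below are an assumption chosen for illustration, not something the text above specifies.

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (deg C) via the Magnus formula.

    The coefficients are common Magnus-Tetens values, valid roughly
    for -40 to 50 deg C; this is an approximation, not an exact result.
    """
    a, b = 17.625, 243.04
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# A parcel at 25 deg C and 60% relative humidity must cool to about
# 16.7 deg C before it saturates and cloud droplets can begin to form.
print(round(dew_point_c(25.0, 60.0), 1))
```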
There are four main mechanisms for cooling the air to its dew point: adiabatic cooling, conductive cooling, radiational cooling, and evaporative cooling. Adiabatic cooling occurs when air rises and expands. The air can rise due to convection, large-scale atmospheric motions, or a physical barrier such as a mountain (orographic lift). Conductive cooling occurs when the air comes into contact with a colder surface, usually by being blown from one surface to another, for example from a liquid water surface to colder land. Radiational cooling occurs due to the emission of infrared radiation, either by the air or by the surface underneath. Evaporative cooling occurs when moisture is added to the air through evaporation, which forces the air temperature to cool to its wet-bulb temperature, or until it reaches saturation.
The main ways water vapor is added to the air are wind convergence into areas of upward motion, precipitation or virga falling from above, daytime heating evaporating water from the surface of oceans, water bodies or wet land, transpiration from plants, cool or dry air moving over warmer water, and lifting air over mountains. Water vapor normally begins to condense on condensation nuclei such as dust, ice, and salt in order to form clouds. Elevated portions of weather fronts (which are three-dimensional in nature) force broad areas of upward motion within the Earth's atmosphere which form cloud decks such as altostratus or cirrostratus. Stratus is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass. It can also form due to the lifting of advection fog during breezy conditions.
Coalescence and fragmentation
Coalescence occurs when water droplets fuse to create larger water droplets. Air resistance typically causes the water droplets in a cloud to remain stationary. When air turbulence occurs, water droplets collide, producing larger droplets.
As these larger water droplets descend, coalescence continues, so that drops become heavy enough to overcome air resistance and fall as rain. Coalescence generally happens most often in clouds that are above freezing (at their tops) and is also known as the warm rain process. In clouds below freezing, when ice crystals gain enough mass they begin to fall. This generally requires more mass than coalescence when occurring between the crystal and neighboring water droplets. This process is temperature dependent, as supercooled water droplets only exist in a cloud that is below freezing. In addition, because of the great temperature difference between cloud and ground level, these ice crystals may melt as they fall and become rain.
Raindrops span a range of mean diameters but develop a tendency to break up at larger sizes. Smaller drops are called cloud droplets, and their shape is spherical. As a raindrop increases in size, its shape becomes more oblate, with its largest cross-section facing the oncoming airflow. Large raindrops become increasingly flattened on the bottom, like hamburger buns; very large ones are shaped like parachutes. Contrary to popular belief, their shape does not resemble a teardrop. The biggest raindrops on Earth were recorded over Brazil and the Marshall Islands in 2004; some of them were as large as . The large size is explained by condensation on large smoke particles or by collisions between drops in small regions with particularly high content of liquid water.
Raindrops associated with melting hail tend to be larger than other raindrops.
Intensity and duration of rainfall are usually inversely related, i.e., high-intensity storms are likely to be of short duration and low-intensity storms can have a long duration.
Droplet size distribution
The final droplet size distribution is an exponential distribution. The number of droplets with diameter between d and d + dd per unit volume of space is n(d) dd = n0 e^(−d/⟨d⟩) dd. This is commonly referred to as the Marshall–Palmer law after the researchers who first characterized it. The parameters are somewhat temperature-dependent, and the slope also scales with the rate of rainfall, ⟨d⟩^(−1) = 41 R^(−0.21) (d in centimeters and R in millimeters per hour).
Deviations can occur for small droplets and during different rainfall conditions. The distribution tends to fit averaged rainfall, while instantaneous size spectra often deviate and have been modeled as gamma distributions. The distribution has an upper limit due to droplet fragmentation.
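A minimal sketch of evaluating that distribution, using the classic Marshall–Palmer parameter values (assumed here: n0 = 8000 drops per cubic meter per millimeter, and the slope above restated per millimeter):

```python
import math

def marshall_palmer(d_mm: float, rain_rate_mm_per_h: float) -> float:
    """Drop number density n(d) in drops per m^3 per mm of diameter.

    Classic Marshall-Palmer parameters (assumed values): n0 = 8000
    m^-3 mm^-1 and slope 4.1 * R**-0.21 per mm, equivalent to the
    41 * R**-0.21 per cm quoted in the text above.
    """
    n0 = 8000.0
    slope = 4.1 * rain_rate_mm_per_h ** -0.21
    return n0 * math.exp(-slope * d_mm)

# At a rain rate of 5 mm/h, 1 mm drops vastly outnumber 3 mm drops:
print(round(marshall_palmer(1.0, 5.0)))     # ~430 drops per m^3 per mm
print(round(marshall_palmer(3.0, 5.0), 1))  # ~1.2 drops per m^3 per mm
```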
Raindrop impacts
Raindrops impact at their terminal velocity, which is greater for larger drops due to their larger mass-to-drag ratio. At sea level and without wind, drizzle impacts at or , while large drops impact at around or .
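Although the article's specific impact speeds were lost in extraction, the size dependence of terminal velocity can be illustrated with an empirical fit; the Atlas et al. (1973) relation below is an assumed stand-in, not a value taken from the text.

```python
import math

def terminal_velocity_m_s(diameter_mm: float) -> float:
    """Empirical sea-level fall speed of a raindrop in still air.

    Atlas et al. (1973) fit, assumed here for illustration; it is
    reasonable roughly for drops of 0.5 to 5 mm diameter.
    """
    return 9.65 - 10.3 * math.exp(-0.6 * diameter_mm)

# Small drizzle drops fall at a couple of m/s; large drops near 9 m/s.
for d in (0.5, 1.0, 2.0, 5.0):
    print(d, "mm:", round(terminal_velocity_m_s(d), 2), "m/s")
```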
Rain falling on loosely packed material such as newly fallen ash can produce dimples that can be fossilized, called raindrop impressions. The air density dependence of the maximum raindrop diameter together with fossil raindrop imprints has been used to constrain the density of the air 2.7 billion years ago.
The sound of raindrops hitting water is caused by bubbles of air oscillating underwater.
The METAR code for rain is RA, while the coding for rain showers is SHRA.
Virga
In certain conditions, precipitation may fall from a cloud but then evaporate or sublime before reaching the ground. This is termed virga and is more often seen in hot and dry climates.
Causes
Frontal activity
Stratiform (a broad shield of precipitation with a relatively similar intensity) and dynamic precipitation (convective precipitation which is showery in nature with large changes in intensity over short distances) occur as a consequence of slow ascent of air in synoptic systems (on the order of cm/s), such as in the vicinity of cold fronts and near and poleward of surface warm fronts. Similar ascent is seen around tropical cyclones outside the eyewall, and in comma-head precipitation patterns around mid-latitude cyclones.
A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually, their passage is associated with a drying of the air mass. Occluded fronts usually form around mature low-pressure areas. What separates rainfall from other precipitation types, such as ice pellets and snow, is the presence of a thick layer of air aloft which is above the melting point of water, which melts the frozen precipitation well before it reaches the ground. If there is a shallow near-surface layer that is below freezing, freezing rain (rain which freezes on contact with surfaces in subfreezing environments) will result. Hail becomes an increasingly infrequent occurrence when the freezing level within the atmosphere exceeds above ground level.
Convection
Convective rain, or showery precipitation, occurs from convective clouds (e.g., cumulonimbus or cumulus congestus). It falls as showers with rapidly changing intensity. Convective precipitation falls over a certain area for a relatively short time, as convective clouds have limited horizontal extent. Most precipitation in the tropics appears to be convective; however, it has been suggested that stratiform precipitation also occurs. Graupel and hail indicate convection. In mid-latitudes, convective precipitation is intermittent and often associated with baroclinic boundaries such as cold fronts, squall lines, and warm fronts.
Orographic effects
Orographic precipitation occurs on the windward side of mountains and is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air (see katabatic wind) on the descending and generally warming, leeward side where a rain shadow is observed.
In Hawaii, Mount Waiʻaleʻale, on the island of Kauai, is notable for its extreme rainfall, as it is amongst the places in the world with the highest levels of rainfall, with . Systems known as Kona storms affect the state with heavy rains between October and April. Local climates vary considerably on each island due to their topography, divisible into windward (Koolau) and leeward (Kona) regions based upon location relative to the higher mountains. Windward sides face the east to northeast trade winds and receive much more rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover.
In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desert-like climate just downwind across western Argentina. The Sierra Nevada range creates the same effect in North America forming the Great Basin and Mojave Deserts.
Within the tropics
The wet, or rainy, season is the time of year, covering one or more months, when most of the average annual rainfall in a region falls. The term green season is also sometimes used as a euphemism by tourist authorities. Areas with wet seasons are dispersed across portions of the tropics and subtropics. Savanna climates and areas with monsoon regimes have wet summers and dry winters. Tropical rainforests technically do not have dry or wet seasons, since their rainfall is equally distributed through the year. Some areas with pronounced rainy seasons will see a break in rainfall mid-season when the Intertropical Convergence Zone or monsoon trough move poleward of their location during the middle of the warm season. When the wet season occurs during the warm season, or summer, rain falls mainly during the late afternoon and early evening hours. The wet season is a time when air quality improves, freshwater quality improves, and vegetation grows significantly.
Tropical cyclones, a source of very heavy rainfall, consist of large air masses several hundred miles across with low pressure at the centre and with winds blowing inward towards the centre in either a clockwise direction (southern hemisphere) or counterclockwise (northern hemisphere). Although cyclones can take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they impact, as they may bring much-needed precipitation to otherwise dry regions. Areas in their path can receive a year's worth of rainfall from a tropical cyclone passage.
Human influence
The fine particulate matter produced by car exhaust and other human sources of pollution forms cloud condensation nuclei, leading to the production of clouds and increasing the likelihood of rain. As commuters and commercial traffic cause pollution to build up over the course of the week, the likelihood of rain increases: it peaks by Saturday, after five days of weekday pollution have built up. In heavily populated areas that are near the coast, such as the United States' Eastern Seaboard, the effect can be dramatic: there is a 22% higher chance of rain on Saturdays than on Mondays. The urban heat island effect warms cities above surrounding suburbs and rural areas. This extra heat leads to greater upward motion, which can induce additional shower and thunderstorm activity. Rainfall rates downwind of cities are increased between 48% and 116%. Partly as a result of this warming, monthly rainfall is about 28% greater downwind of cities than upwind. Some cities induce a total precipitation increase of 51%.
Increasing temperatures tend to increase evaporation, which can lead to more precipitation. Precipitation generally increased over land north of 30°N from 1900 through 2005 but has declined over the tropics since the 1970s. Globally there has been no statistically significant overall trend in precipitation over the past century, although trends have varied widely by region and over time. Eastern portions of North and South America, northern Europe, and northern and central Asia have become wetter. The Sahel, the Mediterranean, southern Africa and parts of southern Asia have become drier. There has been an increase in the number of heavy precipitation events over many areas during the past century, as well as an increase since the 1970s in the prevalence of droughts, especially in the tropics and subtropics. Changes in precipitation and evaporation over the oceans are suggested by the decreased salinity of mid- and high-latitude waters (implying more precipitation), along with increased salinity in lower latitudes (implying less precipitation and/or more evaporation). Over the contiguous United States, total annual precipitation increased at an average rate of 6.1 percent per century since 1900, with the greatest increases within the East North Central climate region (11.6 percent per century) and the South (11.1 percent). Hawaii was the only region to show a decrease (−9.25 percent).
Analysis of 65 years of United States rainfall records shows that the lower 48 states have seen an increase in heavy downpours since 1950. The largest increases are in the Northeast and Midwest, which in the past decade have seen 31 and 16 percent more heavy downpours compared to the 1950s. Rhode Island is the state with the largest increase, 104%. McAllen, Texas is the city with the largest increase, 700%. Heavy downpours in the analysis are days on which total precipitation exceeded the top one percent of all rain and snow days during the years 1950–2014.
The most successful attempts at influencing weather involve cloud seeding, which include techniques used to increase winter precipitation over mountains and suppress hail.
Characteristics
Patterns
Rainbands are cloud and precipitation areas which are significantly elongated. Rainbands can be stratiform or convective, and are generated by differences in temperature. When noted on weather radar imagery, this precipitation elongation is referred to as banded structure. Rainbands in advance of warm occluded fronts and warm fronts are associated with weak upward motion, and tend to be wide and stratiform in nature.
Rainbands spawned near and ahead of cold fronts can be squall lines which are able to produce tornadoes. Rainbands associated with cold fronts can be warped by mountain barriers perpendicular to the front's orientation due to the formation of a low-level barrier jet. Bands of thunderstorms can form with sea breeze and land breeze boundaries if enough moisture is present. If sea breeze rainbands become active enough just ahead of a cold front, they can mask the location of the cold front itself.
Once a cyclone occludes, an occluded front (a trough of warm air aloft) will be caused by strong southerly winds on its eastern periphery rotating aloft around its northeast, and ultimately northwestern, periphery (also termed the warm conveyor belt), forcing a surface trough to continue into the cold sector on a similar curve to the occluded front. The front creates the portion of an occluded cyclone known as its comma head, due to the comma-like shape of the mid-tropospheric cloudiness that accompanies the feature. It can also be the focus of locally heavy precipitation, with thunderstorms possible if the atmosphere along the front is unstable enough for convection. Banding within the comma head precipitation pattern of an extratropical cyclone can yield significant amounts of rain. Behind extratropical cyclones during fall and winter, rainbands can form downwind of relatively warm bodies of water such as the Great Lakes. Downwind of islands, bands of showers and thunderstorms can develop due to low-level wind convergence downwind of the island edges. Offshore California, this has been noted in the wake of cold fronts.
Rainbands within tropical cyclones are curved in orientation. Tropical cyclone rainbands contain showers and thunderstorms that, together with the eyewall and the eye, constitute a hurricane or tropical storm. The extent of rainbands around a tropical cyclone can help determine the cyclone's intensity.
Acidity
The phrase acid rain was first used by Scottish chemist Robert Angus Smith in 1852. The pH of rain varies, especially depending on its origin. On America's East Coast, rain that is derived from the Atlantic Ocean typically has a pH of 5.0–5.6; rain that comes across the continent from the west has a pH of 3.8–4.8; and local thunderstorms can have a pH as low as 2.0. Rain becomes acidic primarily due to the presence of two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3). Sulfuric acid is derived from natural sources such as volcanoes, and wetlands (sulfate-reducing bacteria); and anthropogenic sources such as the combustion of fossil fuels, and mining where H2S is present. Nitric acid is produced by natural sources such as lightning, soil bacteria, and natural fires; it is also produced anthropogenically by the combustion of fossil fuels and from power plants. In the past 20 years, the concentrations of nitric and sulfuric acid in rainwater have decreased, which may be due to the significant increase in ammonium (most likely as ammonia from livestock production), which acts as a buffer in acid rain and raises the pH.
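Because pH is a logarithmic scale, the quoted range spans an enormous spread in acidity; a quick check of the hydrogen ion concentration ratio between the extremes:

```python
# [H+] = 10**(-pH), so each pH unit is a tenfold change in acidity.
# Thunderstorm rain at pH 2.0 versus ocean-derived rain at pH 5.6:
typical_ph, extreme_ph = 5.6, 2.0
ratio = 10 ** (typical_ph - extreme_ph)
print(f"about {ratio:.0f} times more acidic")  # ~3981 times
```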
Köppen climate classification
The Köppen classification depends on average monthly values of temperature and precipitation. The most commonly used form of the Köppen classification has five primary types labeled A through E. Specifically, the primary types are A, tropical; B, dry; C, mild mid-latitude; D, cold mid-latitude; and E, polar. The five primary classifications can be further divided into secondary classifications such as rain forest, monsoon, tropical savanna, humid subtropical, humid continental, oceanic climate, Mediterranean climate, steppe, subarctic climate, tundra, polar ice cap, and desert.
Rain forests are characterized by high rainfall, with definitions setting minimum normal annual rainfall between . A tropical savanna is a grassland biome located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes, with rainfall between a year. They are widespread in Africa, and are also found in India, the northern parts of South America, Malaysia, and Australia. The humid subtropical climate zone is where winter rainfall is associated with large storms that the westerlies steer from west to east. Most summer rainfall occurs during thunderstorms and from occasional tropical cyclones. Humid subtropical climates lie on the east sides of continents, roughly between latitudes 20° and 40° away from the equator.
An oceanic (or maritime) climate is typically found along the west coasts at the middle latitudes of all the world's continents, bordering cool oceans, as well as southeastern Australia, and is accompanied by plentiful precipitation year-round. The Mediterranean climate regime resembles the climate of the lands in the Mediterranean Basin, parts of western North America, parts of Western and South Australia, in southwestern South Africa and in parts of central Chile. The climate is characterized by hot, dry summers and cool, wet winters. A steppe is a dry grassland. Subarctic climates are cold with continuous permafrost and little precipitation.
Pollution
Measurement
Gauges
Rain is measured in units of length per unit time, typically in millimeters per hour, or in countries where imperial units are more common, inches per hour. The "length", or more accurately, "depth" being measured is the depth of rain water that would accumulate on a flat, horizontal and impermeable surface during a given amount of time, typically an hour. One millimeter of rainfall is the equivalent of one liter of water per square meter.
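The liter-per-square-meter equivalence follows directly from the units (1 mm times 1 m^2 is 0.001 m^3, i.e. one liter); a minimal check:

```python
def rainfall_volume_liters(depth_mm: float, area_m2: float) -> float:
    """Volume of water delivered by a rainfall depth over an area.

    depth_mm * area_m2 conveniently equals liters, since
    0.001 m * 1 m^2 = 0.001 m^3 = 1 L.
    """
    return depth_mm * area_m2

print(rainfall_volume_liters(1.0, 1.0))     # 1.0 L, the equivalence above
print(rainfall_volume_liters(25.0, 500.0))  # 12500 L on a hypothetical 500 m^2 roof
```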
The standard way of measuring rainfall or snowfall is the standard rain gauge, which can be found in 100-mm (4-in) plastic and 200-mm (8-in) metal varieties. The inner cylinder is filled by of rain, with overflow flowing into the outer cylinder. Plastic gauges have markings on the inner cylinder down to resolution, while metal gauges require use of a stick designed with the appropriate markings. After the inner cylinder is filled, the amount inside it is discarded; the cylinder is then refilled with the remaining rainfall from the outer cylinder, adding to the overall total, until the outer cylinder is empty. Other types of gauges include the popular wedge gauge (the cheapest rain gauge and most fragile), the tipping bucket rain gauge, and the weighing rain gauge. For those looking to measure rainfall most inexpensively, a cylindrical can with straight sides will act as a rain gauge if left out in the open, but its accuracy will depend on the ruler used to measure the rain. Any of the above rain gauges can be made at home, with enough know-how.
Once a precipitation measurement is made, it can be submitted through the Internet to various networks that exist across the United States and elsewhere, such as CoCoRAHS or GLOBE. If a network is not available in the area where one lives, the nearest local weather or met office will likely be interested in the measurement.
Remote sensing
One of the main uses of weather radar is to assess the amount of precipitation fallen over large basins for hydrological purposes. For instance, river flood control, sewer management and dam construction are all areas where planners use rainfall accumulation data. Radar-derived rainfall estimates complement surface station data, which can be used for calibration. To produce radar accumulations, rain rates over a point are estimated by using the value of reflectivity data at individual grid points. A radar equation of the form Z = A R^b is then used, where Z represents the radar reflectivity, R represents the rainfall rate, and A and b are constants.
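A hedged sketch of inverting that relation for the rainfall rate, assuming the classic Marshall–Palmer constants A = 200 and b = 1.6 (the text leaves A and b unspecified; operational radars tune them per climate and precipitation type) and reflectivity reported in dBZ:

```python
def rain_rate_mm_per_h(dbz: float, a: float = 200.0, b: float = 1.6) -> float:
    """Invert Z = a * R**b for R, with Z recovered from dBZ.

    a = 200 and b = 1.6 are assumed Marshall-Palmer constants.
    """
    z = 10 ** (dbz / 10.0)       # dBZ -> linear reflectivity (mm^6 / m^3)
    return (z / a) ** (1.0 / b)  # rainfall rate in mm/h

# About 23 dBZ corresponds to light rain of roughly 1 mm/h here.
print(round(rain_rate_mm_per_h(23.0), 2))
```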
Satellite-derived rainfall estimates use passive microwave instruments aboard polar orbiting as well as geostationary weather satellites to indirectly measure rainfall rates. If one wants an accumulated rainfall over a time period, one has to add up all the accumulations from each grid box within the images during that time.
Intensity
Rainfall intensity is classified according to the rate of precipitation, which depends on the considered time. The following categories are used to classify rainfall intensity:
Light rain — when the precipitation rate is < per hour
Moderate rain — when the precipitation rate is between or per hour
Heavy rain — when the precipitation rate is > per hour, or between per hour
Violent rain — when the precipitation rate is > per hour
Terms used for a heavy or violent rain include gully washer, trash-mover and toad-strangler.
The intensity can also be expressed by rainfall erosivity R-factor or in terms of the rainfall time-structure n-index.
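The classification above amounts to simple thresholding on the precipitation rate; the sketch below uses commonly cited thresholds of 2.5, 7.6 and 50 mm per hour, which are assumptions here because the source's own figures were lost in extraction:

```python
def classify_rainfall(rate_mm_per_h: float) -> str:
    """Bucket a precipitation rate into the intensity categories above.

    Thresholds (2.5, 7.6, 50 mm/h) are commonly cited values, assumed
    here rather than taken from the text.
    """
    if rate_mm_per_h < 2.5:
        return "light"
    if rate_mm_per_h < 7.6:
        return "moderate"
    if rate_mm_per_h <= 50.0:
        return "heavy"
    return "violent"

print(classify_rainfall(1.0), classify_rainfall(5.0), classify_rainfall(60.0))
```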
Return period
The average time between occurrences of an event with a specified intensity and duration is called the return period. The intensity of a storm can be predicted for any return period and storm duration, from charts based on historic data for the location. The return period is often expressed as an n-year event. For instance, a 10-year storm describes a rare rainfall event occurring on average once every 10 years. The rainfall will be greater and the flooding will be worse than the worst storm expected in any single year. A 100-year storm describes an extremely rare rainfall event occurring on average once in a century. The rainfall will be extreme and flooding worse than a 10-year event. The probability of an event in any year is the inverse of the return period (assuming the probability remains the same for each year). For instance, a 10-year storm has a probability of occurring of 10 percent in any given year, and a 100-year storm occurs with a 1 percent probability in a year. As with all probability events, it is possible, though improbable, to have multiple 100-year storms in a single year.
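The inverse relationship also gives the chance of seeing at least one such event over a longer window, assuming each year is independent; a minimal sketch:

```python
def prob_at_least_one(return_period_years: float, window_years: int) -> float:
    """P(at least one event in the window), assuming independent years."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** window_years

# A "100-year" storm is far from impossible on a human timescale:
print(round(prob_at_least_one(100, 1), 3))   # 0.01 in any single year
print(round(prob_at_least_one(100, 30), 3))  # ~0.26 over a 30-year span
```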
Forecasting
The Quantitative Precipitation Forecast (abbreviated QPF) is the expected amount of liquid precipitation accumulated over a specified time period over a specified area. A QPF will be specified when a measurable precipitation type reaching a minimum threshold is forecast for any hour during a QPF valid period. Precipitation forecasts tend to be bound by synoptic hours such as 0000, 0600, 1200 and 1800 GMT. Terrain is considered in QPFs by use of topography or based upon climatological precipitation patterns from observations with fine detail. Starting in the mid to late 1990s, QPFs were used within hydrologic forecast models to simulate impact to rivers throughout the United States.
Forecast models show significant sensitivity to humidity levels within the planetary boundary layer, or in the lowest levels of the atmosphere, which decreases with height. QPF can be generated on a quantitative, forecasting amounts, or a qualitative, forecasting the probability of a specific amount, basis. Radar imagery forecasting techniques show higher skill than model forecasts within 6 to 7 hours of the time of the radar image. The forecasts can be verified through use of rain gauge measurements, weather radar estimates, or a combination of both. Various skill scores can be determined to measure the value of the rainfall forecast.
Impact
Agricultural
Precipitation, especially rain, has a dramatic effect on agriculture. All plants need at least some water to survive; therefore rain (being the most effective means of watering) is important to agriculture. While a regular rain pattern is usually vital to healthy plants, too much or too little rainfall can be harmful, even devastating to crops. Drought can kill crops and increase erosion, while overly wet weather can cause harmful fungus growth. Plants need varying amounts of rainfall to survive. For example, certain cacti require small amounts of water, while tropical plants may need up to hundreds of inches of rain per year to survive.
In areas with wet and dry seasons, soil nutrients diminish and erosion increases during the wet season. Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature. Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season. Rain may be harvested in rainwater tanks and treated for potable use, or used untreated for non-potable purposes indoors or for irrigation. Excessive rain during short periods of time can cause flash floods.
Culture and religion
Cultural attitudes towards rain differ across the world. In temperate climates, people tend to be more stressed when the weather is unstable or cloudy, with its impact greater on men than women. Rain can also bring joy, as some consider it to be soothing or enjoy its aesthetic appeal. In dry places, such as India, or during periods of drought, rain lifts people's moods. In Botswana, the Setswana word for rain, pula, is used as the name of the national currency, in recognition of the economic importance of rain in that country, which has a desert climate. Several cultures have developed means of dealing with rain and have developed numerous protection devices such as umbrellas and raincoats, and diversion devices such as gutters and storm drains that lead rains to sewers. Many people find the scent during and immediately after rain pleasant or distinctive. The source of this scent is petrichor, an oil produced by plants, then absorbed by rocks and soil, and later released into the air during rainfall.
Rain holds an important religious significance in many cultures. The ancient Sumerians believed that rain was the semen of the sky god An, which fell from the heavens to inseminate his consort, the earth goddess Ki, causing her to give birth to all the plants of the earth. The Akkadians believed that the clouds were the breasts of Anu's consort Antu and that rain was milk from her breasts. According to Jewish tradition, in the first century BC, the Jewish miracle-worker Honi ha-M'agel ended a three-year drought in Judaea by drawing a circle in the sand and praying for rain, refusing to leave the circle until his prayer was granted. In his Meditations, the Roman emperor Marcus Aurelius preserves a prayer for rain made by the Athenians to the Greek sky god Zeus. Various Native American tribes are known to have historically conducted rain dances in effort to encourage rainfall. Rainmaking rituals are also important in many African cultures. In the present-day United States, various state governors have held Days of Prayer for rain, including the Days of Prayer for Rain in the State of Texas in 2011.
Global climatology
Approximately of water falls as precipitation each year across the globe with of it over the oceans. Given the Earth's surface area, that means the globally averaged annual precipitation is . Deserts are defined as areas with an average annual precipitation of less than per year, or as areas where more water is lost by evapotranspiration than falls as precipitation.
Deserts
The northern half of Africa is dominated by the world's most extensive hot, dry region, the Sahara Desert. Some deserts also occupy much of southern Africa: the Namib and the Kalahari. Across Asia, a large annual rainfall minimum, composed primarily of deserts, stretches from the Gobi Desert in Mongolia west-southwest through western Pakistan (Balochistan) and Iran into the Arabian Desert in Saudi Arabia. Most of Australia is semi-arid or desert, making it the world's driest inhabited continent. In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desert-like climate just downwind across western Argentina. The drier areas of the United States are regions where the Sonoran Desert overspreads the Desert Southwest, the Great Basin, and central Wyoming.
Polar deserts
Since rain only falls as liquid, it rarely falls when surface temperatures are below freezing unless there is a layer of warm air aloft, in which case it becomes freezing rain. Due to the entire atmosphere being below freezing, frigid climates usually see very little rainfall and are often known as polar deserts. A common biome in this area is the tundra, which has a short summer thaw and a long frozen winter. Ice caps see no rain at all, making Antarctica the world's driest continent.
Rainforests
Rainforests are areas of the world with very high rainfall. Both tropical and temperate rainforests exist. Tropical rainforests occupy a large band of the planet, mainly along the equator. Most temperate rainforests are located on mountainous west coasts between 45 and 55 degrees latitude but are often found in other areas.
Around 40–75% of all biotic life is found in rainforests. Rainforests are also responsible for 28% of the world's oxygen turnover.
Monsoons
The equatorial region near the Intertropical Convergence Zone (ITCZ), or monsoon trough, is the wettest portion of the world's continents. Annually, the rain belt within the tropics marches northward by August, then moves back southward into the Southern Hemisphere by February and March. Within Asia, rainfall is favored across its southern portion from India east and northeast across the Philippines and southern China into Japan due to the monsoon advecting moisture primarily from the Indian Ocean into the region. The monsoon trough can reach as far north as the 40th parallel in East Asia during August before moving southward after that. Its poleward progression is accelerated by the onset of the summer monsoon, which is characterized by the development of lower air pressure (a thermal low) over the warmest part of Asia. Similar, but weaker, monsoon circulations are present over North America and Australia.
During the summer, the Southwest monsoon combined with Gulf of California and Gulf of Mexico moisture moving around the subtropical ridge in the Atlantic Ocean brings the promise of afternoon and evening thunderstorms to the southern tier of the United States as well as the Great Plains. The eastern half of the contiguous United States east of the 98th meridian, the mountains of the Pacific Northwest, and the Sierra Nevada range are the wetter portions of the nation, with average rainfall exceeding per year. Tropical cyclones enhance precipitation across southern sections of the United States, as well as Puerto Rico, the United States Virgin Islands, the Northern Mariana Islands, Guam, and American Samoa.
Impact of the Westerlies
Westerly flow from the mild North Atlantic leads to wetness across western Europe, in particular Ireland and the United Kingdom, where the western coasts can receive between , at sea level and , on the mountains of rain per year. Bergen, Norway is one of the more famous European rain-cities with its yearly precipitation of on average. During the fall, winter, and spring, Pacific storm systems bring most of Hawaii and the western United States much of their precipitation. Over the top of the ridge, the jet stream brings a summer precipitation maximum to the Great Lakes. Large thunderstorm areas known as mesoscale convective complexes move through the Plains, Midwest, and Great Lakes during the warm season, contributing up to 10% of the annual precipitation to the region.
The El Niño-Southern Oscillation affects the precipitation distribution by altering rainfall patterns across the western United States, Midwest, the Southeast, and throughout the tropics. There is also evidence that global warming leads to increased precipitation in the eastern portions of North America, while droughts are becoming more frequent in the tropics and subtropics.
Wettest known locations
Cherrapunji, situated on the southern slopes of the Eastern Himalaya in Shillong, India is the confirmed wettest place on Earth, with an average annual rainfall of . The highest recorded rainfall in a single year was in 1861. The 38-year average at nearby Mawsynram, Meghalaya, India is . The wettest spot in Australia is Mount Bellenden Ker in the north-east of the country which records an average of per year, with over of rain recorded during 2000. The Big Bog on the island of Maui has the highest average annual rainfall in the Hawaiian Islands, at . Mount Waiʻaleʻale on the island of Kauaʻi achieves similar torrential rains, while slightly lower than that of the Big Bog, at of rain per year over the last 32 years, with a record in 1982. Its summit is considered one of the rainiest spots on earth, with a reported 350 days of rain per year.
Lloró, a town situated in Chocó, Colombia, is probably the place with the largest rainfall in the world, averaging per year. The Department of Chocó is extraordinarily humid. Tutunendaó, a small town situated in the same department, is one of the wettest estimated places on Earth, averaging per year; in 1974 the town received , the largest annual rainfall measured in Colombia. Unlike Cherrapunji, which receives most of its rainfall between April and September, Tutunendaó receives rain almost uniformly distributed throughout the year. Quibdó, the capital of Chocó, receives the most rain in the world among cities with over 100,000 inhabitants: per year. Storms in Chocó can drop of rainfall in a day. This amount is more than what falls in many cities in a year.
| Physical sciences | Earth science | null |
19013767 | https://en.wikipedia.org/wiki/Medical%20diagnosis | Medical diagnosis | Medical diagnosis (abbreviated Dx or Ds) is the process of determining which disease or condition explains a person's symptoms and signs. It is most often referred to as a diagnosis with the medical context being implicit. The information required for a diagnosis is typically collected from a history and physical examination of the person seeking medical care. Often, one or more diagnostic procedures, such as medical tests, are also done during the process. Sometimes the posthumous diagnosis is considered a kind of medical diagnosis.
Diagnosis is often challenging because many signs and symptoms are nonspecific. For example, redness of the skin (erythema), by itself, is a sign of many disorders and thus does not tell the healthcare professional what is wrong. Thus differential diagnosis, in which several possible explanations are compared and contrasted, must be performed. This involves the correlation of various pieces of information followed by the recognition and differentiation of patterns. Occasionally the process is made easy by a sign or symptom (or a group of several) that is pathognomonic.
Diagnosis is a major component of the procedure of a doctor's visit. From the point of view of statistics, the diagnostic procedure involves classification tests.
Medical uses
A diagnosis, in the sense of diagnostic procedure, can be regarded as an attempt at classification of an individual's condition into separate and distinct categories that allow medical decisions about treatment and prognosis to be made. Subsequently, a diagnostic opinion is often described in terms of a disease or other condition. (In the case of a wrong diagnosis, however, the individual's actual disease or condition is not the same as the individual's diagnosis.) A total evaluation of a condition is often termed a diagnostic workup.
A diagnostic procedure may be performed by various healthcare professionals such as a physician, physiotherapist, dentist, podiatrist, optometrist, nurse practitioner, healthcare scientist or physician assistant. This article uses diagnostician to refer to any of these categories of professional.
A diagnostic procedure (as well as the opinion reached thereby) does not necessarily involve elucidation of the etiology of the diseases or conditions of interest, that is, what caused the disease or condition. Such elucidation can be useful to optimize treatment, further specify the prognosis or prevent recurrence of the disease or condition in the future.
The initial task is to detect a medical indication to perform a diagnostic procedure. Indications include:
Detection of any deviation from what is known to be normal, such as can be described in terms of, for example, anatomy (the structure of the human body), physiology (how the body works), pathology (what can go wrong with the anatomy and physiology), psychology (thought and behavior) and human homeostasis (regarding mechanisms to keep body systems in balance). Knowledge of what is normal and measuring of the patient's current condition against those norms can assist in determining the patient's particular departure from homeostasis and the degree of departure, which in turn can assist in quantifying the indication for further diagnostic processing.
A complaint expressed by a patient.
The fact that a patient has sought a diagnostician can itself be an indication to perform a diagnostic procedure. For example, in a doctor's visit, the physician may already start performing a diagnostic procedure by watching the gait of the patient from the waiting room to the doctor's office even before she or he has started to present any complaints.
Even during an already ongoing diagnostic procedure, there can be an indication to perform another, separate, diagnostic procedure for another, potentially concomitant, disease or condition. This may occur as a result of an incidental finding of a sign unrelated to the parameter of interest, such as can occur in comprehensive tests such as radiological studies like magnetic resonance imaging or blood test panels that also include blood tests that are not relevant for the ongoing diagnosis.
Procedure
General components which are present in a diagnostic procedure in most of the various available methods include:
Complementing the already given information with further data gathering, which may include questions of the medical history (potentially from other people close to the patient as well), physical examination and various diagnostic tests. A diagnostic test is any kind of medical test performed to aid in the diagnosis or detection of disease. Diagnostic tests can also be used to provide prognostic information on people with established disease.
Processing of the answers, findings or other results. Consultations with other providers and specialists in the field may be sought.
There are a number of methods or techniques that can be used in a diagnostic procedure, including performing a differential diagnosis or following medical algorithms. In reality, a diagnostic procedure may involve components of multiple methods.
Differential diagnosis
The method of differential diagnosis is based on finding as many candidate diseases or conditions as possible that could cause the signs or symptoms, followed by a process of elimination, or at least of rendering the entries more or less probable by further medical tests and other processing, aiming to reach the point where only one candidate disease or condition remains as probable. The result may also remain a list of possible conditions, ranked in order of probability or severity. Such a list is often generated by computer-aided diagnosis systems.
The resultant diagnostic opinion by this method can be regarded more or less as a diagnosis of exclusion. Even if it does not result in a single probable disease or condition, it can at least rule out any imminently life-threatening conditions.
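The step of rendering entries more or less probable can be caricatured as a one-step Bayesian update over the candidate list; the sketch below uses entirely invented priors and likelihoods and has no clinical validity:

```python
# Toy Bayesian update over a hypothetical candidate list. All numbers
# are invented for illustration; this is not a clinical tool.
priors = {"condition_A": 0.60, "condition_B": 0.30, "condition_C": 0.10}
# P(observed finding | condition) for one finding from a medical test:
likelihoods = {"condition_A": 0.10, "condition_B": 0.70, "condition_C": 0.60}

unnormalized = {dx: priors[dx] * likelihoods[dx] for dx in priors}
total = sum(unnormalized.values())
posterior = {dx: p / total for dx, p in unnormalized.items()}

# The finding shifts probability toward condition_B, which might be
# tested for first; no entry is ruled out until further tests drive
# its probability low enough.
for dx, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(dx, round(p, 2))
```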
Unless the provider is certain of the condition present, further medical tests, such as medical imaging, are performed or scheduled in part to confirm or disprove the diagnosis but also to document the patient's status and keep the patient's medical history up to date.
If unexpected findings are made during this process, the initial hypothesis may be ruled out and the provider must then consider other hypotheses.
Pattern recognition
In a pattern recognition method the provider uses experience to recognize a pattern of clinical characteristics. It is mainly based on certain symptoms or signs being associated with certain diseases or conditions, not necessarily involving the more cognitive processing involved in a differential diagnosis.
This may be the primary method used in cases where diseases are "obvious", or the provider's experience may enable him or her to recognize the condition quickly. Theoretically, a certain pattern of signs or symptoms can be directly associated with a certain therapy, even without a definite decision regarding what is the actual disease, but such a compromise carries a substantial risk of missing a diagnosis which actually has a different therapy, so it may be limited to cases where no diagnosis can be made.
Diagnostic criteria
The term diagnostic criteria designates the specific combination of signs and symptoms, and test results that the clinician uses to attempt to determine the correct diagnosis.
Some examples of diagnostic criteria, also known as clinical case definitions, are:
Amsterdam criteria for hereditary nonpolyposis colorectal cancer
McDonald criteria for multiple sclerosis
ACR criteria for systemic lupus erythematosus
Centor criteria for strep throat
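As an illustration of how such criteria reduce to a simple checklist, here is a hedged sketch of the classic four-item Centor score for strep throat (the age-adjusted McIsaac variant is omitted; illustrative only, not clinical guidance):

```python
def centor_score(tonsillar_exudate: bool, tender_cervical_nodes: bool,
                 fever_history: bool, absence_of_cough: bool) -> int:
    """Classic four-item Centor score: one point per criterion present."""
    return sum([tonsillar_exudate, tender_cervical_nodes,
                fever_history, absence_of_cough])

# A patient with fever and tonsillar exudate who also has a cough
# scores 2, a range where guidelines typically suggest testing first.
print(centor_score(tonsillar_exudate=True, tender_cervical_nodes=False,
                   fever_history=True, absence_of_cough=False))
```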
Clinical decision support system
Clinical decision support systems are interactive computer programs designed to assist health professionals with decision-making tasks. The clinician interacts with the software, utilizing both the clinician's knowledge and the software to make a better analysis of the patient's data than either human or software could make on their own. Typically the system makes suggestions for the clinician to look through, and the clinician picks useful information and removes erroneous suggestions. Some programs attempt to do this by replacing the clinician, such as reading the output of a heart monitor. Such automated processes are usually deemed a "device" by the FDA and require regulatory approval. In contrast, clinical decision support systems that "support" but do not replace the clinician are deemed to be "Augmented Intelligence" if they meet the FDA criteria that (1) they reveal the underlying data, (2) they reveal the underlying logic, and (3) they leave the clinician in charge to shape and make the decision.
Other diagnostic procedure methods
Other methods that can be used in performing a diagnostic procedure include:
Usage of medical algorithms
An "exhaustive method", in which every possible question is asked and all possible data is collected.
Adverse effects
Diagnosis problems are the dominant cause of medical malpractice payments, accounting for 35% of total payments in a study of 25 years of data and 350,000 claims.
Overdiagnosis
Overdiagnosis is the diagnosis of "disease" that will never cause symptoms or death during a patient's lifetime. It is a problem because it turns people into patients unnecessarily and because it can lead to economic waste (overutilization) and treatments that may cause harm. Overdiagnosis occurs when a disease is diagnosed correctly, but the diagnosis is irrelevant. A correct diagnosis may be irrelevant because treatment for the disease is not available, not needed, or not wanted.
Errors
Most people will experience at least one diagnostic error in their lifetime, according to a 2015 report by the National Academies of Sciences, Engineering, and Medicine.
Causes and factors of error in diagnosis are:
the manifestations of disease are not sufficiently noticeable
a disease is omitted from consideration
too much significance is given to some aspect of the diagnosis
the condition is a rare disease with symptoms suggestive of many other conditions
the condition has a rare presentation
Lag time
When making a medical diagnosis, a lag time is the delay until a step towards diagnosis of a disease or condition is made. The main types of lag time (computed in the sketch after this list) are:
Onset-to-medical encounter lag time, the time from onset of symptoms until visiting a health care provider
Encounter-to-diagnosis lag time, the time from first medical encounter to diagnosis
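Both lag times are simple differences between time-stamped events; a minimal sketch with hypothetical dates:

```python
from datetime import date

# Hypothetical milestones in one patient's diagnostic course.
symptom_onset   = date(2023, 1, 5)
first_encounter = date(2023, 2, 20)
diagnosis_made  = date(2023, 4, 2)

# Onset-to-medical-encounter and encounter-to-diagnosis lag times:
print((first_encounter - symptom_onset).days, "days to first encounter")
print((diagnosis_made - first_encounter).days, "days to diagnosis")
```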
Lag time due to delays in reading x-rays has been cited as a major challenge in care delivery. The Department of Health and Human Services has reportedly found that interpretation of x-rays is rarely available to emergency room physicians prior to patient discharge.
Long lag times are often called a "diagnostic odyssey".
History
The first recorded examples of medical diagnosis are found in the writings of Imhotep (2630–2611 BC) in ancient Egypt (the Edwin Smith Papyrus). A Babylonian medical textbook, the Diagnostic Handbook written by Esagil-kin-apli (fl. 1069–1046 BC), introduced the use of empiricism, logic and rationality in the diagnosis of an illness or disease. Traditional Chinese Medicine, as described in the Yellow Emperor's Inner Canon or Huangdi Neijing, specified four diagnostic methods: inspection, auscultation-olfaction, inquiry and palpation. Hippocrates was known to make diagnoses by tasting his patients' urine and smelling their sweat.
Word
Medical diagnosis or the actual process of making a diagnosis is a cognitive process. A clinician uses several sources of data and puts the pieces of the puzzle together to make a diagnostic impression. The initial diagnostic impression can be a broad term describing a category of diseases instead of a specific disease or condition. After the initial diagnostic impression, the clinician obtains follow up tests and procedures to get more data to support or reject the original diagnosis and will attempt to narrow it down to a more specific level. Diagnostic procedures are the specific tools that the clinicians use to narrow the diagnostic possibilities.
The plural of diagnosis is diagnoses. The verb is to diagnose, and a person who diagnoses is called a diagnostician.
Etymology
The word diagnosis is derived through Latin from the Greek word διάγνωσις (diágnōsis) from διαγιγνώσκειν (diagignṓskein), meaning "to discern, distinguish".
Society and culture
Social context
Diagnosis can take many forms. It might be a matter of naming the disease, lesion, dysfunction or disability. It might be a management-naming or prognosis-naming exercise. It may indicate either degree of abnormality on a continuum or kind of abnormality in a classification. It is influenced by non-medical factors such as power, ethics and financial incentives for patient or doctor. It can be a brief summation or an extensive formulation, even taking the form of a story or metaphor. It might be a means of communication such as a computer code through which it triggers payment, prescription, notification, information or advice. It might be pathogenic or salutogenic. It is generally uncertain and provisional.
Once a diagnostic opinion has been reached, the provider is able to propose a management plan, which will include treatment as well as plans for follow-up. From this point on, in addition to treating the patient's condition, the provider can educate the patient about the etiology, progression, prognosis, other outcomes, and possible treatments of her or his ailments, as well as providing advice for maintaining health.
A treatment plan is proposed which may include therapy and follow-up consultations and tests to monitor the condition and the progress of the treatment, if needed, usually according to the medical guidelines provided by the medical field on the treatment of the particular illness.
Relevant information should be added to the medical record of the patient.
A failure to respond to treatments that would normally work may indicate a need for review of the diagnosis.
Nancy McWilliams identifies five reasons that determine the necessity for diagnosis:
diagnosis for treatment planning;
the prognostic information it contains;
protecting the interests of patients;
a diagnosis might help the therapist to empathize with the patient;
a diagnosis might reduce the likelihood that some fearful patients will forgo treatment.
Types
Sub-types of diagnoses include:
Clinical diagnosis
A diagnosis made on the basis of medical signs and reported symptoms, rather than diagnostic tests
Laboratory diagnosis
A diagnosis based significantly on laboratory reports or test results, rather than the physical examination of the patient. For instance, a proper diagnosis of infectious diseases usually requires both an examination of signs and symptoms, as well as laboratory test results and characteristics of the pathogen involved.
Radiology diagnosis
A diagnosis based primarily on the results from medical imaging studies. Greenstick fractures are common radiological diagnoses.
Electrography diagnosis
A diagnosis based on measurement and recording of electrophysiologic activity.
Endoscopy diagnosis
A diagnosis based on endoscopic inspection and observation of the interior of a hollow organ or cavity of the body.
Tissue diagnosis
A diagnosis based on the macroscopic, microscopic, and molecular examination of tissues such as biopsies or whole organs. For example, a definitive diagnosis of cancer is made via tissue examination by a pathologist.
Principal diagnosis
The single medical diagnosis that is most relevant to the patient's chief complaint or need for treatment. Many patients have additional diagnoses.
Admitting diagnosis
The diagnosis given as the reason why the patient was admitted to the hospital; it may differ from the actual problem or from the discharge diagnoses, which are the diagnoses recorded when the patient is discharged from the hospital.
Differential diagnosis
A process of identifying all of the possible diagnoses that could be connected to the signs, symptoms, and lab findings, and then ruling out diagnoses until a final determination can be made.
Diagnostic criteria
Designates the combination of signs, symptoms, and test results that the clinician uses to attempt to determine the correct diagnosis. Diagnostic criteria are standards, normally published by international committees, designed to offer the best possible sensitivity and specificity with respect to the presence of a condition, given the state-of-the-art technology.
Prenatal diagnosis
Diagnostic work done before birth
Diagnosis of exclusion
A medical condition whose presence cannot be established with complete confidence from history, examination or testing. Diagnosis is therefore by elimination of all other reasonable possibilities.
Dual diagnosis
The diagnosis of two related, but separate, medical conditions or comorbidities. The term historically referred almost always to a diagnosis of a serious mental illness together with a substance use disorder; however, the increasing prevalence of genetic testing has revealed many cases of patients with multiple concomitant genetic disorders.
Self-diagnosis
The diagnosis or identification of a medical condition in oneself. Self-diagnosis is very common.
Remote diagnosis
A type of telemedicine that diagnoses a patient without being physically in the same room as the patient.
Nursing diagnosis
Rather than focusing on biological processes, a nursing diagnosis identifies people's responses to situations in their lives, such as a readiness to change or a willingness to accept assistance.
Computer-aided diagnosis
Providing symptoms allows the computer to identify the problem and diagnose the user to the best of its ability. Health screening begins by identifying the part of the body where the symptoms are located; the computer cross-references a database for the corresponding disease and presents a diagnosis (a minimal sketch of this kind of lookup appears after this list).
Overdiagnosis
The diagnosis of "disease" that will never cause symptoms, distress, or death during a patient's lifetime
Wastebasket diagnosis
A vague, or even completely fake, medical or psychiatric label given to the patient or to the medical records department for essentially non-medical reasons, such as to reassure the patient by providing an official-sounding label, to make the provider look effective, or to obtain approval for treatment. This term is also used as a derogatory label for disputed, poorly described, overused, or questionably classified diagnoses, such as pouchitis and senility, or to dismiss diagnoses that amount to overmedicalization, such as the labeling of normal responses to physical hunger as reactive hypoglycemia.
Retrospective diagnosis
The labeling of an illness in a historical figure or specific historical event using modern knowledge, methods and disease classifications.
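As flagged under computer-aided diagnosis above, here is a toy Python sketch of a symptom-to-condition lookup; the knowledge base and scoring are illustrative placeholders, not real clinical data or a real system:

```python
# A toy symptom-to-condition lookup. The "knowledge base" and scoring are
# illustrative placeholders, not real clinical data or a real algorithm.
KNOWLEDGE_BASE = {
    "influenza": {"fever", "cough", "muscle aches"},
    "strep throat": {"fever", "sore throat", "swollen lymph nodes"},
    "common cold": {"cough", "sore throat", "runny nose"},
}

def rank_conditions(reported_symptoms):
    """Rank candidate conditions by how many reported symptoms they match."""
    reported = set(reported_symptoms)
    scores = {
        condition: len(reported & symptoms)
        for condition, symptoms in KNOWLEDGE_BASE.items()
    }
    # Highest overlap first; conditions with no matching symptom are dropped.
    return sorted(
        ((c, s) for c, s in scores.items() if s > 0),
        key=lambda pair: pair[1],
        reverse=True,
    )

print(rank_conditions(["fever", "sore throat"]))
# [('strep throat', 2), ('influenza', 1), ('common cold', 1)]
```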
| Biology and health sciences | Medical procedures | null |
9949565 | https://en.wikipedia.org/wiki/Medical%20microbiology | Medical microbiology | Medical microbiology, the large subset of microbiology that is applied to medicine, is a branch of medical science concerned with the prevention, diagnosis and treatment of infectious diseases. In addition, this field of science studies various clinical applications of microbes for the improvement of health. There are four kinds of microorganisms that cause infectious disease: bacteria, fungi, parasites and viruses, and one type of infectious protein, called a prion.
A medical microbiologist studies the characteristics of pathogens, their modes of transmission, and their mechanisms of infection and growth. Qualification as a clinical/medical microbiologist in a hospital or medical research centre generally requires a bachelor's degree, while some countries require a master's in microbiology along with a Ph.D. in one of the life sciences (biochemistry, microbiology, biotechnology, genetics, etc.). Medical microbiologists often serve as consultants for physicians, providing identification of pathogens and suggesting treatment options. Using this information, a treatment can be devised.
Other tasks may include the identification of potential health risks to the community or monitoring the evolution of potentially virulent or resistant strains of microbes, educating the community and assisting in the design of health practices. They may also assist in preventing or controlling epidemics and outbreaks of disease.
Not all medical microbiologists study microbial pathology; some study common, non-pathogenic species to determine whether their properties can be used to develop antibiotics or other treatment methods.
Epidemiology, the study of the patterns, causes, and effects of health and disease conditions in populations, is an important part of medical microbiology, although the clinical aspect of the field primarily focuses on the presence and growth of microbial infections in individuals, their effects on the human body, and the methods of treating those infections. In this respect the entire field, as an applied science, can be conceptually subdivided into academic and clinical sub-specialties, although in reality there is a fluid continuum between public health microbiology and clinical microbiology, just as the state of the art in clinical laboratories depends on continual improvements in academic medicine and research laboratories.
History
In 1676, Anton van Leeuwenhoek observed bacteria and other microorganisms, using a single-lens microscope of his own design.
In 1796, Edward Jenner developed a method using cowpox to successfully immunize a child against smallpox. The same principles are used for developing vaccines today.
Following on from this, in 1857 Louis Pasteur designed vaccines against several diseases such as anthrax, fowl cholera and rabies, and also developed pasteurization for food preservation.
In 1867 Joseph Lister introduced antiseptic surgery, for which he is considered its father. By sterilizing instruments with diluted carbolic acid and using it to clean wounds, he reduced post-operative infections, making surgery safer for patients.
In the years between 1876 and 1884 Robert Koch provided much insight into infectious diseases. He was one of the first scientists to focus on the isolation of bacteria in pure culture. This gave rise to the germ theory of disease, the idea that a specific microorganism is responsible for a specific disease. He developed a series of criteria around this that have become known as Koch's postulates.
A major milestone in medical microbiology is the Gram stain. In 1884 Hans Christian Gram developed the method of staining bacteria to make them more visible and differentiated under a microscope. This technique is widely used today.
In 1910 Paul Ehrlich tested multiple combinations of arsenic-based chemicals on rabbits infected with syphilis and found that arsphenamine was effective against the syphilis spirochete. Arsphenamine was made commercially available in 1910 under the name Salvarsan.
In 1929 Alexander Fleming developed one of the most commonly used antibiotic substances both at the time and now: penicillin.
In 1939 Gerhard Domagk received the Nobel Prize in Physiology or Medicine for the discovery of the first sulfa drug: he had found that Prontosil red protected mice from pathogenic streptococci and staphylococci without toxicity.
DNA sequencing, a method developed by Walter Gilbert and Frederick Sanger in 1977, caused rapid change in the development of vaccines, medical treatments and diagnostic methods. Examples include synthetic insulin, produced in 1979 using recombinant DNA, and the first genetically engineered vaccine, created in 1986 for hepatitis B.
In 1995 a team at The Institute for Genomic Research sequenced the first bacterial genome, that of Haemophilus influenzae. A few months later, the first eukaryotic genome was completed. This would prove invaluable for diagnostic techniques.
In 2007, a team at the Danish food company Danisco identified the purpose of CRISPR-Cas systems as adaptive immunity to phages. The system was then quickly found to be applicable to genome editing through its ability to generate double-strand breaks. A patient with sickle cell disease became, in July 2019, the first person to be treated for a genetic disorder with CRISPR.
Commonly treated infectious diseases
Bacterial
Streptococcal pharyngitis
Chlamydia
Typhoid fever
Tuberculosis
Viral
Rotavirus
Hepatitis C
Human papillomavirus (HPV)
Parasitic
Malaria
Giardia lamblia
Toxoplasma gondii
Fungal
Candida
Histoplasmosis
Dandruff
Causes and transmission of infectious diseases
Infections may be caused by bacteria, viruses, fungi, and parasites. The pathogen that causes the disease may be exogenous (acquired from an external source; environmental, animal or other people, e.g. Influenza) or endogenous (from normal flora e.g. Candidiasis).
The site at which a microbe enters the body is referred to as the portal of entry. These include the respiratory tract, gastrointestinal tract, genitourinary tract, skin, and mucous membranes. The portal of entry for a specific microbe is normally dependent on how it travels from its natural habitat to the host.
There are various ways in which disease can be transmitted between individuals.
These include:
Direct contact - Touching an infected host, including sexual contact
Indirect contact - Touching a contaminated surface
Droplet contact - Coughing or sneezing
Fecal–oral route - Ingesting contaminated food or water sources
Airborne transmission - Pathogen-carrying spores
Vector transmission - An organism that does not cause disease itself but transmits infection by conveying pathogens from one host to another
Fomite transmission - An inanimate object or substance capable of carrying infectious germs or parasites
Environmental - Hospital-acquired infection (Nosocomial infections)
Like other pathogens, viruses use these methods of transmission to enter the body, but viruses differ in that they must also enter the host's actual cells. Once the virus has gained access to the host's cells, its genetic material (RNA or DNA) must be introduced into the cell. Replication varies greatly between viruses and depends on the types of genes involved. Most DNA viruses assemble in the nucleus, while most RNA viruses develop solely in the cytoplasm.
The mechanisms for infection, proliferation, and persistence of a virus in cells of the host are crucial for its survival. For example, some viruses such as measles employ a strategy whereby they must spread to a series of hosts. In these forms of viral infection, the illness is often resolved by the body's own immune response, and therefore the virus is required to disperse to new hosts before it is destroyed by immunological resistance or host death. In contrast, some infectious agents, such as the feline leukemia virus, are able to withstand immune responses and are capable of achieving long-term residence within an individual host, whilst also retaining the ability to spread into successive hosts.
Diagnostic tests
Identification of an infectious agent for a minor illness can be as simple as observing the clinical presentation, as with gastrointestinal disease and skin infections. In order to make an educated estimate of which microbe could be causing the disease, epidemiological factors need to be considered, such as the patient's likelihood of exposure to the suspected organism and the presence and prevalence of a microbial strain in a community.
Diagnosis of infectious disease is nearly always initiated by consulting the patient's medical history and conducting a physical examination. More detailed identification techniques involve microbial culture, microscopy, biochemical tests and genotyping. Other less common techniques (such as X-rays, CAT scans, PET scans or NMR) are used to produce images of internal abnormalities resulting from the growth of an infectious agent.
Microbial culture
Microbiological culture is the primary method used for isolating infectious agents for study in the laboratory. Tissue or fluid samples are tested for the presence of a specific pathogen, which is determined by growth in a selective or differential medium.
The three main types of media used for testing are:
Solid culture: A solid surface is created using a mixture of nutrients, salts and agar. A single microbe on an agar plate can then grow into colonies (clones where cells are identical to each other) containing thousands of cells. These are primarily used to culture bacteria and fungi.
Liquid culture: Cells are grown inside a liquid media. Microbial growth is determined by the time taken for the liquid to form a colloidal suspension. This technique is used for diagnosing parasites and detecting mycobacteria.
Cell culture: Human or animal cell cultures are infected with the microbe of interest. These cultures are then observed to determine the effect the microbe has on the cells. This technique is used for identifying viruses.
Microscopy
Culture techniques will often use a microscopic examination to help in the identification of the microbe. Instruments such as compound light microscopes can be used to assess critical aspects of the organism. This can be performed immediately after the sample is taken from the patient and is used in conjunction with biochemical staining techniques, allowing for resolution of cellular features. Electron microscopes and fluorescence microscopes are also used for observing microbes in greater detail for research. The two main types of electron microscopy are scanning electron microscopy and transmission electron microscopy. Transmission electron microscopy passes electrons through a thin cross-section of the cell of interest, and then redirects the electrons onto a fluorescent screen. This method is useful for looking at the inside of cells and the structures within, especially cell walls and membranes. Scanning electron microscopy reads the electrons that are reflected off the surface of the cells. A 3-dimensional image is then made which shows the size and exterior structure of the cells. Both techniques give more detailed information about the structure of microbes, which makes them useful in many medical fields, such as diagnostics and biopsies of many body parts, hygiene, and virology. They provide critical information about the structure of pathogens, which allows physicians to treat them with more knowledge.
Biochemical tests
Fast and relatively simple biochemical tests can be used to identify infectious agents. For bacterial identification, the use of metabolic or enzymatic characteristics is common, because bacteria ferment carbohydrates in patterns characteristic of their genus and species. Acids, alcohols and gases are usually detected in these tests when bacteria are grown in selective liquid or solid media, as mentioned above. In order to perform these tests en masse, automated machines are used. These machines perform multiple biochemical tests simultaneously, using cards with several wells containing different dehydrated chemicals. The microbe of interest will react with each chemical in a specific way, aiding in its identification.
Serological methods are highly sensitive, specific and often extremely rapid laboratory tests used to identify different types of microorganisms. The tests are based upon the ability of an antibody to bind specifically to an antigen. The antigen (usually a protein or carbohydrate made by an infectious agent) is bound by the antibody, allowing this type of test to be used for organisms other than bacteria. This binding then sets off a chain of events that can be easily and definitively observed, depending on the test. More complex serological techniques are known as immunoassays. Using a similar basis as described above, immunoassays can detect or measure antigens from either infectious agents or the proteins generated by an infected host in response to the infection.
Polymerase chain reaction
Polymerase chain reaction (PCR) assays are the most commonly used molecular technique to detect and study microbes. Compared to other methods, sequencing and analysis are definitive, reliable, accurate, and fast. Today, quantitative PCR is the primary technique used, as it provides faster data than a standard PCR assay. For instance, traditional PCR techniques require the use of gel electrophoresis to visualize amplified DNA molecules after the reaction has finished. Quantitative PCR does not require this, as the detection system uses fluorescence and probes to detect the DNA molecules as they are being amplified. In addition, quantitative PCR also removes the risk of contamination that can occur during standard PCR procedures (carrying over PCR product into subsequent PCRs). Another advantage of using PCR to detect and study microbes is that the DNA sequences of newly discovered infectious microbes or strains can be compared to those already listed in databases, which in turn helps to increase understanding of which organism is causing the infectious disease and thus what possible methods of treatment could be used. This technique is the current standard for detecting viral infections such as AIDS and hepatitis.
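A minimal sketch of the idealized arithmetic behind quantitative PCR, assuming perfect doubling of the target each cycle; the detection threshold below is an arbitrary illustrative value:

```python
import math

# Idealized qPCR arithmetic: with perfect efficiency the target doubles
# every cycle, N(c) = N0 * 2**c, so the cycle at which fluorescence crosses
# a detection threshold (the Ct value) reflects the starting copy number N0.

DETECTION_THRESHOLD = 1e10  # arbitrary illustrative copy number

def cycles_to_threshold(n0: float) -> float:
    """Cycles of perfect doubling needed for n0 copies to reach threshold."""
    return math.log2(DETECTION_THRESHOLD / n0)

# A sample with 1,000 starting copies crosses the threshold ~10 cycles
# earlier than a single-copy sample, since 2**10 = 1024.
print(f"{cycles_to_threshold(1):.1f} cycles")     # ~33.2
print(f"{cycles_to_threshold(1000):.1f} cycles")  # ~23.3
```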
Treatments
Once an infection has been diagnosed and identified, suitable treatment options must be assessed by the physician and consulting medical microbiologists. Some infections can be dealt with by the body's own immune system, but more serious infections are treated with antimicrobial drugs. Bacterial infections are treated with antibacterials (often called antibiotics) whereas fungal and viral infections are treated with antifungals and antivirals respectively. A broad class of drugs known as antiparasitics are used to treat parasitic diseases.
Medical microbiologists often make treatment recommendations to the patient's physician based on the strain of microbe and its antibiotic resistances, the site of infection, the potential toxicity of antimicrobial drugs and any drug allergies the patient has.
In addition to drugs being specific to a certain kind of organism (bacteria, fungi, etc.), some drugs are specific to a certain genus or species of organism, and will not work on other organisms. Because of this specificity, medical microbiologists must consider the effectiveness of certain antimicrobial drugs when making recommendations. Additionally, strains of an organism may be resistant to a certain drug or class of drug, even when it is typically effective against the species. These strains, termed resistant strains, present a serious public health concern of growing importance to the medical industry as the spread of antibiotic resistance worsens. Antimicrobial resistance is an increasingly problematic issue that leads to millions of deaths every year.
Whilst drug resistance typically involves microbes chemically inactivating an antimicrobial drug or a cell mechanically stopping the uptake of a drug, another form of drug resistance can arise from the formation of biofilms. Some bacteria are able to form biofilms by adhering to surfaces on implanted devices such as catheters and prostheses and creating an extracellular matrix for other cells to adhere to. This provides them with a stable environment from which the bacteria can disperse and infect other parts of the host. Additionally, the extracellular matrix and dense outer layer of bacterial cells can protect the inner bacteria cells from antimicrobial drugs.
Phage therapy is a technique that was discovered before antibiotics but fell by the wayside as antibiotics became predominant. It is now being considered as a potential solution to increasing antimicrobial resistance. Bacteriophages, viruses that only infect bacteria, can specifically target the bacteria of interest and inject their genome. This process makes the bacterium halt its own production in order to make more phages, and this continues until the bacterium lyses and releases the phages into the surrounding environment. Phage therapy does not kill the microbiota, since it is specific, and it can help those with antibiotic allergies. Some drawbacks are that it is a time-intensive process, since the specific bacterium needs to be identified. It also does not currently have the body of research supporting its effects and safety that antibiotics do. Bacteria can also eventually become resistant, through systems like the CRISPR-Cas system. Many clinical trials have been promising, though, showing that it could potentially help with the antimicrobial resistance problem. It can also be used in conjunction with antibiotics for a cumulative effect.
Medical microbiology is not only about diagnosing and treating disease, it also involves the study of beneficial microbes. Microbes have been shown to be helpful in combating infectious disease and promoting health. Treatments can be developed from microbes, as demonstrated by Alexander Fleming's discovery of penicillin as well as the development of new antibiotics from the bacterial genus Streptomyces among many others. Not only are microorganisms a source of antibiotics but some may also act as probiotics to provide health benefits to the host, such as providing better gastrointestinal health or inhibiting pathogens.
| Biology and health sciences | Fields of medicine | Health |
9951602 | https://en.wikipedia.org/wiki/Earth%20mass | Earth mass | An Earth mass (denoted as M🜨, M♁ or ME, where 🜨 and ♁ are the astronomical symbols for Earth) is a unit of mass equal to the mass of the planet Earth. The current best estimate for the mass of Earth is 5.9722×10²⁴ kg, with a relative uncertainty of 10⁻⁴. It is equivalent to an average density of 5.515 g/cm³. Using the nearest metric prefix, the Earth mass is approximately six ronnagrams, or 6.0 Rg.
The Earth mass is a standard unit of mass in astronomy that is used to indicate the masses of other planets, including rocky terrestrial planets and exoplanets. One Solar mass is close to 333,000 Earth masses. The Earth mass excludes the mass of the Moon. The mass of the Moon is about 1.2% of that of the Earth, so that the mass of the Earth–Moon system is close to 6.0457×10²⁴ kg.
Most of the mass is accounted for by iron and oxygen (c. 32% each), magnesium and silicon (c. 15% each), calcium, aluminium and nickel (c. 1.5% each).
Precise measurement of the Earth mass is difficult, as it is equivalent to measuring the gravitational constant, which is the fundamental physical constant known with least accuracy, due to the relative weakness of the gravitational force. The mass of the Earth was first measured with any accuracy (within about 20% of the correct value) in the Schiehallion experiment in the 1770s, and within 1% of the modern value in the Cavendish experiment of 1798.
Unit of mass in astronomy
The mass of Earth is estimated to be:
M🜨 = (5.9722 ± 0.0006) × 10²⁴ kg,
which can be expressed in terms of solar mass as:
M🜨 ≈ 3.003 × 10⁻⁶ M☉.
The ratio of Earth mass to lunar mass has been measured to great accuracy. The current best estimate is M🜨/ML = 81.300568.
The product of M🜨 and the universal gravitational constant (G) is known as the geocentric gravitational constant (GM🜨) and equals (398,600.4418 ± 0.0008) km³ s⁻². It is determined using laser ranging data from Earth-orbiting satellites, such as LAGEOS-1. GM🜨 can also be calculated by observing the motion of the Moon or the period of a pendulum at various elevations, although these methods are less precise than observations of artificial satellites.
The relative uncertainty of GM🜨 is just 2×10⁻⁹, considerably smaller than the relative uncertainty for M🜨 itself. M🜨 can be found only by dividing GM🜨 by G, and G is known only to a relative uncertainty of about 10⁻⁴, so M🜨 will have the same uncertainty at best. For this reason and others, astronomers prefer to use GM🜨, or mass ratios (masses expressed in units of Earth mass or Solar mass), rather than mass in kilograms when referencing and comparing planetary objects.
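A minimal sketch of this division, using the CODATA 2018 value of G and a published value of GM🜨; the uncertainty figures are indicative only:

```python
# Earth mass as the quotient of the precisely known geocentric gravitational
# constant GM and the much less precisely known gravitational constant G.
GM = 3.986004418e14   # m^3 s^-2 (geocentric gravitational constant)
G = 6.67430e-11       # m^3 kg^-1 s^-2 (CODATA 2018)

M_earth = GM / G
print(f"M_earth = {M_earth:.4e} kg")  # ~5.9722e24 kg

# The quotient inherits the larger of the two relative uncertainties,
# which is G's by several orders of magnitude.
rel_unc_GM = 2e-9     # order of magnitude, as cited above
rel_unc_G = 2.2e-5    # CODATA 2018 relative standard uncertainty of G
print(f"relative uncertainty ~ {max(rel_unc_GM, rel_unc_G):.1e}")
```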
Composition
Earth's density varies considerably, between less than 2.7 g/cm³ in the upper crust and as much as 13 g/cm³ in the inner core. The Earth's core accounts for 15% of Earth's volume but more than 30% of the mass, the mantle for 84% of the volume and close to 70% of the mass, while the crust accounts for less than 1% of the mass. About 90% of the mass of the Earth is composed of the iron–nickel alloy (95% iron) in the core (30%), and the silicon dioxides (c. 33%) and magnesium oxide (c. 27%) in the mantle and crust. Minor contributions are from iron(II) oxide (5%), aluminium oxide (3%) and calcium oxide (2%), besides numerous trace elements (in elementary terms: iron and oxygen c. 32% each, magnesium and silicon c. 15% each, calcium, aluminium and nickel c. 1.5% each). Carbon accounts for 0.03%, water for 0.02%, and the atmosphere for about one part per million.
History of measurement
The mass of Earth is measured indirectly by determining other quantities such as Earth's density, gravity, or gravitational constant. The first measurement in the 1770s Schiehallion experiment resulted in a value about 20% too low. The Cavendish experiment of 1798 found the correct value within 1%. Uncertainty was reduced to about 0.2% by the 1890s, to 0.1% by 1930.
The figure of the Earth has been known to better than four significant digits since the 1960s (WGS66), so that since that time, the uncertainty of the Earth mass is determined essentially by the uncertainty in measuring the gravitational constant. Relative uncertainty was cited at 0.06% in the 1970s, and at 0.01% (10⁻⁴) by the 2000s. The current relative uncertainty of 10⁻⁴ amounts to 6×10²⁰ kg in absolute terms, of the order of the mass of a minor planet (70% of the mass of Ceres).
Early estimates
Before the direct measurement of the gravitational constant, estimates of the Earth mass were limited to estimating Earth's mean density from observation of the crust and estimates of Earth's volume. Estimates of the volume of the Earth in the 17th century were based on a circumference estimate of 60 miles to the degree of latitude, corresponding to a radius of 5,500 km (86% of the Earth's actual radius of about 6,371 km), resulting in an estimated volume about one third smaller than the correct value.
The average density of the Earth was not accurately known. Earth was assumed to consist either mostly of water (Neptunism) or mostly of igneous rock (Plutonism), both suggesting average densities far too low, consistent with a total mass of the order of 10²⁴ kg. Isaac Newton estimated, without access to reliable measurement, that the density of Earth would be five or six times as great as the density of water, which is surprisingly accurate (the modern value is 5.515). Newton under-estimated the Earth's volume by about 30%, so that his estimate would be roughly equivalent to 4×10²⁴ kg.
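A rough reconstruction of that implied estimate, under the stated assumptions (density taken as 5.5 times water, volume 30% below the modern value):

```python
# Rough reconstruction of Newton's implied estimate (assumed round numbers).
density_water = 1000.0                  # kg/m^3
newton_density = 5.5 * density_water    # "five or six times" water; take 5.5
true_volume = 1.08e21                   # m^3, modern volume of the Earth
newton_volume = 0.7 * true_volume       # volume under-estimated by ~30%

mass_estimate = newton_density * newton_volume
print(f"{mass_estimate:.1e} kg")  # ~4.2e24 kg, vs. the modern 5.97e24 kg
```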
In the 18th century, knowledge of Newton's law of universal gravitation permitted indirect estimates on the mean density of the Earth, via estimates of (what in modern terminology is known as) the gravitational constant. Early estimates on the mean density of the Earth were made by observing the slight deflection of a pendulum near a mountain, as in the Schiehallion experiment. Newton considered the experiment in Principia, but pessimistically concluded that the effect would be too small to be measurable.
An expedition from 1737 to 1740 by Pierre Bouguer and Charles Marie de La Condamine attempted to determine the density of Earth by measuring the period of a pendulum (and therefore the strength of gravity) as a function of elevation. The experiments were carried out in Ecuador and Peru, on Pichincha Volcano and mount Chimborazo. Bouguer wrote in a 1749 paper that they had been able to detect a deflection of 8 seconds of arc; the accuracy was not enough for a definite estimate of the mean density of the Earth, but Bouguer stated that it was at least sufficient to prove that the Earth was not hollow.
Schiehallion experiment
That a further attempt should be made on the experiment was proposed to the Royal Society in 1772 by Nevil Maskelyne, Astronomer Royal. He suggested that the experiment would "do honour to the nation where it was made" and proposed Whernside in Yorkshire, or the Blencathra-Skiddaw massif in Cumberland as suitable targets. The Royal Society formed the Committee of Attraction to consider the matter, appointing Maskelyne, Joseph Banks and Benjamin Franklin amongst its members. The Committee despatched the astronomer and surveyor Charles Mason to find a suitable mountain.
After a lengthy search over the summer of 1773, Mason reported that the best candidate was Schiehallion, a peak in the central Scottish Highlands. The mountain stood in isolation from any nearby hills, which would reduce their gravitational influence, and its symmetrical east–west ridge would simplify the calculations. Its steep northern and southern slopes would allow the experiment to be sited close to its centre of mass, maximising the deflection effect. Nevil Maskelyne, Charles Hutton and Reuben Burrow performed the experiment, completed by 1776. Hutton (1778) reported that the mean density of the Earth was about 9/5 that of Schiehallion mountain. This corresponds to a mean density about 4.5 times that of water (i.e., about 4.5 g/cm³), about 20% below the modern value, but still significantly larger than the mean density of normal rock, suggesting for the first time that the interior of the Earth might be substantially composed of metal. Hutton estimated this metallic portion to occupy some 65% of the diameter of the Earth (modern value 55%). With a value for the mean density of the Earth, Hutton was able to set some values to Jérôme Lalande's planetary tables, which had previously only been able to express the densities of the major Solar System objects in relative terms.
Cavendish experiment
Henry Cavendish (1798) was the first to attempt to measure the gravitational attraction between two bodies directly in the laboratory. Earth's mass could then be found by combining two equations: Newton's second law and Newton's law of universal gravitation.
In modern notation, the mass of the Earth is derived from the gravitational constant and the mean Earth radius R🜨 by
M🜨 = g·R🜨²/G,
where the gravity of Earth, "little g", is
g = G·M🜨/R🜨².
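A minimal sketch of this relation with modern round values (the figures below are standard approximations, not Cavendish's own numbers):

```python
# Earth mass from surface gravity and mean radius: M = g * R^2 / G.
g = 9.81         # m/s^2, mean surface gravity
R = 6.371e6      # m, mean Earth radius
G = 6.674e-11    # m^3 kg^-1 s^-2, gravitational constant

M = g * R ** 2 / G
print(f"M = {M:.3e} kg")  # ~5.97e24 kg
```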
Cavendish found a mean density of 5.45 g/cm³, about 1% below the modern value.
19th century
While the mass of the Earth is implied by stating the Earth's radius and density, it was not usual to state the absolute mass explicitly prior to the introduction of scientific notation using powers of 10 in the later 19th century, because the absolute numbers would have been too awkward. Ritchie (1850) gives the mass of the Earth's atmosphere as "11,456,688,186,392,473,000 lbs" (= 5.2×10¹⁸ kg; the modern value is 5.15×10¹⁸ kg) and states that "compared with the weight of the globe this mighty sum dwindles to insignificance".
Absolute figures for the mass of the Earth are cited only beginning in the second half of the 19th century, mostly in popular rather than expert literature. An early such figure was given as "14 septillion pounds" (14 Quadrillionen Pfund) [6.4×10²⁴ kg] in Masius (1859). Beckett (1871) cites the "weight of the earth" as "5842 quintillion tons" [5.9×10²⁴ kg]. The "mass of the earth in gravitational measure" is stated as "9.81996×63709802" in The New Volumes of the Encyclopaedia Britannica (Vol. 25, 1902) with a "logarithm of earth's mass" given as "14.600522" [3.986×10¹⁴]. This is the gravitational parameter in m³·s⁻² (modern value 3.98600×10¹⁴) and not the absolute mass.
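The conversions in brackets can be checked directly; the ton figure assumes British long tons, which the source does not specify:

```python
# Converting the 19th-century figures quoted above to kilograms.
LB_TO_KG = 0.45359237
LONG_TON_TO_KG = 1016.047   # assumption: British long tons

ritchie_atmosphere_lb = 11_456_688_186_392_473_000  # Ritchie (1850)
print(f"{ritchie_atmosphere_lb * LB_TO_KG:.2e} kg")  # ~5.20e18 kg

masius_lb = 14e24  # "14 septillion pounds" (Masius, 1859)
print(f"{masius_lb * LB_TO_KG:.1e} kg")  # ~6.4e24 kg

beckett_tons = 5842e18  # "5842 quintillion tons" (Beckett, 1871)
print(f"{beckett_tons * LONG_TON_TO_KG:.1e} kg")  # ~5.9e24 kg
```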
Experiments involving pendulums continued to be performed in the first half of the 19th century. By the second half of the century, these were outperformed by repetitions of the Cavendish experiment, and the modern value of (and hence, of the Earth mass) is still derived from high-precision repetitions of the Cavendish experiment.
In 1821, Francesco Carlini determined a density value of through measurements made with pendulums in the Milan area. This value was refined in 1827 by Edward Sabine to , and then in 1841 by Carlo Ignazio Giulio to . On the other hand, George Biddell Airy sought to determine ρ by measuring the difference in the period of a pendulum between the surface and the bottom of a mine.
The first tests and experiments took place in Cornwall between 1826 and 1828. The experiment was a failure due to a fire and a flood. Finally, in 1854, Airy got the value by measurements in a coal mine in Harton, Sunderland. Airy's method assumed that the Earth had a spherical stratification. Later, in 1883, the experiments conducted by Robert von Sterneck (1839 to 1910) at different depths in mines of Saxony and Bohemia provided average density values ρ between 5.0 and . This led to the concept of isostasy, which limits the ability to accurately measure ρ, by either the deviation from vertical of a plumb line or using pendulums. Despite the little chance of an accurate estimate of the average density of the Earth in this way, Thomas Corwin Mendenhall in 1880 carried out a gravimetry experiment in Tokyo and at the top of Mount Fuji. The result was .
Modern value
The uncertainty in the modern value for the Earth's mass has been entirely due to the uncertainty in the gravitational constant G since at least the 1960s. G is notoriously difficult to measure, and some high-precision measurements during the 1980s to 2010s have yielded mutually exclusive results. Sagitov (1969), based on the measurement of G by Heyl and Chrzanowski (1942), cited a value of (relative uncertainty ).
Accuracy has improved only slightly since then. Most modern measurements are repetitions of the Cavendish experiment, with results (within standard uncertainty) ranging between 6.672×10⁻¹¹ and 6.676×10⁻¹¹ m³ kg⁻¹ s⁻² (relative uncertainty 3×10⁻⁴) in results reported since the 1980s, although the 2014 CODATA recommended value is close to 6.674×10⁻¹¹ m³ kg⁻¹ s⁻² with a relative uncertainty below 10⁻⁴. The Astronomical Almanac Online as of 2016 recommends a standard uncertainty of 6×10²⁰ kg for Earth mass.
Variation
Earth's mass is variable, subject to both gain and loss due to the accretion of in-falling material, including micrometeorites and cosmic dust, and the loss of hydrogen and helium gas, respectively. The combined effect is a net loss of material, estimated at 5.5×10⁷ kg (55,000 tonnes) per year. This amount is about 10⁻¹⁷ of the total Earth mass. The annual net loss is essentially due to 100,000 tons lost through atmospheric escape, against an average of 45,000 tons gained from in-falling dust and meteorites. This is well within the mass uncertainty of 0.01% (6×10²⁰ kg), so the estimated value of Earth's mass is unaffected by this factor.
Mass loss is due to atmospheric escape of gases. About 95,000 tons of hydrogen per year and 1,600 tons of helium per year are lost through atmospheric escape. The main factor in mass gain is in-falling material; cosmic dust, meteors and the like are the most significant contributors to Earth's increase in mass. The sum of in-falling material averages some 45,000 tons annually, although this can vary significantly; to take an extreme example, the Chicxulub impactor added roughly 900 million times that annual dustfall amount to the Earth's mass in a single event.
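The net budget is simple arithmetic on the figures above:

```python
# Net annual mass budget from the figures above (tons = metric tons).
lost_to_atmospheric_escape = 100_000      # tons per year
gained_from_dust_and_meteorites = 45_000  # tons per year

net_loss_kg = (lost_to_atmospheric_escape - gained_from_dust_and_meteorites) * 1000
earth_mass_kg = 5.972e24

print(f"net loss: {net_loss_kg:.1e} kg per year")  # 5.5e7 kg
print(f"fraction of Earth's mass: {net_loss_kg / earth_mass_kg:.0e}")  # ~9e-18
```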
Additional changes in mass are due to the mass–energy equivalence principle, although these changes are relatively negligible. Mass loss due to the combination of nuclear fission and natural radioactive decay is estimated to amount to 16 tons per year.
An additional loss due to spacecraft on escape trajectories has been estimated at about 65 tons per year since the mid-20th century: Earth lost about 3,473 tons in the initial 53 years of the space age, but the trend is currently decreasing.
| Physical sciences | Mass and weight | Basics and measurement |
9958966 | https://en.wikipedia.org/wiki/Handgun | Handgun | A handgun is a firearm designed to be usable with only one hand. It is distinguished from a long-barreled gun (i.e., carbine, rifle, shotgun, submachine gun, or machine gun), which typically is intended to be held by both hands and braced against the shoulder. Handguns have shorter effective ranges compared to long guns, and are much harder to shoot accurately. While most early handguns were single-shot pistols, the two most common types of handguns used in modern times are revolvers and semi-automatic pistols.
Before commercial mass production, handguns were often considered a badge of office, comparable to a ceremonial sword, as they had limited utility and were more expensive than the long-barreled guns of the era. In 1836, Samuel Colt patented the Colt Paterson, the first practical mass-produced revolver, which was capable of firing five shots in rapid succession and quickly became a popular personal weapon, giving rise to the saying, "God created men, but Colt made them equal."
Definition
The Encyclopædia Britannica defines a handgun as "any firearm small enough to be held in one hand when fired"; while the American Webster's Dictionary defines it as "a firearm (such as a revolver or pistol) designed to be held and fired with one hand".
Among the Anglophone countries, neither the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), which is part of the United States Department of Justice, nor the Government of the United Kingdom, which are in charge of American and British firearms licensing respectively, offers any specific legal definition of a handgun. The ATF, however, does separately define "handgun – pistol" and "handgun – revolver" under its "Terminology & Nomenclature" section, both with the "pistol-type" description of "a weapon originally designed, made, and intended to fire a projectile (bullet) from one or more barrels when held in one hand".
The Canadian Criminal Code defines a handgun as "a firearm that is designed, altered or intended to be aimed and fired by the action of one hand, whether or not it has been redesigned or subsequently altered to be aimed and fired by the action of both hands".
The Australian gun laws, which are based on the National Firearms Agreement (1996) and interpreted and enforced independently by each state or territory, consider a "handgun" a firearm that:
is reasonably capable of being carried or concealed about the person; or
is reasonably capable of being raised and fired by one hand; or
does not exceed 65 cm in length measured parallel to the barrel.
History
Hand cannons
Firearms originated in China, where gunpowder was first developed. The oldest known bronze-barrel handgun is the Heilongjiang hand cannon, dated to 1288. It is 34 cm (13.4 inches) long without a handle and weighs 3.55 kg (7.83 pounds). The diameter of the powder chamber is 6.6 cm (2.6 inches), while the diameter of the interior at the end of the barrel is 2.5 cm (1.0 inch). The barrel is the longest part of the hand cannon, at 17.5 cm (6.9 inches) long.
The hand cannon has a bulbous base at the breech called the yaoshi, or gunpowder chamber, where the explosion that propels the projectile occurs. The walls of the powder chamber are noticeably thicker to better withstand the explosive pressure of the gunpowder. The powder chamber also has a touch hole, a small hole for the fuse that ignites the gunpowder. Behind the gunpowder chamber is a socket shaped like a trumpet where the handle of the hand cannon is inserted. The bulbous shape of the base gave the earliest Chinese and Western cannons a vase-like or pear-like appearance, which gradually disappeared when advancements in metallurgical technology made the bulbous base obsolete.
In 1432, the Joseon dynasty under the reign of Sejong the Great introduced the world's first handgun, named se-chongtong (세총통). Se-chongtong has a total length of 13.8 cm, an inner diameter of 0.9 cm, and an outer diameter of 1.4 cm. Se-chongtong is held by a cheolheumja (철흠자, iron tong-handle), which allows a quick barrel change for the next shot, and fires chase-jeon (차세전, a type of standardized arrow of Joseon) with a maximum fatal range of 200 footsteps (≈250 meters). Initially, Joseon considered the gun a failed project due to its short effective range, but se-chongtong quickly saw use after being fielded to the frontier provinces starting in June 1437. Se-chongtong was used both by soldiers of different units and by civilians, including women and children, as a personal defense weapon. The gun was notably used by chetamja (체탐자) spies, whose mission was to infiltrate enemy territory, and by carabiniers, who carried multiple guns and benefited from its compact size.
Matchlocks
The matchlock appeared in Europe in the mid-15th century.
The matchlock was the first mechanism invented to facilitate the firing of a hand-held firearm. The classic European matchlock gun held a burning slow match in a clamp at the end of a small curved lever known as the serpentine. Upon the pulling of a lever (or in later models a trigger) protruding from the bottom of the gun and connected to the serpentine, the clamp dropped down, lowering the smoldering match into the flash pan and igniting the priming powder. The flash from the primer traveled through the touch hole igniting the main charge of propellant in the gun barrel.
On the release of the lever or trigger, the spring-loaded serpentine would move in reverse to clear the pan. For obvious safety reasons, the match would be removed before reloading the gun. Both ends of the match were usually kept alight in case one end should be accidentally extinguished.
Wheellocks
The wheellock was the next major development in firearms technology after the matchlock and the first self-igniting firearm. Its name comes from the rotating steel wheel which generates the ignition. Developed in Europe around 1500, it was used alongside the matchlock.
The wheellock works by spinning a spring-loaded steel wheel against a piece of pyrite to generate intense sparks, which ignite gunpowder in a pan, which flashes through a small touchhole to ignite the main charge in the firearm's barrel. The pyrite is clamped in vise jaws on a spring-loaded arm (or "dog"), which rests on the pan cover. When the trigger is pulled, the pan cover is opened, and the wheel is rotated, with the pyrite pressed into contact.
A close modern analogy of the wheellock mechanism is the operation of a cigarette lighter, where a toothed steel wheel is spun in contact with a piece of sparking material to ignite the liquid or gaseous fuel.
A wheellock firearm had the advantage that it could be instantly readied and fired even with one hand, in contrast to the then-common matchlock firearms, which required an operator to prepare a burning cord of slow match and demanded the operator's full attention and two hands to operate. On the other hand, wheellock mechanisms were complex to make, making them relatively expensive.
Flintlocks
A flintlock is a general term for any firearm that uses a flint-striking ignition mechanism. The term may also apply to a particular form of the mechanism itself, which was introduced in the early 17th century, and rapidly replaced earlier firearm-ignition technologies.
Flintlock pistols were used as self-defense weapons and as military arms. Their effective range was short, and they were frequently used as an adjunct to a sword or cutlass. These pistols were usually smoothbore, although some rifled pistols were produced.
Flintlock pistols came in a variety of sizes and styles which often overlap and are not well defined; many of the names used were applied by collectors and dealers long after the pistols were obsolete. The smallest were less than 15 cm (5.9 inches) long and the largest were over 51 cm (20 inches). From around the beginning of the 1700s the larger pistols got shorter, so that by the late 1700s the largest were closer to 41 cm (16 inches) long. The smallest would fit into a typical pocket or a hand-warming muff and could easily be carried.
The largest sizes would be carried in holsters across a horse's back just ahead of the saddle. In-between sizes included the coat pocket pistol, or coat pistol, which would fit into a large pocket, coach pistols, meant to be carried on or under the seat of a coach in a bag or box, and belt pistols, sometimes equipped with a hook designed to slip over a belt or waistband. Larger pistols were called horse pistols.
A notable mechanical development of the flintlock pistol was the English duelling pistol; it was highly reliable, water resistant, and accurate. External decoration was typically minimal, but the internal works were often finished to a higher degree of craftsmanship than the exterior. Duelling pistols were the size of the horse pistols of the late 1700s, around 41 cm (16 inches) long, and were usually sold in pairs along with accessories in a wooden case with compartments for each piece.
Caplocks
The caplock mechanism or percussion lock was developed in the early 19th century and used a percussion cap struck by the hammer to set off the main charge, rather than using a piece of flint to strike a steel frizzen. They succeeded the flintlock mechanism in firearm technology.
The rudimentary percussion system was developed by Reverend Alexander John Forsyth as a solution to the problem that birds would startle when smoke puffed from the powder pan of his flintlock shotgun, giving them sufficient warning to escape the shot.
His invention of a fulminate-primed firing mechanism deprived the birds of their early warning system, both by avoiding the initial puff of smoke from the flintlock powder pan, as well as shortening the interval between the trigger pull and the shot leaving the muzzle. Forsyth patented his ignition system in 1807. However, it was not until after Forsyth's patents expired that the conventional percussion cap system was developed.
The caplock offered many improvements over the flintlock. The caplock was easier to load, more resistant to weather, and was much more reliable than the flintlock. Many older flintlock weapons were later converted into caplocks so that they could take advantage of this increased reliability.
The caplock mechanism consists of a hammer, similar to the hammer used in a flintlock, and a nipple (sometimes referred to as a "cone"), which holds a small percussion cap. The nipple contains a tube that goes into the barrel. The percussion cap contains a chemical compound called mercury fulminate or fulminate of mercury, whose chemical formula is Hg(CNO)₂. It is made from mercury, nitric acid, and alcohol. When the trigger releases the hammer, it strikes the cap, causing the mercury fulminate to explode. The flames from this explosion travel down the tube in the nipple and enter the barrel, where they ignite the main powder charge.
Revolvers
Percussion era
In 1836, Samuel Colt patented the Colt Paterson, the first practical mass-produced revolver. It uses a revolving cylinder with multiple chambers aligned with a single, stationary barrel. Initially, this 5-shot revolver was produced in .28 caliber, with a .36 caliber model following a year later. As originally designed and produced, no loading lever was included with the revolver; a user had to partially disassemble the revolver to re-load it. Starting in 1839, a reloading lever and a capping window were incorporated into the design, allowing reloading without requiring partial disassembly of the revolver. This loading lever and capping window design change was also incorporated into most Colt Paterson revolvers that had been produced from 1836 until 1839. Unlike later revolvers, a folding trigger was incorporated into the Colt Paterson. The trigger only became visible upon cocking the hammer.
Colt would go on to make a series of improved revolvers. The Colt Walker was a single-action revolver with a revolving cylinder holding six charges of black powder behind six bullets (typically .44 caliber lead balls). It was designed in 1846 as a collaboration between Captain Samuel Hamilton Walker and American firearms inventor Samuel Colt.
The Colt 1851 Navy Revolver is a cap and ball revolver that was designed by Samuel Colt between 1847 and 1850. The six-round .36 caliber Navy revolver was much lighter than the contemporary Colt Dragoon Revolvers developed from the .44 Walker Colt revolvers of 1847, which, given their size and weight, were generally carried in saddle holsters. It is an enlarged version of the .31 caliber Colt Pocket Percussion Revolvers, which evolved from the earlier Baby Dragoon and, like them, is a mechanically improved and simplified descendant of the 1836 Paterson revolver. As the factory designation implied, the Navy revolver was suitably sized for carrying in a belt holster. It became very popular in North America at the time of Western expansion. Colt's aggressive promotions distributed the Navy and his other revolvers across Europe, Asia, and Africa. The .36 caliber (.375–.380 inch) round lead ball weighs 80 grains and, at a velocity of 1,000 feet per second, is comparable to the modern .380 pistol cartridge in power. Loads consist of loose powder and ball or bullet, metallic foil cartridges (early), and combustible paper cartridges (Civil War era), all combinations being ignited by a fulminate percussion cap applied to the nipples at the rear of the chamber.
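The power comparison can be sanity-checked with the standard kinetic energy formula E = ½mv². A minimal sketch; the .380 ACP load used for comparison is an assumed typical figure, not from the source:

```python
# Muzzle energy E = 1/2 * m * v^2, computed in SI units.
GRAIN_TO_KG = 6.479891e-5  # 1 grain = 64.79891 mg
FPS_TO_MS = 0.3048         # 1 foot per second in metres per second

def muzzle_energy_joules(mass_grains: float, velocity_fps: float) -> float:
    m = mass_grains * GRAIN_TO_KG
    v = velocity_fps * FPS_TO_MS
    return 0.5 * m * v ** 2

# 1851 Navy: 80-grain round ball at 1,000 ft/s (figures from the text above).
print(f"{muzzle_energy_joules(80, 1000):.0f} J")  # ~241 J

# Assumed typical .380 ACP load: 95-grain bullet at 900 ft/s.
print(f"{muzzle_energy_joules(95, 900):.0f} J")   # ~232 J, i.e. comparable
```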
The Colt Army Model 1860 is a 6-shot muzzle-loaded cap & ball .44-caliber single-action revolver used during the American Civil War made by Colt's Manufacturing Company. It was used as a side arm by cavalry, infantry, artillery troops, and naval forces. More than 200,000 were manufactured from 1860 through 1873. Colt's biggest customer was the US Government, with more than 129,730 units purchased and issued to the troops. It was a single-action, six-shot revolver, accurate up to 75 to 100 yards, where the fixed sights were typically set when manufactured. The rear sight was a notch in the hammer, only usable when the revolver was fully cocked. The Colt .44-caliber "Army" Model was the most widely used revolver of the Civil War. It had a six-shot, rotating cylinder, and fired a round spherical lead ball, or a conical-tipped bullet, typically propelled by a 30-grain charge of black powder, which was ignited by a small copper percussion cap that contained a volatile charge of fulminate of mercury (a substance that explodes upon being subjected to a sharp impact). The percussion cap, when struck by the hammer, ignited the powder charge. When fired, the balls had a muzzle velocity of about 900 feet per second (274 meters/second), although this depended on how much powder was loaded.
Metallic cartridge era
The Smith & Wesson Model 1 was the first firearm manufactured by Smith & Wesson, with production spanning the years 1857 through 1882. It was the first commercially successful revolver to use metallic rimfire cartridges instead of loose powder, musket ball, and percussion caps. It is a single-action, tip-up revolver holding seven .22 Short black powder cartridges.
The Smith & Wesson Model No. 2 Army is a 6-shot, .32 caliber revolver, intended to combine the small size and convenience of the Smith & Wesson Model 1 .22 rimfire with a larger more effective cartridge. It was manufactured 1861–1874, with a total production of 77,020 units.
The Smith & Wesson Model 3 was a 6-shot, single-action, cartridge-firing, top-break revolver produced by Smith & Wesson from 1870 to 1915, and was recently offered again as a reproduction by Smith & Wesson and Uberti. The S&W Model 3 was originally chambered for the .44 American and .44 Russian cartridges, and typically did not have the cartridge information stamped on the gun (as is standard practice for most commercial firearms). Model 3 revolvers were later produced in an assortment of calibers, including .44 Henry Rimfire, .44-40, .32–44, .38–44, and .45 Schofield. The design would influence the smaller S&W .38 Single Action that is retroactively referred to as the Model 2. All of these revolvers would automatically eject the spent shell cases when opened.
The Colt Single Action Army, also known as the Single Action Army, SAA, Model P, Peacemaker, M1873, and Colt .45 is a single-action revolver with a revolving cylinder holding six metallic cartridges. It was designed for the U.S. government service revolver trials of 1872 by Colt's Patent Firearms Manufacturing Company – today's Colt's Manufacturing Company – and was adopted as the standard military service revolver until 1892. The Colt SAA has been offered in over 30 different calibers and various barrel lengths. Its overall appearance has remained consistent since 1873. Colt has discontinued its production twice, but brought it back due to popular demand. The revolver was popular with ranchers, lawmen, and outlaws alike, but as of the early 21st century, models are mostly bought by collectors and Cowboy Action Shooters. Its design has influenced the production of numerous other models from other companies. The Colt SAA "Peacemaker" revolver is a famous piece of Americana known as "The Gun That Won the West".
In 1889, Colt introduced the Model 1889, the first truly modern double-action revolver, which differed from earlier double-action revolvers by having a "swing-out" cylinder, as opposed to a "top-break" or "side-loading" cylinder. Swing-out cylinders quickly caught on, because they combined the best features of earlier designs. Top-break actions gave the ability to eject all empty shells simultaneously and exposed all chambers for easy reloading, but having the frame hinged into two halves weakened the gun and negatively affected accuracy due to lack of rigidity. "Side-loaders", like the earlier Colt Model 1871 and 1873, gave a rigid frame, but required the user to eject and load one chamber at a time, rotating the cylinder to line each chamber up with the side-mounted loading gate.
Smith & Wesson followed 7 years later with the Hand Ejector, Model 1896 in .32 S&W Long caliber, followed by the very similar, yet improved, Model 1899 (later known as the Model 10), which introduced the new .38 Special cartridge. The Model 10 went on to become the best-selling handgun of the 20th century, at 6,000,000 units, and the .38 Special is still the most popular chambering for revolvers in the world. These new guns were an improvement over the Colt 1889 design since they incorporated a combined center-pin and ejector rod to lock the cylinder in position. The 1889 did not use a center pin and the cylinder was prone to move out of alignment.
The Smith & Wesson Model 36 is a 5-shot revolver chambered for .38 Special. It is one of several models of "J-frame" Smith & Wesson revolvers. It was introduced in 1950, and is still in production. The Model 36 was designed in the era just after World War II, when Smith & Wesson stopped producing war materials and resumed normal production. For the Model 36, they sought to design a revolver that could fire the more powerful .38 Special round in a small, concealable package. Since the older I-frame was not able to handle this load, a new frame was designed, which became the Smith & Wesson J-frame.
Magnum era
The inventions of the metallic cartridge and then smokeless powder allowed for dramatic improvements in handgun ballistics. Because cartridge dimensions had been standardized in the black powder era, case capacities often far exceeded what smokeless powder needed to reach peak combustion pressure limits. Smokeless powder did not take up as much volume as black powder, and cases like the .38 Special and .45 Colt were much larger than the 9×19mm and .45 ACP, which had similar ballistics respectively. As metallurgy improved, handloaders began experimenting with loading the .38 Special and .44 Special cartridges with fuller cases of smokeless propellants. By 1929, the ".38-44" cartridge and a large N-framed Smith & Wesson .38/44 revolver chambered for it were available. By the 1930s, automobiles with heavy steel bodies had become popular, and the improved ballistics of more powerful handguns were in demand. Colt had similarly introduced the .38 Super, simply a .38 ACP loaded to higher pressure with more powder. In 1935, Smith & Wesson released the Registered Magnum (later referred to as the Smith & Wesson Model 27), the first revolver chambered for .357 Magnum. It was designed as a more powerful handgun for law enforcement officers, and it marked the beginning of the "Magnum Era" of handguns. Notably, Elmer Keith continued to demonstrate and advocate the use of the .44 Special at higher pressures. The high point of the Magnum Era came in 1955, when Smith & Wesson released the Smith & Wesson Model 29 in .44 Magnum; two decades later the Dirty Harry movies made this gun a cultural icon. The S&W Model 19, a .357 Magnum revolver built on Smith & Wesson's K-frame design, was also introduced in 1955. The Model 19 is smaller and lighter than the original S&W Model 27 .357 Magnum. It was made at the behest of Bill Jordan, retired Assistant Chief Patrol Inspector of the U.S. Border Patrol, famous gunfighter, and noted firearms and shooting skills writer.
Derringers
The original Philadelphia Deringer was a single-shot muzzleloading percussion cap pistol introduced in 1852, by Henry Deringer. In total, approximately 15,000 Deringer pistols were manufactured. All were single barrel pistols with back action percussion locks, typically .41 caliber with rifled bores, and walnut stocks. Barrel length varied from 1" to 6", and the hardware was commonly a copper-nickel alloy known as "German silver".
The term derringer has become a genericized misspelling of the last name of Henry Deringer. Many copies of the original Philadelphia Deringer pistol were made by other gun makers worldwide, and the name was often misspelled; this misspelling soon became an alternative generic term for any pocket pistol, along with the generic phrase palm pistol, which Deringer's competitors invented and used in their advertising. With the advent of metallic cartridges, pistols produced in the modern form are still commonly called "derringers".
Daniel Moore patented a single-shot metallic cartridge .38 Rimfire derringer in 1861. These pistols have barrels that pivoted sideways on the frame to allow access to the breech for reloading. Moore manufactured them until 1865, when he sold out to the National Arms Company, which produced single-shot .41 Rimfire derringers until 1870, when it was acquired by Colt's Patent Firearms Manufacturing Company. Colt continued to produce the .41 Rimfire derringer after the acquisition, in an effort to help break into the metallic-cartridge gun market, but also introduced three single-shot Colt Derringer models of its own, all chambered in the .41 Rimfire cartridge. The last model to remain in production, the third Colt Derringer, was not dropped until 1912. The third Colt Derringer Model was re-released in the 1950s for western movies, under the name Fourth Model Colt Deringer.
The Remington Model 95 derringer was one of the first metallic cartridge handguns. Remington manufactured more than 150,000 of these small, easy-to-use, over-under, double-barreled derringers from 1866 until the end of their production in 1935. The Remington derringer doubled the capacity of the derringers designed by Daniel Moore, while maintaining a compact size. The Remington Model 95 has achieved such widespread popularity that it has completely overshadowed its predecessors, becoming synonymous with the word "derringer". The Model 95 was made only in .41 Rimfire. Its barrels pivoted upwards to reload, and a cam on the hammer alternated between top and bottom barrels. The .41 Rimfire bullet moved very slowly, around half the speed of a modern .45 ACP. It could be seen in flight, but at very close range, such as at a casino or saloon card table, it could easily kill. There were four models with several variations. The Remington derringer design is still being made by several manufacturers, in a variety of calibers from .22 Long Rifle to .45 Long Colt and .410 gauge. Current-production derringers are used by Cowboy Action Shooters as well as for concealed carry.
While the classic Remington design is a single-action derringer with a hammer and tip-up action, the High Standard D-100, introduced in 1962, is a hammerless, double-action derringer with a half trigger guard and a standard break action design. These double-barrel derringers were chambered for .22 Long Rifle and .22 Magnum and were available in blued, nickel, silver, and gold plated finishes. Although they were discontinued in 1984, American Derringer obtained the High Standard design in 1990 and produced a larger .38 Special version. These derringers, called the DS22 and DA38, are still being made and are popular concealed carry handguns.
The COP .357 is a modern 4-shot Derringer-type pistol chambered for .357 Magnum. Introduced in 1983, it is a double-action weapon about twice as wide, and substantially heavier, than the typical .25 automatic pistol. Still, its relatively compact size and powerful cartridge make it an effective defensive weapon or a police backup gun. The COP .357 is quite robust in design and construction, being made of solid stainless steel components. Cartridges are loaded into the four separate chambers by sliding a latch that "pops up" the barrel for loading, similar to top-break shotguns. Each of the four chambers has its own dedicated firing pin. It uses an internal hammer, which is activated by depressing the trigger to hit a ratcheting/rotating striker that in turn strikes one firing pin at a time. Older "pepperboxes" also used multiple barrels, but there the barrels themselves rotated. The COP .357 operates similarly to the Sharps derringer of the 1850s, in that it uses a completely internal ratcheting/rotating striker to fire each chamber in sequence.
Semi-automatic pistols
In 1896, Paul Mauser introduced the Mauser C96, the first mass-produced and commercially successful semi-automatic pistol, which uses the recoil energy of one shot to reload the next. The distinctive characteristics of the C96 are the integral 10-round box magazine in front of the trigger, the long barrel, the wooden shoulder stock, which gives it the stability of a short-barreled rifle and doubles as a holster or carrying case, and a unique round wooden grip shaped like the handle of a broom, which earned the gun the nickname "broomhandle" in the English-speaking world.
The Pistole Parabellum, also known in the United States as just the Luger, is a toggle-locked recoil-operated semi-automatic pistol produced in several models and by several nations from 1898 to 1948. It was one of the first semi-auto pistols to use a detachable magazine housed in the pistol-grip. The design was first patented by Georg Luger as an improvement upon the Borchardt Automatic Pistol, and was produced as the Parabellum Automatic Pistol, Borchardt-Luger System by the German arms manufacturer Deutsche Waffen und Munitionsfabriken (DWM). The first production model was known as the Modell 1900 Parabellum. Later versions included the Pistol Parabellum Model 1908 or P08 which was produced by DWM and other manufacturers. The first Parabellum pistol was adopted by the Swiss army in May 1900. In German Army service, the Parabellum was later adopted in modified form as the Pistol Model 1908 (P08) in caliber 9×19mm Parabellum.
The Colt Model 1911 is a 7+1-round, single-action, semi-automatic, magazine-fed, recoil-operated pistol chambered for the .45 ACP cartridge. It served as the standard-issue sidearm for the United States Armed Forces from 1911 to 1986; owing to its popularity, it has never been completely phased out. Designed by John Browning, the M1911 is the best-known of his designs to use the short recoil principle. The pistol was widely copied, and this operating system rose to become the preeminent type of the 20th century, used in nearly all modern centerfire pistols. It is popular with civilian shooters in competitive events such as USPSA, IDPA, International Practical Shooting Confederation, and Bullseye shooting. Compact variants are popular civilian concealed carry weapons in the U.S. because of the design's relatively slim width and the stopping power of the .45 ACP cartridge.
The Walther PP (Polizeipistole, or 'police pistol') series were introduced in 1929 and are among the world's first successful double action, blowback-operated, semi-automatic pistols developed by the German arms manufacturer Carl Walther GmbH Sportwaffen. They feature exposed hammers, a traditional double-action trigger mechanism, a single-stack 8-round magazine (for .32 ACP version), and a fixed barrel that also acts as the guide rod for the recoil spring. The Walther PP and smaller PPK models were both popular with European police and civilians for being reliable and concealable. They would remain the standard issue police pistol for much of Europe well into the 1970s and 80s. During World War II, they were issued to the German military, including the Luftwaffe.
The Browning Hi Power is a 13+1-round, single-action, semi-automatic handgun available in 9mm. Introduced in 1935, it is based on a design by American firearms inventor John Browning and completed by Dieudonné Saive at Fabrique Nationale (FN) of Herstal, Belgium. Browning died in 1926, several years before the design was finalized. The Hi-Power is one of the most widely used military pistols in history, having been used by the armed forces of over 50 countries. The name "Hi Power" alludes to the 13-round magazine capacity, almost twice that of contemporary designs such as the Luger or Colt M1911. The Browning was one of the first pistols to use high-capacity detachable magazines.
The Heckler & Koch VP70 is an 18+1-round, 9×19mm, blowback-operated, double-action-only, select-fire, polymer frame pistol manufactured by the German arms firm Heckler & Koch GmbH, introduced in 1970. The VP70 was a revolutionary pistol, introducing the polymer frame and predating the Glock by 12 years. It also uses a spring-loaded striker instead of a conventional firing pin, and it has a relatively heavy double-action-only trigger pull. It also uses a high-capacity 18-round magazine, twice as many rounds as the single-column magazine designs of the era, and 5 more rounds than the Browning Hi-Power. In lieu of a blade front sight, the VP70 uses a polished ramp with a central notch to provide the illusion of a dark front post. Contrary to a common misconception, the VP70 does have a manual safety: a common cross-bolt safety, the circular button located immediately behind the trigger. One unique feature of this weapon is the combination stock/holster for the military version of the VP70. The stock incorporates a selector switch that, when mounted, allows for a three-round-burst mode of fire, with a cyclic rate of 2200 rounds per minute. When not mounted, the stock acts as a holster. VP stands for Volkspistole (literally 'People's Pistol'), and the designation 70 refers to the first year of production: 1970.
The Smith & Wesson Model 59 was a 14+1-round, semi-automatic pistol introduced in 1971. It was the first standard double-action pistol to use a high-capacity 14-round staggered-magazine. It went out of production a decade later in 1980 when the improved second generation series was introduced (the Model 459). The Model 459 was again improved into the third generation series, the 5904. Stainless steel versions of the second and third generation models were also widely popular, and were designated the Models 659 and 5906, respectively. The original Model 59 was manufactured in 9×19mm Parabellum caliber with a wider anodized aluminum frame (to accommodate a double-stack magazine), a straight backstrap, a magazine disconnect (the pistol will not fire unless a magazine is in place), and a blued carbon steel slide that carries the manual safety. The grip is of three pieces made of two nylon plastic panels joined by a metal backstrap. It uses a magazine release located to the rear of the trigger guard, similar to the Colt M1911.
The Beretta 92 is a 15+1-round, 9mm Parabellum, double-action, semi-automatic pistol introduced in 1975. It has an open slide design, an alloy frame and locking block barrel, originally used on the Walther P38, and previously used on the M1951. The grip angle and the front sight integrated with the slide were also common to earlier Beretta pistols. What may be the Beretta 92's two most important advanced design features had first appeared on its immediate predecessor, the 1974 .380 caliber Model 84. These improvements both involved the magazine, which featured direct feed; that is, there was no feed ramp between the magazine and the chamber (a Beretta innovation in pistols); and a 15-round "double-stacked" magazine design. It was also the first Beretta design to use a magazine release located to the rear of the trigger guard, similar to the Colt M1911.
The United States' military replaced the M1911A1 .45 ACP pistol with the Beretta 92FS, designated as the M9 in 1985.
The Glock 17 is a 17+1-round, 9mm Parabellum, polymer-framed, safe-action, short recoil-operated, locked-breech semi-automatic pistol designed and produced by Glock Ges.m.b.H., located in Deutsch-Wagram, Austria. In 1982, the Glock 17 entered Austrian military and police service after performing best among the competing models in an exhaustive series of reliability and safety tests. Despite initial hesitation from the market to accept a "plastic gun" due to durability and reliability concerns, as well as fears that metal detectors in airports might not detect the polymer frame, Glock pistols have become the company's most profitable line of products, commanding 65% of the market share of handguns for United States law enforcement agencies, as well as supplying numerous national armed forces, security agencies, and police forces in at least 48 countries. Glocks are also popular firearms among civilians for recreational and competition shooting, home and self-defense, and concealed or open carry.
The FN Five-seveN is a 20+1-round, semi-automatic pistol designed and manufactured by Fabrique Nationale d'Armes de Guerre-Herstal (FN Herstal) in Belgium. The Five-seveN pistol was introduced in 1998. It was developed together with the FN P90 personal defense weapon and the FN 5.7×28mm cartridge. Developed as a companion pistol to the P90, the Five-seveN shares many of its design features:
Lightweight polymer-based weapon with a large magazine capacity.
Ambidextrous controls, low recoil.
Ability to penetrate body armor when using certain types of ammunition.
The Five-seveN is currently in service with military and police forces in over 40 nations. In the United States, the Five-seveN is in use with numerous law enforcement agencies, including the U.S. Secret Service. In the years since the pistol's introduction to the civilian market in the United States, it has also become increasingly popular with civilian shooters.
Machine pistols
A machine pistol is generally defined as a handgun capable of fully automatic or selective fire. During World War I, the Austrians introduced the world's first machine pistol, the Steyr Repetierpistole M1912/P16. The Germans would quickly follow suit with machine pistol versions of the Luger P08 "Artillery Pistol" and later models of the Mauser C96. Some machine pistols support a shoulder stock to improve control, like the Heckler & Koch VP70. Others, such as the Beretta 93R also have a forward hand-grip.
3D printed handguns
3D printed firearms are firearms that can be produced with a 3D printer.
Overview of gun laws by nation
Many handguns are easily concealed; this has led to laws applying specifically to civilian handgun ownership and to the legality of carrying or concealing a handgun in a public setting. Gun laws broadly group guns into handguns and long guns, and further classify them by the degree to which they are automatic.
| Technology | Projectile weapons | null |
9959000 | https://en.wikipedia.org/wiki/Pistol | Pistol | A pistol is a type of handgun, characterised by a barrel with an integral chamber. The word "pistol" derives from the Middle French pistolet (), meaning a small gun or knife, and first appeared in the English language when early handguns were produced in Europe. In colloquial usage, the word "pistol" is often used as a generic term to describe any type of handgun, inclusive of revolvers (which have a single barrel and a separate cylinder housing multiple chambers) and the pocket-sized derringers (which are often multi-barrelled).
The most common type of pistol used in the contemporary era is the semi-automatic pistol. The older single-shot and lever-action pistols are now rarely seen and used primarily for nostalgic hunting and historical reenactment. Fully-automatic machine pistols are uncommon in civilian usage because of their generally poor recoil-controllability (due to the lack of a buttstock) and strict laws and regulations governing their manufacture and sale (where they are regarded as submachine gun equivalents).
Terminology
Technically speaking, the term "pistol" is a hypernym generally referring to a handgun, and it predates the existence of the type of guns to which it is now applied as a specific term; in that narrower, technical usage it describes a handgun with a single integral chamber within its barrel. Webster's Dictionary defines it as "a handgun whose chamber is integral with the barrel". This makes it distinct from the other types of handgun, such as the revolver, which has multiple chambers within a rotating cylinder that is separately aligned with a single barrel; and the derringer, which is a short pocket gun often with multiple single-shot barrels and no reciprocating action. 18 U.S. Code § 921 legally defines the term "pistol" as "a weapon originally designed, made, and intended to fire a projectile (bullet) from one or more barrels when held in one hand, and having: a chamber(s) as an integral part(s) of, or permanently aligned with, the bore(s); and a short stock designed to be gripped by one hand at an angle to and extending below the line of the bore(s)", which includes derringers but excludes revolvers.
Commonwealth usage, for instance, does not usually make this distinction, particularly when the terms are used by the military. For example, the official designation of the Webley Mk VI revolver was "Pistol, Revolver, Webley, No. 1 Mk VI". In contrast to the Merriam-Webster definition, the Oxford English Dictionary (a descriptive dictionary) describes "pistol" as "a small firearm designed to be held in one hand", which is similar to the Webster definition for "handgun"; and "revolver" as "a pistol with revolving chambers enabling several shots to be fired without reloading", giving its original form as "revolving pistol".
History and etymology
The pistol originates in the 16th century, when early handguns were produced in Europe. The English word was introduced from the Middle French pistolet. The etymology of the French word pistolet is disputed. It may be from a Czech word for early hand cannons, píšťala ("whistle" or "pipe"), used in the Hussite Wars during the 1420s. The Czech word was adopted in German as pitschale, pitschole, petsole, and variants. Alternatively, the word may have originated from the Italian pistolese, after Pistoia, a city renowned for Renaissance-era gunsmithing, where hand-held guns (designed to be fired from horseback) were first produced in the 1540s. However, the use of the word as a designation of a gun is not documented before 1605 in Italy, long after it was used in French and German.
Action
Single-shot
Single-shot handguns were mainly used during the era of flintlock and musket weaponry, when the pistol was loaded with a lead ball and fired by a flint striker and, later, a percussion cap. The handgun required a reload every time it was shot. However, as technology improved, so did the single-shot pistol. New operating mechanisms were created, and some are still made today. They are the oldest type of pistol and are often used to hunt wild game. Additionally, their compact size compared to most other types of handgun makes them more concealable.
Revolver
With the development of the revolver, short for revolving pistol, in the 19th century, gunsmiths had finally achieved the goal of a practical capability for delivering multiple loads to one handgun barrel in quick succession. Revolvers feed ammunition via the rotation of a cartridge-filled cylinder, in which each cartridge is contained within its own ignition chamber and is sequentially brought into alignment with the weapon's barrel by an indexing mechanism linked to the weapon's trigger (double-action) or its hammer (single-action). These nominally cylindrical chambers, usually numbering between five and eight depending on the size of the revolver and the size of the cartridge being fired, are bored through the cylinder so that their axes are parallel to the cylinder's axis of rotation; thus, as the cylinder rotates, the chambers revolve about the cylinder's axis.
Semi-automatic
After the revolver, the semi-automatic pistol was the next step in the development of the pistol. By avoiding multiple chambers, which need to be individually reloaded, semi-automatic pistols deliver faster rates of fire and require only a few seconds to reload: at the push of a button or the flip of a lever, the magazine slides out and is replaced by a fully loaded one. In blowback-type semi-automatics, the recoil force is used to push the slide back and eject the shell (if any) so that the magazine spring can push another round up; then, as the slide returns, it chambers the round. An example of a modern blowback action semi-automatic pistol is the Walther PPK. Blowback pistols are some of the more simply designed handguns. Many semi-automatic pistols today operate using short recoil. This design is often coupled with the Browning-type tilting barrel.
Machine pistol
A machine pistol is a pistol that is capable of burst-fire or fully automatic fire. The first machine pistol was produced by Austria-Hungary in 1916, as the Steyr Repetierpistole M1912/P16, and the term derives from the German word Maschinenpistole. Though it is often used interchangeably with submachine gun, machine pistol generally describes a weapon that is more compact than a typical submachine gun.
Multi-barreled
Multi-barreled pistols, such as the pepper-box, were common during the same time as single-shot pistols. As designers looked for ways to increase fire rates, multiple barrels were added to pistols. One example of a multi-barreled pistol is the COP .357 Derringer.
Harmonica pistol
Around 1850, pistols such as the Jarre harmonica gun were produced that had a sliding magazine. The sliding magazine contained pinfire cartridges or speedloaders. The magazine needed to be moved manually in many designs, hence distinguishing them from semi-automatic pistols.
Lever-action
Lever-action pistols are very rare, the most notable being the Volcanic pistol and the Pistola Herval.
| Technology | Firearms | null |
12356743 | https://en.wikipedia.org/wiki/Bite%20angle | Bite angle | In coordination chemistry, the bite angle is the angle on a central atom between two bonds to a bidentate ligand. This ligand–metal–ligand geometric parameter is used to classify chelating ligands, including those in organometallic complexes. It is most often discussed in terms of catalysis, as changes in bite angle can affect not just the activity and selectivity of a catalytic reaction but even allow alternative reaction pathways to become accessible.
Although the parameter can be applied generally to any chelating ligand, it is commonly applied to describe diphosphine ligands, as they can adopt a wide range of bite angles.
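Because the bite angle is simply the donor–metal–donor angle, it can be computed directly from atomic coordinates (for example, from a crystal structure) with elementary vector algebra. The following Python sketch illustrates this; the function name and the coordinates in the example are illustrative, not taken from a real structure.

```python
import numpy as np

def bite_angle(metal, donor1, donor2):
    """Return the donor-metal-donor angle in degrees, given Cartesian
    coordinates of the central metal atom and the two donor atoms."""
    a = np.asarray(donor1, dtype=float) - np.asarray(metal, dtype=float)
    b = np.asarray(donor2, dtype=float) - np.asarray(metal, dtype=float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip to guard against rounding just outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical geometry: two P donors 2.3 units from the metal, 90 degrees apart.
print(bite_angle([0, 0, 0], [2.3, 0, 0], [0, 2.3, 0]))  # 90.0
```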
Diamines
Diamines form a wide range of coordination complexes. They typically form 5- and 6-membered chelate rings. Examples of the former include ethylenediamine and 2,2′-bipyridine. Six-membered chelate rings are formed by 1,3-diaminopropane. The bite angle in such complexes is usually near 90°. Longer chain diamines, which are "floppy", tend not to form chelate rings.
Diphosphines
Diphosphines are a class of chelating ligands that contain two phosphine groups connected by a bridge (also referred to as a backbone). The bridge, for instance, might consist of one or more methylene groups or multiple aromatic rings with heteroatoms attached. Examples of common diphosphines are dppe, dcpm (Figure 1), and DPEphos (Figure 2). The structure of the backbone and the substituents attached to the phosphorus atoms influence the chemical reactivity of the diphosphine ligand in metal complexes through steric and electronic effects.
Examples
Steric characteristics of the diphosphine ligand that influence the regioselectivity and rate of catalysis include the pocket angle, solid angle, repulsive energy, and accessible molecular surface. Also of importance is the cone angle, which in diphosphines is defined as the average of the cone angle for the two substituents attached to the phosphorus atoms, the bisector of the P–M–P angle, and the angle between each M–P bond. Larger cone angles usually result in faster dissociation of phosphine ligands because of steric crowding.
The natural bite angle
The natural bite angle (βn) of diphosphines, obtained using molecular mechanics calculations, is defined as the preferred chelation angle determined only by ligand backbone and not by metal valence angles (Figure 3).
Both steric and electronic bite angle effects are recognized. The steric bite angle effect involves the steric interactions between ligands or between a ligand and a substrate. The electronic bite angle effect, on the other hand, relates to the electronic changes that occur when the bite angle is modified; this effect is sensitive to the hybridization of metal orbitals. A ligand also has a flexibility range, which accounts for the diverse conformations of the ligand with energies slightly above the strain energy of the natural bite angle.
The bite angle of a diphosphine ligand also indicates the distortion from the ideal geometry of a complex based on VSEPR models. Octahedral and square planar complexes prefer angles near 90°, while tetrahedral complexes prefer angles near 110°. Since catalysts often interconvert between various geometries, the rigidity of the chelate ring can be decisive. A bidentate phosphine with a natural bite angle of 120° may preferentially occupy two equatorial sites in a trigonal bipyramidal complex, whereas a bidentate phosphine with a natural bite angle of 90° may preferentially occupy apical-equatorial positions. Diphosphine ligands with bite angles of over 120° are obtained using bulky, stiff diphosphine backbones. Diphosphines of wide bite angles are used in some industrial processes.
A case study: hydroformylation
The hydroformylation of alkenes to give aldehydes is an important industrial process. Almost 6 million tons of aldehydes are produced by this method annually.
Rhodium complexes containing diphosphine ligands are active hydroformylation catalysts.
The ratio of linear to branched aldehyde product depends on the structure of the catalyst.
One intermediate, [Rh(H)(alkene)(CO)L], exists in two different isomers, depending on the position of phosphine ligands (Figure 4).
Diphosphine ligands such as dppe, which has a bite angle of about 90°, span the equatorial and apical positions (AE isomer). Diphosphines with larger bite angles (above 120°) preferentially occupy a pair of equatorial positions (EE isomer). It is believed that the EE isomer favors formation of linear aldehydes, the desired product. In an effort to create rhodium complexes in which the phosphine ligands preferentially occupy the equatorial positions, the use of diphosphine ligands with wide bite angles such as BISBI (Figure 5) has been investigated.
With a bite angle of approximately 113°, BISBI spans sites on equatorial plane of the trigonal bipyramidal intermediate complex (Figure 6).
The structure of the intermediate [Rh(H)(diphosphine)(CO)2] does not, however, determine the regioselectivity of the hydroformylation. Instead, the formation of the linear vs. branched aldehydes is determined upon formation of [Rh(H)(diphosphine)CO(alkene)] and the subsequent hydride migration step. The bite angle affects the steric crowding at the Rh atom that results from the interactions of the ligand's bulky backbone with the substrate. The wide bite angle that results from the backbone allows the five-coordinate [Rh(H)(diphosphine)CO(alkene)] intermediate to adopt a structure that relieves steric hindrance. Thus, BISBI occupies the equatorial positions, where it has the most space. This preference for a transition state that relieves steric hindrance favors the formation of the linear aldehyde. The regioselectivity is also controlled by the hydride migration, which is usually irreversible in the formation of linear aldehydes.
Furthermore, studies using Xantphos ligands (ligands with bulky backbones) in hydroformylation have indicated an increase in the rate of catalysis in metal complexes that contain diphosphine ligands with larger bite angles. The electronic effect of this increase in reaction rate is uncertain since it mainly depends on the bonding between the alkene and rhodium. Large bite angles promote alkene to rhodium electron donation, which results in an accumulation of electron density on the rhodium atom. This increased electron density would be available for π-donation into the anti-bonding orbitals of other ligands, which could weaken other M-L bonds within the catalyst, leading to higher rates.
The application of diphosphine ligands in catalysis is not limited to hydroformylation. Hydrocyanation and hydrogenation reactions also employ phosphine-based catalysts.
| Physical sciences | Bond structure | Chemistry |
204658 | https://en.wikipedia.org/wiki/Coupe | Coupe | A coupe or coupé is a passenger car with a sloping or truncated rear roofline and typically with two doors.
The term coupé was first applied to horse-drawn carriages for two passengers without rear-facing seats. It comes from the past participle of the French verb couper, "to cut".
Some coupé cars only have two seats, while some also feature rear seats. However, these rear seats are usually lower quality and much smaller than those in the front. Furthermore, "A fixed-top two-door sports car would be best and most appropriately be termed a 'sports coupe' or 'sports coupé'".
Etymology and pronunciation
Coupé is based on the past participle of the French verb couper ("to cut") and thus indicates a car which has been "cut" or made shorter than standard. It was first applied to horse-drawn carriages for two passengers without rear-facing seats. These carrosses coupés ("clipped carriages") were eventually clipped to coupés.
There are two common pronunciations in English:
/kuːˈpeɪ/ (koo-PAY) – the anglicized version of the French pronunciation of coupé.
/kuːp/ (koop) – as a spelling pronunciation when the word is written without an accent. This is the usual pronunciation and spelling in the United States and much of Canada, with the pronunciation entering American vernacular no later than 1936 and featuring in the Beach Boys' hit 1963 song "Little Deuce Coupe".
Definition
A coupé is a fixed-roof car with a sloping rear roofline and one or two rows of seats. However, there is some debate surrounding whether a coupe must have two doors for passenger egress or whether cars with four doors can also be considered coupés. This debate has arisen since the early 2000s, when four-door cars such as the Mazda RX-8 and Mercedes-Benz CLS-Class have been marketed as "four-door coupés" or "quad coupés", although the Rover P5 was a much earlier example, with a variant introduced in 1962 having a lower, sleeker roofline marketed as the Rover P5 Coupé.
In the 1940s and 1950s, coupés were distinguished from sedans by their shorter roof area and sportier profile. Similarly, in more recent times, when a model is sold in both coupé and sedan body styles, generally the coupé is sportier and more compact. There have been a number of two-door sedans built as well, a body style the French call a coach.
The 1977 version of International Standard ISO 3833 (Road vehicles – Types – Terms and definitions) defines a coupé as having two doors (along with a fixed roof, usually with limited rear volume, at least two seats in at least one row, and at least two side windows). On the other hand, the United States Society of Automotive Engineers publication J1100 does not specify the number of doors, instead defining a coupé as having a rear interior volume of less than 33 cubic feet (0.93 m³).
The definition of coupé started to blur when manufacturers began to produce cars with a 2+2 body style (which have a sleek, sloping roofline, two doors, and two functional seats up front, plus two small seats in the back).
Some manufacturers also blur the definition of a coupé by applying this description to models featuring a hatchback or a rear cargo area access door that opens upwards. Most often also featuring a fold-down back seat, the hatchback or liftback layout of these cars improves their practicality and cargo room.
Horse-drawn carriages
The coupe carriage body style originated from the berline horse-drawn carriage. The coupe version of the berline was introduced in the 18th century as a shortened ("cut") version with no rear-facing seat. Normally, a coupé had a fixed glass window in the front of the passenger compartment. The coupé was considered an ideal vehicle for women to use to go shopping or to make social visits.
History
The early coupé automobile's passenger compartment followed in general conception the design of horse-drawn coupés, with the driver in the open at the front and an enclosure behind him for two passengers on one bench seat. The French variant for this word thus denoted a car with a small passenger compartment.
By the 1910s, the term had evolved to denote a two-door car with the driver and up to two passengers in an enclosure with a single bench seat. The coupé de ville, or coupé chauffeur, was an exception, retaining the open driver's section at front.
In 1916, the Society of Automobile Engineers suggested standard nomenclature for car bodies, including a definition of the coupe.
During the 20th century, the term coupé was applied to various close-coupled cars (where the rear seat is located further forward than usual and the front seat further back than usual).
Since the 1960s the term coupé has generally referred to a two-door car with a fixed roof.
Since 2005, several models with four doors have been marketed as "four-door coupés", however, reactions are mixed about whether these models are actually sedans instead of coupés. According to Edmunds, an American automotive guide, "the four-door coupe category doesn't really exist."
Variations
Berlinetta
A berlinetta is a lightweight sporty two-door car, typically with two seats but also including 2+2 cars.
Club coupe
A club coupe is a two-door car with a larger rear-seat passenger area, compared with the smaller rear-seat area in a 2+2 body style. Thus, club coupes resemble coupes as both have two doors, but feature a full-width rear seat that is accessible by tilting forward the backs of the front seats.
Hardtop coupé
A hardtop coupe is a two-door car that lacks a structural pillar ("B" pillar) between the front and rear side windows. When these windows are lowered, the effect is like that of a convertible coupé with the windows down. The hardtop body style was popular in the United States from the early 1950s until the 2000s. It was also available in European and Japanese markets. Safety regulations for roof structures to protect passengers in a rollover were proposed, limiting the development of new models. The hardtop body style went out of style with consumers while the automakers focused on cost reduction and increasing efficiencies.
Combi coupé
Saab used the term "combi coupé" for a car body similar to the liftback.
Business coupe
A two-door car with no rear seat or with a removable rear seat intended for traveling salespeople and other vendors carrying their wares with them. American manufacturers developed this style of coupe in the late 1930s.
Four-door coupe / quad coupe
The 1921 and 1922 LaFayette models were available in a variety of open and closed body styles that included a close-coupled version featuring two center-opening doors on each side that was marketed as a Four-Door Coupe. The 1927 Nash Advanced Six was available in four-door coupe body style.
More recently, the description has been applied by marketers to describe four-door cars with a coupe-like roofline at the rear. The low-roof design reduces back-seat passenger access and headroom. The designation was used for the low-roof model of the 1962–1973 Rover P5, followed by the 1992–1996 Nissan Leopard / Infiniti J30. Recent examples include the 2005 Mercedes-Benz CLS, 2010 Audi A7, Volkswagen CC, Volkswagen Arteon, and 2012 BMW 6 Series Gran Coupe.
Similarly, several cars with one or two small rear doors for rear seat passenger egress and no B-pillar have been marketed as "quad coupes", for example the 2003 Saturn Ion, the 2003 Mazda RX-8, and the 2011–2022 Hyundai Veloster.
Three-door coupe
Particularly popular in Europe, many cars are designed with coupe styling, but a three-door hatchback/liftback layout to improve practicality, including cars such as the Jaguar E-Type, Mitsubishi 3000GT, Datsun 240Z, Toyota Supra, Mazda RX-7, Alfa Romeo Brera, Ford/Mercury Cougar and Volkswagen Scirocco.
Opera coupe
A two-door car designed for driving to the opera with easy access to the rear seats. Features sometimes included a folding front seat next to the driver or a compartment to store top hats.
Often they would have solid rear-quarter panels, with small, circular windows, to enable the occupants to see out without being seen. These opera windows were revived on many U.S. automobiles during the 1970s and early 1980s.
Three-window coupe
The three-window coupe (commonly just "three-window") is a style of automobile characterized by two side windows and a backlight (rear window); the front windscreen is not counted. The three-window coupe is distinct from the five-window coupe, which has an additional window on each side behind the front doors. These two-door cars typically have small bodies with only a front seat and an occasional small rear seat.
The style was popular from the 1920s until the beginning of World War II. While many manufacturers produced three-window coupes, the 1932 Ford coupe is often considered the classic hot rod.
Coupe SUV
Some SUVs or crossovers with sloping rear rooflines are marketed as "coupe crossover SUVs" or "coupe SUVs", even though they have four side doors for passenger access and a rear hatch for cargo access; under the traditional definition, only a two-door car is considered a true coupe.
Positioning in model range
In the United States, some coupes are "simply line-extenders, two-door variants of family sedans", while others have significant differences from their four-door counterparts.
The AMC Matador coupe (1974–1978) has a shorter wheelbase with a distinct aerodynamic design and fastback styling, sharing almost nothing with the conventional three-box design and more "conservative" four-door versions.
Similarly, the Chrysler Sebring and Dodge Stratus coupes and sedans (late-1990 through 2000s), have little in common except their names. The coupes were engineered by Mitsubishi and built in Illinois, while the sedans were developed by Chrysler and built in Michigan. Some coupes may share platforms with contemporary sedans.
Coupes may also exist as model lines in their own right, either closely related to other models, but named differently – such as the Alfa Romeo GT or Infiniti Q60 – or have little engineering in common with other vehicles from the manufacturer – such as the Toyota GT86.
| Technology | Motorized road transport | null |
204680 | https://en.wikipedia.org/wiki/Four-momentum | Four-momentum | In special relativity, four-momentum (also called momentum–energy or momenergy) is the generalization of the classical three-dimensional momentum to four-dimensional spacetime. Momentum is a vector in three dimensions; similarly four-momentum is a four-vector in spacetime. The contravariant four-momentum of a particle with relativistic energy $E$ and three-momentum $\mathbf{p} = (p_x, p_y, p_z) = \gamma m \mathbf{v}$, where $\mathbf{v}$ is the particle's three-velocity and $\gamma$ the Lorentz factor, is
$$p = \left(p^0, p^1, p^2, p^3\right) = \left(\frac{E}{c}, p_x, p_y, p_z\right).$$
The quantity $m\mathbf{v}$ above is the ordinary non-relativistic momentum of the particle and $m$ its rest mass. The four-momentum is useful in relativistic calculations because it is a Lorentz covariant vector. This means that it is easy to keep track of how it transforms under Lorentz transformations.
Minkowski norm
Calculating the Minkowski norm squared of the four-momentum gives a Lorentz invariant quantity equal (up to factors of the speed of light $c$) to the square of the particle's proper mass:
$$p \cdot p = \eta_{\mu\nu} p^\mu p^\nu = -\frac{E^2}{c^2} + |\mathbf{p}|^2 = -m^2 c^2,$$
where
$$\eta_{\mu\nu} = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
is the metric tensor of special relativity with metric signature for definiteness chosen to be $(-1, 1, 1, 1)$. The negativity of the norm reflects that the momentum is a timelike four-vector for massive particles. The other choice of signature would flip signs in certain formulas (like for the norm here). This choice is not important, but once made it must for consistency be kept throughout.
The Minkowski norm is Lorentz invariant, meaning its value is not changed by Lorentz transformations/boosting into different frames of reference. More generally, for any two four-momenta $p$ and $q$, the quantity $p \cdot q$ is invariant.
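As a numerical illustration (a sketch, not part of the article's sources), the following Python snippet uses the $(-, +, +, +)$ signature above and natural units with c = 1: the norm of a massive particle's four-momentum equals −m²c², and the value is unchanged by a Lorentz boost.

```python
import numpy as np

ETA = np.diag([-1.0, 1.0, 1.0, 1.0])  # metric tensor, signature (-, +, +, +)

def minkowski_dot(p, q):
    """Lorentz-invariant inner product eta_{mu nu} p^mu q^nu."""
    return p @ ETA @ q

def boost_x(beta):
    """Lorentz boost along the x-axis with velocity beta = v/c."""
    gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

# Particle with rest mass m = 1 moving at v = 0.6c along x (units with c = 1):
m, beta = 1.0, 0.6
gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
p = np.array([gamma * m, gamma * m * beta, 0.0, 0.0])  # (E/c, px, py, pz)

print(minkowski_dot(p, p))   # -1.0, i.e. -m^2 c^2 (up to rounding)
q = boost_x(0.3) @ p         # the same particle seen from a boosted frame
print(minkowski_dot(q, q))   # still -1.0: the norm is invariant
```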
Relation to four-velocity
For a massive particle, the four-momentum is given by the particle's invariant mass $m$ multiplied by the particle's four-velocity,
$$p^\mu = m u^\mu,$$
where the four-velocity is
$$u = (u^0, u^1, u^2, u^3) = \gamma_v (c, v_x, v_y, v_z)$$
and
$$\gamma_v = \frac{1}{\sqrt{1 - v^2/c^2}}$$
is the Lorentz factor (associated with the speed $v$), and $c$ is the speed of light.
Derivation
There are several ways to arrive at the correct expression for four-momentum. One way is to first define the four-velocity and simply define , being content that it is a four-vector with the correct units and correct behavior. Another, more satisfactory, approach is to begin with the principle of least action and use the Lagrangian framework to derive the four-momentum, including the expression for the energy. One may at once, using the observations detailed below, define four-momentum from the action . Given that in general for a closed system with generalized coordinates and canonical momenta ,
it is immediate (recalling the definitions of the coordinates and the canonical momenta in the present metric convention) that
is a covariant four-vector with the three-vector part being the (negative of) canonical momentum.
Consider initially a system of one degree of freedom . In the derivation of the equations of motion from the action using Hamilton's principle, one finds (generally) in an intermediate stage for the variation of the action,
The assumption is then that the varied paths satisfy , from which Lagrange's equations follow at once. When the equations of motion are known (or simply assumed to be satisfied), one may let go of the requirement . In this case the path is assumed to satisfy the equations of motion, and the action is a function of the upper integration limit , but is still fixed. The above equation becomes with , and defining , and letting in more degrees of freedom,
Observing that
one concludes
In a similar fashion, keep endpoints fixed, but let vary. This time, the system is allowed to move through configuration space at "arbitrary speed" or with "more or less energy", the field equations still assumed to hold and variation can be carried out on the integral, but instead observe
by the fundamental theorem of calculus. Compute using the above expression for canonical momenta,
Now using
where is the Hamiltonian, leads to, since in the present case,
Incidentally, using with in the above equation yields the Hamilton–Jacobi equations. In this context, is called Hamilton's principal function.
The action is given by
where is the relativistic Lagrangian for a free particle. From this,
The variation of the action is
To calculate , observe first that and that
So
or
and thus
which is just
where the second step employs the field equations , , and as in the observations above. Now compare the last three expressions to find
with norm $-m^2c^2$, and the famed result for the relativistic energy,
$$E = m_r c^2,$$
where $m_r = \gamma m$ is the now unfashionable relativistic mass, follows. By comparing the expressions for momentum and energy directly, one has
$$\mathbf{p} = \frac{E}{c^2}\,\mathbf{v},$$
that holds for massless particles as well. Squaring the expressions for energy and three-momentum and relating them gives the energy–momentum relation,
$$E^2 = c^2\,\mathbf{p} \cdot \mathbf{p} + m^2 c^4.$$
Substituting
in the equation for the norm gives the relativistic Hamilton–Jacobi equation,
It is also possible to derive the results from the Lagrangian directly. By definition,
which constitute the standard formulae for canonical momentum and energy of a closed (time-independent Lagrangian) system. With this approach it is less clear that the energy and momentum are parts of a four-vector.
The energy and the three-momentum are separately conserved quantities for isolated systems in the Lagrangian framework. Hence four-momentum is conserved as well. More on this below.
More pedestrian approaches include expected behavior in electrodynamics. In this approach, the starting point is application of Lorentz force law and Newton's second law in the rest frame of the particle. The transformation properties of the electromagnetic field tensor, including invariance of electric charge, are then used to transform to the lab frame, and the resulting expression (again Lorentz force law) is interpreted in the spirit of Newton's second law, leading to the correct expression for the relativistic three- momentum. The disadvantage, of course, is that it isn't immediately clear that the result applies to all particles, whether charged or not, and that it doesn't yield the complete four-vector.
It is also possible to avoid electromagnetism and use well tuned experiments of thought involving well-trained physicists throwing billiard balls, utilizing knowledge of the velocity addition formula and assuming conservation of momentum. This too gives only the three-vector part.
Conservation of four-momentum
As shown above, there are three conservation laws (not independent, the last two imply the first and vice versa):
The four-momentum (either covariant or contravariant) is conserved.
The total energy is conserved.
The 3-space momentum is conserved (not to be confused with the classic non-relativistic momentum ).
Note that the invariant mass of a system of particles may be more than the sum of the particles' rest masses, since kinetic energy in the system center-of-mass frame and potential energy from forces between the particles contribute to the invariant mass. As an example, two particles with four-momenta $(5~\mathrm{GeV}/c, 4~\mathrm{GeV}/c, 0, 0)$ and $(5~\mathrm{GeV}/c, -4~\mathrm{GeV}/c, 0, 0)$ each have (rest) mass 3 GeV/c² separately, but their total mass (the system mass) is 10 GeV/c². If these particles were to collide and stick, the mass of the composite object would be 10 GeV/c².
One practical application from particle physics of the conservation of the invariant mass involves combining the four-momenta $p_A$ and $p_B$ of two daughter particles produced in the decay of a heavier particle with four-momentum $p_C$ to find the mass of the heavier particle. Conservation of four-momentum gives $p_C^\mu = p_A^\mu + p_B^\mu$, while the mass $M$ of the heavier particle is given by $-p_C \cdot p_C = M^2 c^2$. By measuring the energies and three-momenta of the daughter particles, one can reconstruct the invariant mass of the two-particle system, which must equal $M$. This technique is used, e.g., in experimental searches for Z′ bosons at high-energy particle colliders, where the Z′ boson would show up as a bump in the invariant mass spectrum of electron–positron or muon–antimuon pairs.
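A minimal Python sketch of this reconstruction, reusing the $(-, +, +, +)$ convention and units with c = 1; the two four-momenta are those of the example in the previous paragraph.

```python
import numpy as np

ETA = np.diag([-1.0, 1.0, 1.0, 1.0])  # signature (-, +, +, +), units with c = 1

def invariant_mass(*four_momenta):
    """Invariant (system) mass of one or more four-momenta (E, px, py, pz)."""
    p = np.sum(four_momenta, axis=0)
    return np.sqrt(-(p @ ETA @ p))

# Two daughters, each with E = 5 GeV and |p| = 4 GeV, in opposite directions:
pA = np.array([5.0,  4.0, 0.0, 0.0])
pB = np.array([5.0, -4.0, 0.0, 0.0])

print(invariant_mass(pA))        # 3.0 GeV: rest mass of a single daughter
print(invariant_mass(pA, pB))    # 10.0 GeV: reconstructed mass of the parent
```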
If the mass of an object does not change, the Minkowski inner product of its four-momentum and the corresponding four-acceleration $A^\mu$ is simply zero. The four-acceleration is proportional to the proper time derivative of the four-momentum divided by the particle's mass, so
$$p^\mu A_\mu = \frac{1}{m}\, p^\mu \frac{dp_\mu}{d\tau} = \frac{1}{2m}\, \frac{d}{d\tau}\left(p \cdot p\right) = \frac{1}{2m}\, \frac{d}{d\tau}\left(-m^2 c^2\right) = 0.$$
Canonical momentum in the presence of an electromagnetic potential
For a charged particle of charge $q$, moving in an electromagnetic field given by the electromagnetic four-potential
$$A^\mu = \left(\frac{\phi}{c}, \mathbf{A}\right),$$
where $\phi$ is the scalar potential and $\mathbf{A}$ the vector potential, the components of the (not gauge-invariant) canonical momentum four-vector are
$$P^\mu = p^\mu + qA^\mu.$$
This, in turn, allows the potential energy from the charged particle in an electrostatic potential and the Lorentz force on the charged particle moving in a magnetic field to be incorporated in a compact way, in relativistic quantum mechanics.
Four-momentum in curved spacetime
In the case when there is a moving physical system with a continuous distribution of matter in curved spacetime, the primary expression for the four-momentum is a four-vector with a covariant index:
Four-momentum is expressed through the energy of physical system and relativistic momentum . At the same time, the four-momentum can be represented as the sum of two non-local four-vectors of integral type:
Four-vector is the generalized four-momentum associated with the action of fields on particles; four-vector is the four-momentum of the fields arising from the action of particles on the fields.
Energy and momentum , as well as components of four-vectors and can be calculated if the Lagrangian density of the system is given. The following formulas are obtained for the energy and momentum of the system:
Here is that part of the Lagrangian density that contains terms with four-currents; is the velocity of matter particles; is the time component of four-velocity of particles; is determinant of metric tensor; is the part of the Lagrangian associated with the Lagrangian density ; is velocity of a particle of matter with number .
| Physical sciences | Theory of relativity | Physics |
204682 | https://en.wikipedia.org/wiki/Translation%20%28geometry%29 | Translation (geometry) | In Euclidean geometry, a translation is a geometric transformation that moves every point of a figure, shape or space by the same distance in a given direction. A translation can also be interpreted as the addition of a constant vector to every point, or as shifting the origin of the coordinate system. In a Euclidean space, any translation is an isometry.
As a function
If $\mathbf{v}$ is a fixed vector, known as the translation vector, and $\mathbf{p}$ is the initial position of some object, then the translation function $T_{\mathbf{v}}$ will work as $T_{\mathbf{v}}(\mathbf{p}) = \mathbf{p} + \mathbf{v}$.
If $T_{\mathbf{v}}$ is a translation, then the image of a subset $A$ under the function $T_{\mathbf{v}}$ is the translate of $A$ by $T_{\mathbf{v}}$. The translate of $A$ by $T_{\mathbf{v}}$ is often written as $A + \mathbf{v}$.
Application in classical physics
In classical physics, translational motion is movement that changes the position of an object, as opposed to rotation. For example, according to Whittaker:
A translation is the operation changing the positions of all points $(x, y, z)$ of an object according to the formula
$$(x, y, z) \to (x + \Delta x, y + \Delta y, z + \Delta z),$$
where $(\Delta x, \Delta y, \Delta z)$ is the same vector for each point of the object. The translation vector $(\Delta x, \Delta y, \Delta z)$ common to all points of the object describes a particular type of displacement of the object, usually called a linear displacement to distinguish it from displacements involving rotation, called angular displacements.
When considering spacetime, a change of time coordinate is considered to be a translation.
As an operator
The translation operator $T_{\boldsymbol{\delta}}$ turns a function of the original position, $f(\mathbf{v})$, into a function of the final position, $f(\mathbf{v} + \boldsymbol{\delta})$. In other words, $T_{\boldsymbol{\delta}}$ is defined such that $T_{\boldsymbol{\delta}} f(\mathbf{v}) = f(\mathbf{v} + \boldsymbol{\delta})$. This operator is more abstract than a function, since $T_{\boldsymbol{\delta}}$ defines a relationship between two functions, rather than the underlying vectors themselves. The translation operator can act on many kinds of functions, such as when the translation operator acts on a wavefunction, which is studied in the field of quantum mechanics.
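In code, the translation operator can be modeled as a higher-order function that maps a function f to the shifted function; a minimal Python sketch (the names translate and f are illustrative):

```python
import numpy as np

def translate(delta):
    """Return the operator T_delta with (T_delta f)(v) = f(v + delta)."""
    delta = np.asarray(delta, dtype=float)
    return lambda f: (lambda v: f(np.asarray(v, dtype=float) + delta))

f = lambda v: v[0] ** 2 + v[1]      # an arbitrary function of position
Tf = translate([1.0, 2.0])(f)       # f translated by delta = (1, 2)

print(f([3.0, 4.0]))                # 13.0
print(Tf([2.0, 2.0]))               # 13.0: Tf samples f at the shifted point
```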
As a group
The set of all translations forms the translation group $\mathbb{T}$, which is isomorphic to the space itself, and a normal subgroup of the Euclidean group $E(n)$. The quotient group of $E(n)$ by $\mathbb{T}$ is isomorphic to the group of rigid motions which fix a particular origin point, the orthogonal group $O(n)$:
$$E(n) / \mathbb{T} \cong O(n).$$
Because translation is commutative, the translation group is abelian. There are an infinite number of possible translations, so the translation group is an infinite group.
In the theory of relativity, due to the treatment of space and time as a single spacetime, translations can also refer to changes in the time coordinate. For example, the Galilean group and the Poincaré group include translations with respect to time.
Lattice groups
One kind of subgroup of the three-dimensional translation group are the lattice groups, which are infinite groups, but unlike the translation groups, are finitely generated. That is, a finite generating set generates the entire group.
Matrix representation
A translation is an affine transformation with no fixed points. Matrix multiplications always have the origin as a fixed point. Nevertheless, there is a common workaround using homogeneous coordinates to represent a translation of a vector space with matrix multiplication: Write the 3-dimensional vector $\mathbf{w} = (w_x, w_y, w_z)$ using 4 homogeneous coordinates as $\mathbf{w} = (w_x, w_y, w_z, 1)$.
To translate an object by a vector $\mathbf{v}$, each homogeneous vector $\mathbf{p}$ (written in homogeneous coordinates) can be multiplied by this translation matrix:
$$T_{\mathbf{v}} = \begin{pmatrix} 1 & 0 & 0 & v_x \\ 0 & 1 & 0 & v_y \\ 0 & 0 & 1 & v_z \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
As shown below, the multiplication will give the expected result:
$$T_{\mathbf{v}} \mathbf{p} = \begin{pmatrix} 1 & 0 & 0 & v_x \\ 0 & 1 & 0 & v_y \\ 0 & 0 & 1 & v_z \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} p_x \\ p_y \\ p_z \\ 1 \end{pmatrix} = \begin{pmatrix} p_x + v_x \\ p_y + v_y \\ p_z + v_z \\ 1 \end{pmatrix} = \mathbf{p} + \mathbf{v}.$$
The inverse of a translation matrix can be obtained by reversing the direction of the vector:
$$T^{-1}_{\mathbf{v}} = T_{-\mathbf{v}}.$$
Similarly, the product of translation matrices is given by adding the vectors:
$$T_{\mathbf{u}} T_{\mathbf{v}} = T_{\mathbf{u} + \mathbf{v}}.$$
Because addition of vectors is commutative, multiplication of translation matrices is therefore also commutative (unlike multiplication of arbitrary matrices).
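These properties are easy to verify numerically; a short Python sketch using NumPy (the helper name translation_matrix is illustrative):

```python
import numpy as np

def translation_matrix(v):
    """4x4 homogeneous translation matrix T_v for a 3-vector v."""
    T = np.eye(4)
    T[:3, 3] = v
    return T

p = np.array([1.0, 2.0, 3.0, 1.0])          # the point (1, 2, 3) in homogeneous form
Tv = translation_matrix([10.0, 0.0, -5.0])

print(Tv @ p)                               # [11.  2. -2.  1.], i.e. p + v
# The inverse reverses the vector:
print(np.allclose(np.linalg.inv(Tv), translation_matrix([-10.0, 0.0, 5.0])))
# Composition adds the vectors (and is therefore commutative):
print(np.allclose(translation_matrix([1, 2, 3]) @ translation_matrix([4, 5, 6]),
                  translation_matrix([5, 7, 9])))
```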
Translation of axes
While geometric translation is often viewed as an active transformation that changes the position of a geometric object, a similar result can be achieved by a passive transformation that moves the coordinate system itself but leaves the object fixed. The passive version of an active geometric translation is known as a translation of axes.
Translational symmetry
An object that looks the same before and after translation is said to have translational symmetry. A common example is a periodic function, which is an eigenfunction of a translation operator.
Translations of a graph
The graph of a real function $f$, the set of points $(x, f(x))$, is often pictured in the real coordinate plane with $x$ as the horizontal coordinate and $y = f(x)$ as the vertical coordinate.
Starting from the graph of $f$, a horizontal translation means composing $f$ with a function $x \mapsto x - a$, for some constant number $a$, resulting in a graph consisting of points $(x, f(x - a))$. Each point $(x, y)$ of the original graph corresponds to the point $(x + a, y)$ in the new graph, which pictorially results in a horizontal shift.
A vertical translation means composing the function $y \mapsto y + b$ with $f$, for some constant $b$, resulting in a graph consisting of the points $(x, f(x) + b)$. Each point $(x, y)$ of the original graph corresponds to the point $(x, y + b)$ in the new graph, which pictorially results in a vertical shift.
For example, taking the quadratic function $f(x) = x^2$, whose graph is a parabola with vertex at $(0, 0)$, a horizontal translation 5 units to the right would be the new function $g(x) = f(x - 5) = (x - 5)^2$, whose vertex has coordinates $(5, 0)$. A vertical translation 3 units upward would be the new function $h(x) = f(x) + 3 = x^2 + 3$, whose vertex has coordinates $(0, 3)$.
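The quadratic example can be checked directly; a minimal Python sketch:

```python
f = lambda x: x ** 2        # parabola with vertex (0, 0)
g = lambda x: f(x - 5)      # horizontal translation: vertex moves to (5, 0)
h = lambda x: f(x) + 3      # vertical translation: vertex moves to (0, 3)

print(g(5))   # 0 -> the minimum of g sits at x = 5
print(h(0))   # 3 -> the minimum value of h is 3, attained at x = 0
```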
The antiderivatives of a function all differ from each other by a constant of integration and are therefore vertical translates of each other.
Applications
For describing vehicle dynamics (or movement of any rigid body), including ship dynamics and aircraft dynamics, it is common to use a mechanical model consisting of six degrees of freedom, which includes translations along three reference axes (as well as rotations about those three axes). These translations are often called surge, sway, and heave.
| Mathematics | Geometry: General | null |
204762 | https://en.wikipedia.org/wiki/Vaporization | Vaporization | Vaporization (or vapo(u)risation) of an element or compound is a phase transition from the liquid phase to vapor. There are two types of vaporization: evaporation and boiling. Evaporation is a surface phenomenon, whereas boiling is a bulk phenomenon (a phenomenon in which the whole object or substance is involved in the process).
Evaporation
Evaporation is a phase transition from the liquid phase to vapor (a state of substance below critical temperature) that occurs at temperatures below the boiling temperature at a given pressure. Evaporation occurs on the surface. Evaporation only occurs when the partial pressure of vapor of a substance is less than the equilibrium vapor pressure. For example, if vapor is continuously pumped off a solution, the partial pressure stays below the equilibrium value, evaporation continues, and the resulting evaporative cooling can eventually leave behind a cryogenic liquid.
Boiling
Boiling is also a phase transition from the liquid phase to gas phase, but boiling is the formation of vapor as bubbles of vapor below the surface of the liquid. Boiling occurs when the equilibrium vapor pressure of the substance is greater than or equal to the atmospheric pressure. The temperature at which boiling occurs is the boiling temperature, or boiling point. The boiling point varies with the pressure of the environment.
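As an illustration of this pressure dependence, the boiling point can be estimated from a vapor-pressure correlation such as the Antoine equation. The sketch below assumes commonly tabulated Antoine coefficients for water (pressure in mmHg, temperature in degrees Celsius, valid roughly from 1 to 100 degrees Celsius); these values are an assumption of the sketch, not data from this article:

import math

# Assumed Antoine coefficients for water: log10(P) = A - B / (C + T)
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_celsius(pressure_mmhg):
    # Invert the Antoine equation for the temperature at which the
    # equilibrium vapor pressure equals the ambient pressure.
    return B / (A - math.log10(pressure_mmhg)) - C

print(round(boiling_point_celsius(760), 1))  # ~100.0 at sea-level pressure
print(round(boiling_point_celsius(526), 1))  # ~90.0 at roughly 3,000 m altitude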
Sublimation
Sublimation is a direct phase transition from the solid phase to the gas phase, skipping the intermediate liquid phase.
Other uses of the term 'vaporization'
The term vaporization has also been used in a colloquial or hyperbolic way to refer to the physical destruction of an object that is exposed to intense heat or explosive force, where the object is actually blasted into small pieces rather than literally converted to gaseous form. Examples of this usage include the "vaporization" of the uninhabited Marshall Island of Elugelab in the 1952 Ivy Mike thermonuclear test. Many other examples can be found throughout the various MythBusters episodes that have involved explosives, chief among them being Cement Mix-Up, where they "vaporized" a cement truck with ANFO.
At the moment of a large enough meteor or comet impact, bolide detonation, a nuclear fission, thermonuclear fusion, or theoretical antimatter weapon detonation, a flux of so many gamma ray, x-ray, ultraviolet, visible light and heat photons strikes matter in such a brief amount of time (a great number of high-energy photons, many overlapping in the same physical space) that all molecules lose their atomic bonds and "fly apart". All atoms lose their electron shells and become positively charged ions, in turn emitting photons of slightly lower energy than they had absorbed. All such matter becomes a gas of nuclei and electrons which rises into the air due to the extremely high temperature or bonds to each other as it cools. The matter vaporized this way is immediately a plasma in a state of maximum entropy, and this state steadily decays over time through natural processes in the biosphere and the effects of physics at normal temperatures and pressures.
A similar process occurs during ultrashort pulse laser ablation, where the high flux of incoming electromagnetic radiation strips the target material's surface of electrons, leaving positively charged atoms which undergo a coulomb explosion.
| Physical sciences | Phase transitions | Physics |
204924 | https://en.wikipedia.org/wiki/Stone-curlew | Stone-curlew | The stone-curlews, also known as dikkops or thick-knees, consist of 10 species within the family Burhinidae, and are found throughout the tropical and temperate parts of the world, with two or more species occurring in some areas of Africa, Asia, and Australia. Despite the group being classified as waders, most species have a preference for arid or semiarid habitats.
Taxonomy
The family Burhinidae was introduced in 1912 for the stone-curlews by Australian ornithologist Gregory Mathews. The family contains three genera: Hesperoburhinus, Burhinus and Esacus. The name Burhinus combines the Ancient Greek bous meaning "ox" and rhis, rhinos meaning "nose" (or "bill").
Molecular phylogenetic studies have shown that the family Burhinidae is sister to a clade containing the sheathbills in the family Chionidae and the Magellanic plover in its own family Pluvianellidae. The stone-curlews are not closely related to the curlews, genus Numenius, that belong to the sandpiper family Scolopacidae.
Description
They are medium to large birds with strong black or yellow-black bills, large yellow eyes (which give them a reptilian appearance) and cryptic plumage. The names thick-knee and stone-curlew are both in common use. The term stone-curlew owes its origin to the broad similarities with true curlews. Thick-knee refers to the prominent joints in the long yellow or greenish legs and apparently originated with a name coined in 1776 for B. oedicnemus, the Eurasian stone-curlew.
The name in fact confuses the heel (ankle) with the knee: the prominent joint is the ankle.
Behaviour
They are largely nocturnal, particularly when singing their loud, wailing songs, which are reminiscent of true curlews. Their diet consists mainly of insects and other invertebrates. Larger species also take lizards and even small mammals. Most species are sedentary, but the Eurasian stone-curlew is a summer migrant in the temperate European part of its range, wintering in Africa.
Species
The earliest definitive stone-curlew is Genucrassum bransatensis from the Late Oligocene of France. Wilaru, described from the Late Oligocene to the Early Miocene of Australia, was originally classified as a stone-curlew, but was subsequently argued to be a member of the extinct anseriform family Presbyornithidae, instead. The living species are:
| Biology and health sciences | Charadriiformes | Animals |
204967 | https://en.wikipedia.org/wiki/Regolith | Regolith | Regolith () is a blanket of unconsolidated, loose, heterogeneous superficial deposits covering solid rock. It includes dust, broken rocks, and other related materials and is present on Earth, the Moon, Mars, some asteroids, and other terrestrial planets and moons.
Etymology
The term regolith combines two Greek words: rhegos (ῥῆγος), 'blanket', and lithos (λίθος), 'rock'. The American geologist George P. Merrill first defined the term in 1897.
Earth
Earth's regolith includes the following subdivisions and components:
soil or pedolith
alluvium and other transported cover, including that transported by aeolian, glacial, marine, and gravity flow processes.
"saprolith'", generally divided into the
upper saprolite: completely oxidised bedrock
lower saprolite: chemically reduced partially weathered rocks
saprock: fractured bedrock with weathering restricted to fracture margins
volcanic ash and lava flows that are interbedded with unconsolidated material
duricrust, formed by cementation of soils, saprolith and transported material like clays, silicates, iron oxides, oxyhydroxides, carbonates, sulfates and less common agents, into indurated layers resistant to weathering and erosion.
groundwater- and water-deposited salts.
biota and organic components derived from it.
Regolith can vary from being essentially absent to hundreds of metres in thickness. Its age can vary from instantaneous (for an ash fall or alluvium just deposited) to hundreds of millions of years old (regolith of Precambrian age occurs in parts of Australia, though this may have been buried and subsequently exhumed.)
Regolith on Earth originates from weathering and biological processes. The uppermost part of the regolith, which typically contains significant organic matter, is more conventionally referred to as soil. The presence of regolith is one of the important factors for most life, since few plants can grow on or within solid rock and animals would be unable to burrow or build shelter without loose material.
Regolith is also important to engineers constructing buildings, roads and other civil works. The mechanical properties of regolith vary considerably and need to be documented if the construction is to withstand the rigors of use.
Regolith may host mineral deposits, such as mineral sands, calcrete uranium, and lateritic nickel deposits. Understanding regolith properties, especially geochemical composition, is critical to geochemical and geophysical exploration for mineral deposits beneath it. The regolith is also an important source of construction material, including sand, gravel, crushed stone, lime, and gypsum.
The regolith is the zone through which aquifers are recharged and through which aquifer discharge occurs. Many aquifers, such as alluvial aquifers, occur entirely within regolith. The composition of the regolith can also strongly influence water composition through the presence of salts and acid-generating materials.
Moon
Regolith covers almost the entire lunar surface, bedrock protruding only on very steep-sided crater walls and the occasional lava channel. This regolith has formed over the last 4.6 billion years from the impact of large and small meteoroids, from the steady bombardment of micrometeoroids and from solar and galactic charged particles breaking down surface rocks. Regolith production by rock erosion can lead to fillet buildup around lunar rocks.
The impact of micrometeoroids, travelling at very high speeds, generates enough heat to melt or partially vaporize dust particles. This melting and refreezing welds particles together into glassy, jagged-edged agglutinates, reminiscent of tektites found on Earth.
The regolith is generally from 4 to 5 m thick in mare areas and from 10 to 15 m in the older highland regions. Below this true regolith is a region of blocky and fractured bedrock created by larger impacts, which is often referred to as the "megaregolith".
The density of regolith at the Apollo 15 landing site averages approximately 1.35 g/cm3 for the top 30 cm, and it is approximately 1.85 g/cm3 at a depth of 60 cm.
The term lunar soil is often used interchangeably with "lunar regolith" but typically refers to the finer fraction of regolith, that which is composed of grains one centimetre in diameter or less. Some have argued that the term "soil" is not correct in reference to the Moon because soil is defined as having organic content, whereas the Moon has none. However, standard usage among lunar scientists is to ignore that distinction. "Lunar dust" generally connotes even finer materials than lunar soil, the fraction which is less than 30 micrometers in diameter. The average chemical composition of regolith might be estimated from the relative concentration of elements in lunar soil.
The physical and optical properties of lunar regolith are altered through a process known as space weathering, which darkens the regolith over time, causing crater rays to fade and disappear.
During the early phases of the Apollo Moon landing program, Thomas Gold of Cornell University, a member of the President's Science Advisory Committee, raised a concern that the thick dust layer at the top of the regolith would not support the weight of the lunar module and that the module might sink beneath the surface. However, Joseph Veverka (also of Cornell) pointed out that Gold had miscalculated the depth of the overlying dust, which was only a couple of centimeters thick. Indeed, the regolith was found to be quite firm by the robotic Surveyor spacecraft that preceded Apollo, and during the Apollo landings the astronauts often found it necessary to use a hammer to drive a core sampling tool into it.
Mars
Mars is covered with vast expanses of sand and dust, and its surface is littered with rocks and boulders. The dust is occasionally picked up in vast planet-wide dust storms. Mars dust is very fine and enough remains suspended in the atmosphere to give the sky a reddish hue.
The sand is believed to move only slowly in the Martian winds due to the very low density of the atmosphere in the present epoch. In the past, liquid water flowing in gullies and river valleys may have shaped the Martian regolith. Mars researchers are studying whether groundwater sapping is shaping the Martian regolith in the present epoch and whether carbon dioxide hydrates exist on Mars and play a role. It is believed that large quantities of water and carbon dioxide ices remain frozen within the regolith in the equatorial parts of Mars and on its surface at higher latitudes.
Asteroids
Asteroids have regoliths developed by meteoroid impact. The final images taken by the NEAR Shoemaker spacecraft of the surface of Eros are the best images of the regolith of an asteroid. The recent Japanese Hayabusa mission also returned clear images of regolith on an asteroid so small it was thought that gravity was too low to develop and maintain a regolith. The asteroid 21 Lutetia has a layer of regolith near its north pole, which flows in landslides associated with variations in albedo.
Titan
Saturn's largest moon Titan is known to have extensive fields of dunes. However, the origin of the material forming the dunes is unknown; it could be small fragments of water ice eroded by flowing methane or particulate organic matter that formed in Titan's atmosphere and rained down on the surface. Scientists are beginning to call this loose icy material regolith because of the mechanical similarity with regolith on other bodies. However, traditionally (and etymologically), the term had been applied only when the loose layer was composed of mineral grains like quartz or plagioclase or rock fragments that were in turn composed of such minerals. Loose blankets of ice grains were not considered regolith because when they appear on Earth in the form of snow, they behave differently from regolith, the grains melting and fusing with only slight changes in pressure or temperature. However, Titan is so cold that ice behaves like rock. Thus, there is an ice-regolith complete with erosion and aeolian and/or sedimentary processes.
The Huygens probe used a penetrometer on landing to characterize the mechanical properties of the local regolith. The surface itself was reported to be a clay-like "material which might have a thin crust followed by a region of relative uniform consistency." Subsequent data analysis suggests that surface consistency readings were likely caused by Huygens displacing a large pebble as it landed and that the surface is better described as a 'sand' made of ice grains. The images taken after the probe's landing show a flat plain covered in pebbles. The pebbles, which may be made of water ice, are somewhat rounded, which may indicate the action of fluids on them.
| Physical sciences | Sedimentology | Earth science |
204994 | https://en.wikipedia.org/wiki/Wisteria | Wisteria | Wisteria is a genus of flowering plants in the legume family, Fabaceae (Leguminosae). The genus includes four species of woody twining vines that are native to China, Japan, Korea, Vietnam, southern Canada, the Eastern United States, and northern Iran. They were later introduced to France, Germany and various other countries in Europe. Some species are popular ornamental plants. The genus name is also used as the English name, and may then be spelt 'wistaria'. In some countries in Western and Central Europe, Wisteria is also known by a variant spelling of the genus in which species were formerly placed, Glycine. Examples include the French glycines, the German Glyzinie, and the Polish glicynia.
The aquatic flowering plant commonly called wisteria or 'water wisteria' is Hygrophila difformis, in the family Acanthaceae.
Description
Wisterias climb by twining their stems around any available support. W. floribunda (Japanese wisteria) twines clockwise when viewed from above, while W. sinensis (Chinese wisteria) twines counterclockwise. This is an aid in identifying the two most common species of wisteria. They can climb high above the ground and spread out laterally. The world's largest known wisteria is the Sierra Madre Wisteria in Sierra Madre, California, weighing 250 tons. Planted in 1894, it is of the 'Chinese lavender' variety.
The leaves are alternate, 15 to 35 cm long, pinnate, with 9 to 19 leaflets.
The flowers are borne in drooping racemes that vary in length from species to species. W. frutescens (American wisteria) has the shortest racemes. W. floribunda (Japanese wisteria) has the longest racemes, with some varieties and cultivars reaching exceptional lengths. The flowers come in a variety of colors, including white, lilac, purple, and pink, and some W. brachybotrys (Silky wisteria) and W. floribunda cultivars have particularly remarkable colors. The flowers are fragrant, and especially cultivars of W. brachybotrys, W. floribunda, and W. sinensis are noted for their sweet and musky scents. Flowering is in spring (just before or as the leaves open) in some Asian species, and in mid to late summer in the American species.
Taxonomy
The genus Wisteria was established by Thomas Nuttall in 1818. He based the genus on Wisteria frutescens, previously included in the genus Glycine. Nuttall stated that he named the genus in memory of the American physician and anatomist Caspar Wistar (1761–1818). Both men were living in Philadelphia at the time, where Wistar was a professor in the School of Medicine at the University of Pennsylvania. Questioned about the spelling later, Nuttall said it was for "euphony", but his biographer speculated that it may have something to do with Nuttall's friend Charles Jones Wister Sr., of Grumblethorpe, the grandson of the merchant John Wister. Various sources assert that the naming occurred in Philadelphia. It has been suggested that the Portuguese botanist and geologist José Francisco Corrêa da Serra, who lived in Philadelphia beginning in 1812 (four years before his appointment as ambassador of Portugal to the United States) and was a friend of Wistar, proposed the name "Wistaria" in his obituary of Wistar.
As the spelling is apparently deliberate, there is no justification for changing the genus name under the International Code of Nomenclature for algae, fungi, and plants.
Classification
The genus was previously placed in the tribe Millettieae. Molecular phylogenetic studies from 2000 onwards showed that Wisteria, along with other genera such as Callerya and Afgekia, were related and quite distinct from other members of the Millettieae. A more detailed study in 2019 reached the same conclusion, and moved Wisteria to the expanded tribe Wisterieae.
Species
Plants of the World Online accepts four species:
Ecology
Wisteria species are used as food by the larvae of some Lepidoptera species including the brown-tail moth.
Toxicity
The seeds are produced in pods similar to those of Laburnum, and, like the seeds of that genus, are poisonous. All parts of the plant contain a saponin called wisterin, which is toxic if ingested, and may cause dizziness, confusion, speech problems, nausea, vomiting, stomach pains, diarrhea and collapse. There is debate over whether the concentration outside of the seeds is sufficient to cause poisoning. Wisteria seeds have caused poisoning in children and pets of many countries, producing mild to severe gastroenteritis and other effects.
Cultivation
In North America, W. floribunda (Japanese wisteria) and W. sinensis (Chinese wisteria) are far more popular than other species for their abundance of flowers, clusters of large flowers, variety of flower colors, and fragrance. W. sinensis was brought to the United States for horticultural purposes in 1816, while W. floribunda was introduced around 1830. Because of their hardiness and tendency to escape cultivation, these non-native wisterias are considered invasive species in many parts of the U.S., especially the Southeast, due to their ability to overtake and choke out other native plant species.
W. floribunda (Japanese wisteria), which has the longest racemes of wisteria species, is decorative and has given rise to many cultivars that have won the prestigious Award of Garden Merit.
Wisteria, especially W. sinensis (Chinese wisteria), is very hardy and fast-growing. It can grow in fairly poor-quality soils, but prefers fertile, moist, well-drained soil. It thrives in full sun. It can be propagated via hardwood cutting, softwood cuttings, or seed. However, specimens grown from seed can take decades to bloom; for this reason, gardeners usually grow plants that have been started from rooted cuttings or grafted cultivars known to flower well.
Another reason for failure to bloom can be excessive fertilizer (particularly nitrogen). Wisteria has nitrogen fixing capability (provided by Rhizobia bacteria in root nodules), and thus mature plants may benefit from added potassium and phosphate, but not nitrogen. Finally, wisteria can be reluctant to bloom before it has reached maturity. Maturation may require only a few years, as in W. macrostachya (Kentucky wisteria), or nearly twenty, as in W. sinensis. Maturation can be forced by physically abusing the main trunk, root pruning, or drought stress.
Wisteria can grow into a mound when unsupported, but is at its best when allowed to clamber up a tree, pergola, wall, or other supporting structure. W. floribunda (Japanese wisteria) with longer racemes is the best choice to grow along a pergola. W. sinensis (Chinese wisteria) with shorter racemes is the best choice for growing along a wall. Whatever the case, the support must be very sturdy, because mature wisteria can become immensely strong with heavy wrist-thick trunks and stems. These can collapse latticework, crush thin wooden posts, and even strangle large trees. Wisteria allowed to grow on houses can cause damage to gutters, downspouts, and similar structures. Wisteria flowers develop in buds near the base of the previous year's growth, so pruning back side shoots to the basal few buds in early spring can enhance the visibility of the flowers. If it is desired to control the size of the plant, the side shoots can be shortened to between 20 and 40 cm long in midsummer, and shortened further in the fall. Once the plant is a few years old, a relatively compact, free-flowering form can be achieved by pruning off the new tendrils three times during the growing season in the summer months. The flowers of some varieties are edible, and can even be used to make wine. Others are said to be toxic. Careful identification by an expert is strongly recommended before consuming this or any wild plant.
In the United Kingdom, the national collection of wisteria is held by Chris Lane at the Witch Hazel Nursery in Newington, near Sittingbourne in Kent.
Art and symbolism
Wisteria and their racemes have been widely used in Japan throughout the centuries and were a popular symbol in family crests and heraldry. Wisteria is one of the five most commonly used motifs in Japanese family crests, and there are more than 150 types of wisteria crest. Because of its longevity and fertility, wisteria was considered an auspicious plant and was favored as a crest motif, and was adopted and popularized by the Fujiwara clan as their family crest.
One popular kabuki dance, known as Fuji Musume (or 'The Wisteria Maiden'), is the sole extant dance of a series of five personifying dances in which a maiden becomes the embodiment of the spirit of wisteria. In the West, wisterias have been used both realistically and stylistically in artistic works and industrial design, in building materials such as tile as well as in stained glass.
| Biology and health sciences | Fabales | Plants |
204996 | https://en.wikipedia.org/wiki/Medulla%20oblongata | Medulla oblongata | The medulla oblongata or simply medulla is a long stem-like structure which makes up the lower part of the brainstem. It is anterior and partially inferior to the cerebellum. It is a cone-shaped neuronal mass responsible for autonomic (involuntary) functions, ranging from vomiting to sneezing. The medulla contains the cardiovascular center, the respiratory center, vomiting and vasomotor centers, responsible for the autonomic functions of breathing, heart rate and blood pressure as well as the sleep–wake cycle. "Medulla" is from Latin, 'pith or marrow', and "oblongata" is from Latin, 'lengthened or elongated'.
During embryonic development, the medulla oblongata develops from the myelencephalon. The myelencephalon is a secondary brain vesicle which forms during the maturation of the rhombencephalon, also referred to as the hindbrain.
The bulb is an archaic term for the medulla oblongata. In modern clinical usage, the word bulbar (as in bulbar palsy) is retained for terms that relate to the medulla oblongata, particularly in reference to medical conditions. The word bulbar can refer to the nerves and tracts connected to the medulla such as the corticobulbar tract, and also by association to those muscles innervated, including those of the tongue, pharynx and larynx.
Anatomy
The medulla can be thought of as being in two parts:
an upper open part or superior part where the dorsal surface of the medulla is formed by the fourth ventricle.
a lower closed part or inferior part where the fourth ventricle has narrowed at the obex in the caudal medulla, and surrounds part of the central canal.
External surfaces
The anterior median fissure contains a fold of pia mater, and extends along the length of the medulla oblongata. It ends at the lower border of the pons in a small triangular area, termed the foramen cecum. On either side of this fissure are raised areas termed the medullary pyramids. The pyramids house the pyramidal tracts–the corticospinal tract, and the corticobulbar tract of the nervous system. At the caudal part of the medulla these tracts cross over in the decussation of the pyramids obscuring the fissure at this point. Some other fibers that originate from the anterior median fissure above the decussation of the pyramids and run laterally across the surface of the pons are known as the anterior external arcuate fibers.
The region between the anterolateral and posterolateral sulcus in the upper part of the medulla is marked by a pair of swellings known as olivary bodies (also called olives). They are caused by the largest nuclei of the olivary bodies, the inferior olivary nuclei.
The posterior part of the medulla between the posterior median sulcus and the posterolateral sulcus contains tracts that enter it from the posterior funiculus of the spinal cord. These are the gracile fasciculus, lying medially next to the midline, and the cuneate fasciculus, lying laterally. These fasciculi end in rounded elevations known as the gracile and the cuneate tubercles. They are caused by masses of gray matter known as the gracile nucleus and the cuneate nucleus. The soma (cell bodies) in these nuclei are the second-order neurons of the posterior column-medial lemniscus pathway, and their axons, called the internal arcuate fibers or fasciculi, decussate from one side of the medulla to the other to form the medial lemniscus.
Just above the tubercles, the posterior aspect of the medulla is occupied by a triangular fossa, which forms the lower part of the floor of the fourth ventricle. The fossa is bounded on either side by the inferior cerebellar peduncle, which connects the medulla to the cerebellum.
The lower part of the medulla, immediately lateral to the cuneate fasciculus, is marked by another longitudinal elevation known as the tuberculum cinereum. It is caused by an underlying collection of gray matter known as the spinal trigeminal nucleus. The gray matter of this nucleus is covered by a layer of nerve fibers that form the spinal tract of the trigeminal nerve. The base of the medulla is defined by the commissural fibers, crossing over from the ipsilateral side in the spinal cord to the contralateral side in the brain stem; below this is the spinal cord.
Blood supply
Blood to the medulla is supplied by a number of arteries.
Anterior spinal artery: This supplies the whole medial part of the medulla oblongata.
Posterior inferior cerebellar artery: This is a major branch of the vertebral artery, and supplies the posterolateral part of the medulla, where the main sensory tracts run and synapse. It also supplies part of the cerebellum.
Direct branches of the vertebral artery: The vertebral artery supplies an area between the anterior spinal and posterior inferior cerebellar arteries, including the solitary nucleus and other sensory nuclei and fibers.
Posterior spinal artery: This supplies the dorsal column of the closed medulla containing fasciculus gracilis, gracile nucleus, fasciculus cuneatus, and cuneate nucleus.
Development
The medulla oblongata forms in fetal development from the myelencephalon. The final differentiation of the medulla is seen at week 20 gestation.
Neuroblasts from the alar plate of the neural tube at this level will produce the sensory nuclei of the medulla. The basal plate neuroblasts will give rise to the motor nuclei.
Alar plate neuroblasts give rise to:
The solitary nucleus, which contains the general visceral afferent fibers for taste, as well as the special visceral afferent column.
The spinal trigeminal nerve nuclei which contains the general somatic afferent column.
The cochlear and vestibular nuclei, which contain the special somatic afferent column.
The inferior olivary nucleus, which relays to the cerebellum.
The dorsal column nuclei, which contain the gracile and cuneate nuclei.
Basal plate neuroblasts give rise to:
The hypoglossal nucleus, which contains general somatic efferent fibers.
The nucleus ambiguus, which forms the special visceral efferent fibers.
The dorsal nucleus of vagus nerve and the inferior salivatory nucleus, both of which form the general visceral efferent fibers.
Function
The medulla oblongata connects the higher levels of the brain to the spinal cord, and is responsible for several functions of the autonomic nervous system, which include:
The control of ventilation via signals from the carotid and aortic bodies. Respiration is regulated by groups of chemoreceptors. These sensors detect changes in the acidity of the blood; if, for example, the blood becomes too acidic, the medulla oblongata sends electrical signals to the intercostal muscles and the diaphragm (via the phrenic nerves) to increase their contraction rate and increase oxygenation of the blood. The ventral respiratory group and the dorsal respiratory group are neurons involved in this regulation. The pre-Bötzinger complex is a cluster of interneurons involved in the respiratory function of the medulla.
Cardiovascular center – sympathetic, parasympathetic nervous system
Vasomotor center – baroreceptors
Reflex centers of vomiting, coughing, sneezing and swallowing. These reflexes which include the pharyngeal reflex, the swallowing reflex (also known as the palatal reflex), and the masseter reflex can be termed bulbar reflexes.
Clinical significance
A blood vessel blockage (such as in a stroke) will injure the pyramidal tract, medial lemniscus, and the hypoglossal nucleus. This causes a syndrome called medial medullary syndrome.
Lateral medullary syndrome can be caused by the blockage of either the posterior inferior cerebellar artery or of the vertebral arteries.
Progressive bulbar palsy (PBP) is a disease that attacks the nerves supplying the bulbar muscles. Infantile progressive bulbar palsy is progressive bulbar palsy in children.
Other animals
Both lampreys and hagfish possess a fully developed medulla oblongata. Since these are both very similar to early agnathans, it has been suggested that the medulla evolved in these early fish, approximately 505 million years ago. The status of the medulla as part of the primordial reptilian brain is confirmed by its disproportionate size in modern reptiles such as the crocodile, alligator, and monitor lizard.
Additional images
| Biology and health sciences | Nervous system | Biology |
205012 | https://en.wikipedia.org/wiki/Sewerage | Sewerage | Sewerage (or sewage system) is the infrastructure that conveys sewage or surface runoff (stormwater, meltwater, rainwater) using sewers. It encompasses components such as receiving drains, manholes, pumping stations, storm overflows, and screening chambers of the combined sewer or sanitary sewer. Sewerage ends at the entry to a sewage treatment plant or at the point of discharge into the environment. It is the system of pipes, chambers, manholes or inspection chambers, etc. that conveys the sewage or storm water.
In many cities, sewage (municipal wastewater or municipal sewage) is carried together with stormwater, in a combined sewer system, to a sewage treatment plant. In some urban areas, sewage is carried separately in sanitary sewers and runoff from streets is carried in storm drains. Access to these systems, for maintenance purposes, is typically through a manhole. During high precipitation periods a sewer system may experience a combined sewer overflow event or a sanitary sewer overflow event, which forces untreated sewage to flow directly to receiving waters. This can pose a serious threat to public health and the surrounding environment.
The system of sewers is called sewerage or sewerage system in British English and sewage system or sewer system in American English.
History
It was probably the need to get rid of foul smells rather than an understanding of the health hazards of human waste that led to the first proper sewage systems. Most settlements grew next to natural waterways into which waste from latrines was readily channeled, but the emergence of major cities exposed the inadequacy of this approach. Early civilizations like the Babylonians dug cesspits below floor level in their houses and created crude drainage systems for removing storm water. But it was not until 2000 BC in the Indus valley civilization that networks of precisely made brick-lined sewage drains were constructed along the streets to convey waste from homes. Toilets in homes on the street side were connected directly to these street sewers and were flushed manually with clean water. Centuries later, major cities such as Rome and Constantinople built increasingly complex networked sewer systems, some of which are still in use. Only after such sewer systems were built did people come to appreciate the reduction in health hazards.
Components and types
The main part of such a system is made up of large pipes (i.e. the sewers, or "sanitary sewers") that convey the sewage from the point of production to the point of treatment or discharge.
Types of sanitary sewer systems that all usually are gravity sewers include:
Combined sewer
Simplified sewerage
Storm drain
Sanitary sewers not relying solely on gravity include:
Vacuum sewer
Effluent sewer
Where a sewerage system has not been installed, sewage may be collected from homes by pipes into septic tanks or cesspits, where it may be treated or collected in vehicles and taken for treatment or disposal (a process known as fecal sludge management).
Maintenance and rehabilitation
Sewer systems are subject to severe stresses, which may result in premature deterioration. These include root intrusion, joint displacement, cracks, and hole formations that lead to a significant volume of leakage, with an overall risk for the environment and public health. For example, it is estimated that 500 million m3 of contaminated water per year can leak into soil and groundwater in Germany. The rehabilitation and replacement of damaged sewers is very costly. Annual rehabilitation costs for Los Angeles County are about €400 million, and in Germany, these costs are estimated to be €100 million.
Hydrogen sulfide (H2S) is indirectly responsible for biogenic sulfide corrosion, and consequently sewers need rehabilitation work. Various repair options are available to owners over a large range of costs and potential durability. One option is the application of a cementitious material based on calcium aluminate cement, after a cleaning of the corroded structure to remove loose material and contaminants in order to expose a sound, rough and clean substrate. Depending on the concrete condition and contamination, the cleaning can range from simple high-pressure water jet cleaning (200 bar) up to full hydro-demolition (2,000 bar).
One method to ensure that sound concrete is exposed is to verify that the surface pH is greater than 10.
As for any concrete repair, the state-of-the-art rules must be followed. After this cleaning step, the cementitious material is applied to the saturated-surface-dry substrate using either:
Low pressure wet spray: this method is the most common because it does not produce dust and virtually no material is lost by rebound. It utilizes a classical facade rotor pump, easily available on the market. The main drawback is the limited pumping distance, which cannot exceed 75 meters.
Spinning head wet spray: this method is similar to the first, but the manual spraying is replaced by a spinning head that projects the mortar onto the repaired surface. This method is fast and especially suited to cylindrical chambers such as manholes. When a structure is so severely corroded that human entry is a risk, spinning head application permits unmanned consolidation of the manhole.
High pressure dry spray: this method, also called "shotcrete" or "gunite", allows a faster rate of rehabilitation and a thicker application in a single pass. The main interest of dry shotcrete is the capacity to pump the mortar over a long distance, which is needed when the access points are distant. Perhaps the longest dry shotcrete distance was achieved at a job site in Australia in 2014, where 100% calcium aluminate mortar was transported by air over 800 meters before being sprayed. The main drawbacks of dry shotcrete are the generation of dust and rebound; these can be limited and controlled with appropriate means (pre-moisture ring, adapted aggregate grading, experienced nozzleman, water mist cut-off walls, etc.).
Challenges
Water table
Sewer system infrastructure often lowers the water table, especially in densely populated areas where rainwater (from house roofs) is piped directly into the system rather than being allowed to be absorbed by the soil. In certain areas this has resulted in a significant lowering of the water table; in Belgium, for example, the water table has dropped by 100 meters as a result. The freshwater that is accumulated by the system is then piped to the sea. In areas where this is a concern, vacuum sewers may be used instead, due to the shallow excavation that is possible for them.
Lack of infrastructure
In many low-income countries, sewage may in some cases drain directly into receiving water bodies without the existence of sewerage systems. This can cause water pollution. Pathogens can cause a variety of illnesses. Some chemicals pose risks even at very low concentrations and can remain a threat for long periods of time because of bioaccumulation in animal or human tissue.
Regulations
In many European countries, citizens are obliged to connect their home sanitation to the national sewerage where possible. This has resulted in large percentages of the population being connected. For example, the Netherlands have 99% of the population connected to the system, and 1% has an individual sewage disposal system or treatment system, e.g., septic tank. Others have slightly lower (although still substantial) percentages; e.g., 96% for Germany.
Trends
Current approaches to sewage management may include handling surface runoff separately from sewage, handling greywater separately from blackwater (flush toilets), and coping better with abnormal events (such as peak stormwater volumes from extreme weather).
| Technology | Food, water and health | null |
205140 | https://en.wikipedia.org/wiki/Sandpiper | Sandpiper | Scolopacidae is a large family of shorebirds, or waders, which mainly includes many species known as sandpipers, but also others such as woodcocks, curlews and snipes. Most of these species eat small invertebrates picked out of the mud or soil. Different lengths of bills enable multiple species to feed in the same habitat, particularly on the coast, without direct competition for food.
Sandpipers have long bodies and legs, and narrow wings. Most species have a narrow bill, but the form and length are variable. They are small to medium-sized birds. The bills are sensitive, allowing the birds to feel the mud and sand as they probe for food. They generally have dull plumage, with cryptic brown, grey, or streaked patterns, although some display brighter colours during the breeding season.
Most species nest in open areas and defend their territories with aerial displays. The nest itself is a simple scrape in the ground, in which the bird typically lays three or four eggs. The young of most species are precocial.
Taxonomy
The family Scolopacidae was introduced (as Scolopacea) by the French polymath Constantine Samuel Rafinesque in 1815. The family contains 98 extant or recently extinct species divided into 15 genera. For more details, see the article List of sandpiper species.
The following genus-level cladogram of the Scolopacidae is based on a study by David Černý and Rossy Natale that was published in 2022.
Evolution
The early fossil record is scant for a group that was probably present at the non-avian dinosaurs' extinction. "Totanus" teruelensis (Late Miocene of Los Mansuetos, Spain) is sometimes considered a scolopacid – maybe a shank – but may well be a larid; little is known of it.
Paractitis has been named from the Early Oligocene of Saskatchewan (Canada), while Mirolia is known from the Middle Miocene at Deiningen in the Nördlinger Ries (Germany). Most living genera would seem to have evolved throughout the Oligocene to Miocene with the waders perhaps a bit later; see the genus accounts for the fossil record.
In addition there are some indeterminable remains that might belong to extant genera or their extinct relatives:
Scolopacidae gen. et sp. indet. (Middle Miocene of Františkovy Lázně, Czech Republic – Late Miocene of Kohfidisch, Austria)
Scolopacidae gen. et sp. indet. (Edson Early Pliocene of Sherman County, USA)
Description
The sandpipers exhibit considerable range in size and appearance, the wide range of body forms reflecting a wide range of ecological niches. Sandpipers range in size from the least sandpiper, among the smallest of waders, to the Far Eastern curlew and the Eurasian curlew, the largest members of the family. Within species there is considerable variation in patterns of sexual dimorphism. Males are larger than females in ruffs and several sandpipers, but are smaller than females in the knots, curlews, phalaropes and godwits. The sexes are similarly sized in the snipes, woodcock and tringine sandpipers. Compared to the other large family of wading birds, the plovers (Charadriidae), they tend to have smaller eyes, more slender heads, and longer thinner bills. Some are quite long-legged, and most species have three forward pointing toes with a smaller hind toe (the exception is the sanderling, which lacks a hind toe).
Sandpipers are more geared towards tactile foraging methods than the plovers, which favour more visual foraging methods, and this is reflected in the high density of tactile receptors in the tips of their bills. These receptors are housed in a slight horny swelling at the tip of the bill (except for the surfbird and the two turnstones). Bill shape is highly variable within the family, reflecting differences in feeding ecology. Bill length relative to head length varies from three times the length of the head in the long-billed curlew to just under half the head length in the Tuamotu sandpiper. Bills may be straight, slightly upcurled or strongly downcurved. Like all birds, the bills of sandpipers are capable of cranial kinesis, literally being able to move the bones of the skull (other than the obvious movement of the lower jaw) and specifically bending the upper jaw without opening the entire jaw, an act known as rhynchokinesis. It has been hypothesized this helps when probing by allowing the bill to be partly opened with less force and improving manipulation of prey items in the substrate. Rhynchokinesis is also used by sandpipers feeding on prey in water to catch and manipulate prey.
Distribution, habitat, and movements
The sandpipers have a cosmopolitan distribution, occurring across most of the world's land surfaces except for Antarctica and the driest deserts. A majority of the family breed at moderate to high latitudes in the Northern Hemisphere, in fact accounting for the most northerly breeding birds in the world. Only a few species breed in tropical regions, ten of which are snipes and woodcocks and the remaining species being the unusual Tuamotu sandpiper, which breeds in French Polynesia (although prior to the arrival of humans in the Pacific there were several other closely related species of Polynesian sandpiper).
Diet and feeding
There are broadly four feeding styles employed by the sandpipers, although many species are flexible and may use more than one style. The first is pecking with occasional probing, usually done by species in drier habitats that do not have soft soils or mud. The second, and most frequent, method employed is probing soft soils, muds and sands for prey. The third, used by Tringa shanks, involves running in shallow water with the bill under the water chasing fish, a method that uses sight as well as tactile senses. The final method, employed by the phalaropes and some Calidris sandpipers, involves pecking at the water for small prey. A few species of scolopacids are omnivorous to some extent, taking seeds and shoots as well as invertebrates.
Breeding
Many sandpipers form monogamous pairs, but some have female-only parental care, some male-only parental care, some sequential polyandry, and others compete for mates on a lek. Sandpipers lay three or four eggs into the nest, which is usually a vague depression or scrape in the open ground, scarcely lined with soft vegetation. In species where both parents incubate the eggs, females and males share their incubation duties in various ways both within and between species. In some pairs, parents exchange on the nest in the morning and in the evening so that their incubation rhythm follows a 24-hour day; in others, each sex may sit on the nest continuously for up to 24 hours before being relieved by its partner. In species where only a single parent incubates the eggs, during the night the parent sits on the eggs nearly continuously and then during the warmest part of the day leaves the nest for short feeding bouts. Chicks hatch after about three weeks of incubation and are able to walk and forage within a few hours of hatching. A single parent or both parents guide and brood the chicks.
Gallery
| Biology and health sciences | Charadriiformes | null |
205156 | https://en.wikipedia.org/wiki/Dysgeusia | Dysgeusia | Dysgeusia, also known as parageusia, is a distortion of the sense of taste. Dysgeusia is also often associated with ageusia, which is the complete lack of taste, and hypogeusia, which is a decrease in taste sensitivity. An alteration in taste or smell may be a secondary process in various disease states, or it may be the primary symptom. The distortion in the sense of taste is the only symptom, and diagnosis is usually complicated since the sense of taste is tied together with other sensory systems. Common causes of dysgeusia include chemotherapy, asthma treatment with albuterol, and zinc deficiency. Liver disease, hypothyroidism, and rarely, certain types of seizures can also lead to dysgeusia. Different drugs can also be responsible for altering taste and resulting in dysgeusia. Due to the variety of causes of dysgeusia, there are many possible treatments that are effective in alleviating or terminating the symptoms. These include artificial saliva, pilocarpine, zinc supplementation, alterations in drug therapy, and alpha lipoic acid.
Signs and symptoms
The alterations in the sense of taste, usually a metallic taste, and sometimes smell are the only symptoms.
Causes
Chemotherapy
A major cause of dysgeusia is chemotherapy for cancer. Chemotherapy often induces damage to the oral cavity, resulting in oral mucositis, oral infection, and salivary gland dysfunction. Oral mucositis consists of inflammation of the mouth, along with sores and ulcers in the tissues. Healthy individuals normally have a diverse range of microbial organisms residing in their oral cavities; however, chemotherapy can permit these typically non-pathogenic agents to cause serious infection, which may result in a decrease in saliva. In addition, patients who undergo radiation therapy also lose salivary tissues. Saliva is an important component of the taste mechanism. Saliva both interacts with and protects the taste receptors in the mouth. Saliva mediates sour and sweet tastes through bicarbonate ions and glutamate, respectively. The salt taste is induced when sodium chloride levels surpass the concentration in the saliva. It has been reported that 50% of chemotherapy patients have had either dysgeusia or another form of taste impairment. Examples of chemotherapy treatments that can lead to dysgeusia are cyclophosphamide, cisplatin, vismodegib, and etoposide. The exact mechanism of chemotherapy-induced dysgeusia is unknown.
Taste buds
Distortions in the taste buds may give rise to dysgeusia. In a study conducted by Masahide Yasuda and Hitoshi Tomita from Nihon University of Japan, it has been observed that patients with this taste disorder have fewer microvilli than normal. In addition, the nucleus and cytoplasm of the taste bud cells have been reduced. Based on their findings, dysgeusia results from loss of microvilli and the reduction of Type III intracellular vesicles, all of which could potentially interfere with the gustatory pathway. Radiation to head and neck also results in direct destruction of taste buds, apart from effects of altered salivary output.
Zinc deficiency
Another primary cause of dysgeusia is zinc deficiency. While the exact role of zinc in dysgeusia is unknown, it has been cited that zinc is partly responsible for the repair and production of taste buds. Zinc somehow directly or indirectly interacts with carbonic anhydrase VI, influencing the concentration of gustin, which is linked to the production of taste buds. It has also been reported that patients treated with zinc experience an elevation in calcium concentration in the saliva. In order to work properly, taste buds rely on calcium receptors. Zinc "is an important cofactor for alkaline phosphatase, the most abundant enzyme in taste bud membranes; it is also a component of a parotid salivary protein important to the development and maintenance of normal taste buds".
Taste modifiers
Miraculin: found in miracle berries; sweetens non-sweet foods and beverages.
Gymnema sylvestre: blocks the ability to taste sweetness.
Drugs
There are also a wide variety of drugs that can trigger dysgeusia, including zopiclone, H1-antihistamines, such as azelastine and emedastine. Approximately 250 drugs affect taste, including Paxlovid, a drug used to treat COVID-19. Some describe so-called "Paxlovid mouth" as like a "mouthful of dirty pennies and rotten soymilk", according to the Wall Street Journal.
The sodium channels linked to taste receptors can be inhibited by amiloride, and the creation of new taste buds and saliva can be impeded by antiproliferative drugs. Saliva can have traces of the drug, giving rise to a metallic flavor in the mouth; examples include lithium carbonate and tetracyclines. Drugs containing sulfhydryl groups, including penicillamine and captopril, may react with zinc and cause deficiency. Metronidazole and chlorhexidine have been found to interact with metal ions that associate with the cell membrane. Drugs that act by blocking the renin–angiotensin–aldosterone system, for example by antagonizing the angiotensin II receptor (as eprosartan does), have been linked to dysgeusia. There are also a few case reports claiming that calcium channel blockers such as amlodipine cause dysgeusia by blocking calcium-sensitive taste buds.
Pregnancy
Changes in hormone levels during pregnancy, such as estrogen, can affect the sense of taste. A study found that 93 percent of pregnant women reported some change in taste during pregnancy.
Miscellaneous causes
Xerostomia, also known as dry mouth syndrome, can precipitate dysgeusia because normal salivary flow and concentration are necessary for taste. Injury to the glossopharyngeal nerve can result in dysgeusia. In addition, damage done to the pons, thalamus, and midbrain, all of which compose the gustatory pathway, can be potential factors. In a case study, 22% of patients who were experiencing a bladder obstruction were also experiencing dysgeusia. Dysgeusia was eliminated in 100% of these patients once the obstruction was removed. Although it is uncertain what the relationship between bladder relief and dysgeusia entails, it has been observed that the areas responsible for the urinary system and for taste in the pons and cerebral cortex are in close proximity.
Dysgeusia can be a symptom of head and neck cancer, in which case it often presents together with dry mouth.
Dysgeusia often occurs for unknown reasons. A wide range of miscellaneous factors may contribute to this taste disorder, such as gastric reflux, lead poisoning, and diabetes mellitus. A minority of pine nuts can apparently cause taste disturbances, for reasons which are not entirely proven. Certain pesticides can have damaging effects on the taste buds and nerves in the mouth. These pesticides include organochloride compounds and carbamate pesticides. Damage to the peripheral nerves, along with injury to the chorda tympani branch of the facial nerve, also causes dysgeusia. Dysgeusia is also a surgical risk of laryngoscopy and tonsillectomy. Patients with burning mouth syndrome, primarily menopausal women, often have dysgeusia as well.
Normal function
The sense of taste is based on the detection of chemicals by specialized taste cells in the mouth. The mouth, throat, larynx, and esophagus all have taste buds, which are replaced every ten days. Each taste bud contains receptor cells. Afferent nerves make contact with the receptor cells at the base of the taste bud. A single taste bud is innervated by several afferent nerves, while a single efferent fiber innervates several taste buds. Fungiform papillae are present on the anterior portion of the tongue while circumvallate papillae and foliate papillae are found on the posterior portion of the tongue. The salivary glands are responsible for keeping the taste buds moist with saliva.
A single taste bud is composed of four types of cells, and each taste bud has between 30 and 80 cells. Type I cells are thinly shaped, usually in the periphery of other cells. They also contain high amounts of chromatin. Type II cells have prominent nuclei and nucleoli with much less chromatin than Type I cells. Type III cells have multiple mitochondria and large vesicles. Type I, II, and III cells also contain synapses. Type IV cells are normally rooted at the posterior end of the taste bud. Every cell in the taste bud forms microvilli at the ends.
Diagnosis
In general, gustatory disorders are challenging to diagnose and evaluate. Because gustatory functions are tied to the sense of smell, the somatosensory system, and the perception of pain (such as in tasting spicy foods), it is difficult to examine sensations mediated through an individual system. In addition, gustatory dysfunction is rare when compared to olfactory disorders.
Diagnosis of dysgeusia begins with the patient being questioned about salivation, swallowing, chewing, oral pain, previous ear infections (possibly indicated by hearing or balance problems), oral hygiene, and stomach problems. The initial history assessment also considers the possibility of accompanying diseases such as diabetes mellitus, hypothyroidism, or cancer. A clinical examination is conducted and includes an inspection of the tongue and the oral cavity. Furthermore, the ear canal is inspected, as lesions of the chorda tympani have a predilection for this site.
Gustatory testing
In order to further classify the extent of dysgeusia and clinically measure the sense of taste, gustatory testing may be performed. Gustatory testing is performed either as a whole-mouth procedure or as a regional test. In both techniques, natural or electrical stimuli can be used. In regional testing, 20 to 50 μL of liquid stimulus is presented to the anterior and posterior tongue using a pipette, soaked filter-paper disks, or cotton swabs. In whole mouth testing, small quantities (2-10 mL) of solution are administered, and the patient is asked to swish the solution around in the mouth.
Threshold tests for sucrose (sweet), citric acid (sour), sodium chloride (salty), and quinine or caffeine (bitter) are frequently performed with natural stimuli. One of the most frequently used techniques is the "three-drop test". In this test, three drops of liquid are presented to the subject. One of the drops is of the taste stimulus, and the other two drops are pure water. Threshold is defined as the concentration at which the patient identifies the taste correctly three times in a row.
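As an illustration only, the following Python sketch simulates one plausible reading of the three-drop procedure: concentrations are presented in ascending order, and the threshold is taken to be the lowest concentration identified correctly three times in a row. The response model, function names, and concentration values are invented for the example and are not part of any clinical protocol.

```python
import random

def simulated_response(concentration, detection_threshold=0.01):
    """Toy response model: above a personal detection threshold the subject
    always picks the stimulus drop; below it, they guess among three drops."""
    if concentration >= detection_threshold:
        return True
    return random.random() < 1 / 3  # chance success when guessing

def three_drop_threshold(concentrations, runs_required=3):
    """Return the lowest concentration identified correctly on
    runs_required consecutive presentations, or None if none qualifies."""
    for c in sorted(concentrations):
        if all(simulated_response(c) for _ in range(runs_required)):
            return c
    return None

# Ascending sucrose concentrations (illustrative values, arbitrary units)
series = [0.0025, 0.005, 0.01, 0.02, 0.04]
print("Estimated threshold:", three_drop_threshold(series))
```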
Suprathreshold tests, which provide intensities of taste stimuli above threshold levels, are used to assess the patient's ability to differentiate between different intensities of taste and to estimate the magnitude of suprathreshold loss of taste. From these tests, ratings of pleasantness can be obtained using either the direct scaling or magnitude matching method and may be of value in the diagnosis of dysgeusia. Direct scaling tests show the ability to discriminate among different intensities of stimuli and whether a stimulus of one quality (sweet) is stronger or weaker than a stimulus of another quality (sour). Direct scaling cannot be used to determine if a taste stimulus is being perceived at abnormal levels. In this case, magnitude matching is used, in which a patient is asked to rate the intensities of taste stimuli and stimuli of another sensory system, such as the loudness of a tone, on a similar scale. For example, the Connecticut Chemosensory Clinical Research Center asks patients to rate the intensities of NaCl, sucrose, citric acid and quinine-HCl stimuli, and the loudness of 1000 Hz tones.
Other tests include identification or discrimination of common taste substances. Topical anesthesia of the tongue has been reported to be of use in the diagnosis of dysgeusia as well, since it has been shown to relieve the symptoms of dysgeusia temporarily. In addition to techniques based on the administration of chemicals to the tongue, electrogustometry is frequently used. It is based on the induction of gustatory sensations by means of an anodal electrical direct current. Patients usually report sour or metallic sensations similar to those associated with touching both poles of a live battery to the tongue. Although electrogustometry is widely used, there seems to be a poor correlation between electrically and chemically induced sensations.
Diagnostic tools
Certain diagnostic tools can also be used to help determine the extent of dysgeusia. Electrophysiological tests and simple reflex tests may be applied to identify abnormalities in the nerve-to-brainstem pathways. For example, the blink reflex may be used to evaluate the integrity of the trigeminal nerve–pontine brainstem–facial nerve pathway, which may play a role in gustatory function.
Structural imaging is routinely used to investigate lesions in the taste pathway. Magnetic resonance imaging allows direct visualization of the cranial nerves. Furthermore, it provides significant information about the type and cause of a lesion. Analysis of mucosal blood flow in the oral cavity in combination with the assessment of autonomous cardiovascular factors appears to be useful in the diagnosis of autonomic nervous system disorders in burning mouth syndrome and in patients with inborn disorders, both of which are associated with gustatory dysfunction. Cell cultures may also be used.
In addition, the analysis of saliva should be performed, as it constitutes the environment of taste receptors, including transport of tastes to the receptor and protection of the taste receptor. Typical clinical investigations involve sialometry and sialochemistry. Studies have shown that electron micrographs of taste receptors obtained from saliva samples indicate pathological changes in the taste buds of patients with dysgeusia and other gustatory disorders.
Treatments
Artificial saliva and pilocarpine
Because medications have been linked to approximately 22% to 28% of all cases of dysgeusia, researching a treatment for this particular cause has been important. Xerostomia, or a decrease in saliva flow, can be a side effect of many drugs, which, in turn, can lead to the development of taste disturbances such as dysgeusia. Patients can lessen the effects of xerostomia with breath mints, sugarless gum, or lozenges; or physicians can increase saliva flow with artificial saliva or oral pilocarpine. Artificial saliva mimics the characteristics of natural saliva by lubricating and protecting the mouth, but does not provide any digestive or enzymatic benefits. Pilocarpine is a cholinergic drug, meaning it has the same effects as the neurotransmitter acetylcholine. Acetylcholine has the function of stimulating the salivary glands to actively produce saliva. The increase in saliva flow is effective in improving the movement of tastants to the taste buds.
Zinc deficiency
Zinc supplementation
Approximately one half of drug-related taste distortions are caused by a zinc deficiency. Many medications are known to chelate, or bind, zinc, preventing the element from functioning properly. Due to the causal relationship of insufficient zinc levels to taste disorders, research has been conducted to test the efficacy of zinc supplementation as a possible treatment for dysgeusia. In a randomized clinical trial, fifty patients with idiopathic dysgeusia were given either zinc or a lactose placebo. The patients prescribed the zinc reported experiencing improved taste function and less severe symptoms compared to the control group, suggesting that zinc may be a beneficial treatment. The efficacy of zinc, however, has been ambiguous in the past. In a second study, 94% of patients who were provided with zinc supplementation did not experience any improvement in their condition. This ambiguity is most likely due to small sample sizes and the wide range of causes of dysgeusia. A recommended daily oral dose of 25–100 mg, as zinc gluconate, appears to be an effective treatment for taste dysfunction provided that there are low levels of zinc in the blood serum. There is not a sufficient amount of evidence to determine whether or not zinc supplementation is able to treat dysgeusia when low zinc concentrations are not detected in the blood.
A Cochrane Review in 2017 assessed the effects of different interventions for the management of taste disturbances. There was very low-quality evidence to support the role of zinc supplementation in the improvement of taste acuity and taste discrimination in patients with zinc deficiency or idiopathic taste disorders. Further research is required to improve the quality of evidence for zinc supplementation as an effective intervention for the management of dysgeusia.
Zinc infusion in chemotherapy
It has been reported that approximately 68% of cancer patients undergoing chemotherapy experience disturbances in sensory perception such as dysgeusia. In a pilot study involving twelve lung cancer patients, chemotherapy drugs were infused with zinc in order to test its potential as a treatment. The results indicated that, after two weeks, no taste disturbances were reported by the patients who received the zinc-supplemented treatment while most of the patients in the control group who did not receive the zinc reported taste alterations. A multi-institutional study involving a larger sample size of 169 patients, however, indicated that zinc-infused chemotherapy did not have an effect on the development of taste disorders in cancer patients. An excess amount of zinc in the body can have negative effects on the immune system, and physicians must use caution when administering zinc to immunocompromised cancer patients. Because taste disorders can have detrimental effects on a patient's quality of life, more research needs to be conducted concerning possible treatments such as zinc supplementation.
Altering drug therapy
The effects of drug-related dysgeusia can often be reversed by stopping the patient's regimen of the taste-altering medication. In one case, a forty-eight-year-old woman who had hypertension was being treated with valsartan. Because this drug failed to treat her condition, she began taking a regimen of eprosartan, an angiotensin II receptor antagonist. Within three weeks, she began experiencing a metallic taste and a burning sensation in her mouth that ceased when she stopped taking the medication. When she began taking eprosartan on a second occasion, her dysgeusia returned. In a second case, a fifty-nine-year-old man was prescribed amlodipine in order to treat his hypertension. After eight years of taking the drug, he developed a loss of taste sensation and numbness in his tongue. When he ran out of his medication, he decided not to obtain a refill and stopped taking amlodipine. Following this self-removal, he reported experiencing a return of his taste sensation. Once he refilled his prescription and began taking amlodipine a second time, his taste disturbance recurred. These two cases suggest an association between these drugs and taste disorders, a link supported by the "de-challenge" and "re-challenge" that took place in both instances. It appears that drug-induced dysgeusia can be alleviated by reducing the drug's dose or by substituting a second drug from the same class.
Alpha lipoic acid
Alpha lipoic acid (ALA) is an antioxidant that is made naturally by human cells. It can also be administered in capsules or found in foods such as red meat, organ meats, and yeast. Like other antioxidants, it functions by ridding the body of harmful free radicals that can damage tissues and organs. It plays an important role in the Krebs cycle as a coenzyme, leading to the production of antioxidants, intracellular glutathione, and nerve-growth factors. Animal research has also shown that ALA can improve nerve conduction velocity. Because flavors are perceived through differences in electric potential in the specific nerves innervating the tongue, idiopathic dysgeusia may be a form of neuropathy. ALA has proven to be an effective treatment for burning mouth syndrome, spurring studies of its potential to treat dysgeusia. In a study of forty-four patients diagnosed with the disorder, one half were treated with the drug for two months, while the other half, the control group, were given a placebo for two months followed by a two-month course of ALA. The results showed that 91% of the group initially treated with ALA reported an improvement in their condition, compared to only 36% of the control group. After the control group was treated with ALA, 72% reported an improvement. This study suggests that ALA may be a potential treatment and supports the need for full double-blind randomized studies.
Managing dysgeusia
In addition to the aforementioned treatments, there are also many management approaches that can alleviate the symptoms of dysgeusia. These include using non-metallic silverware, avoiding metallic- or bitter-tasting foods, increasing the consumption of foods high in protein, flavoring foods with spices and seasonings, serving foods cold in order to reduce any unpleasant taste or odor, frequently brushing one's teeth and utilizing mouthwash, or using sialogogues such as sugar-free gum or sour-tasting drops that stimulate the production of saliva. When taste is impeded, the food experience can also be improved through means other than taste, such as texture, aroma, temperature, and color.
Psychological impacts
People with dysgeusia are also forced to manage the impact that the disorder has on their quality of life. An altered sense of taste has effects on food choice and intake, and can lead to weight loss, malnutrition, impaired immunity, and a decline in health. Patients diagnosed with dysgeusia must use caution when adding sugar and salt to food, and must be sure not to overcompensate for their lack of taste with excess amounts. Since the elderly are often on multiple medications, they are at risk for taste disturbances, increasing the chances of developing depression, loss of appetite, and extreme weight loss. This is cause for evaluation and management of their dysgeusia. In patients undergoing chemotherapy, taste distortions can often be severe, and make compliance with cancer treatment difficult. Other problems that may arise include anorexia, and behavioral changes that can be misinterpreted as psychiatric delusions regarding food. Symptoms including paranoia, amnesia, cerebellar malfunction, and lethargy can also manifest when undergoing histidine treatment.
Future research
Every year, more than 200,000 individuals see their physicians concerning chemosensory problems, and many more taste disturbances are never reported. Due to the large number of persons affected by taste disorders, basic and clinical research are both receiving support at different institutions and chemosensory research centers across the United States. These taste and smell clinics are focusing their research on better understanding the mechanisms involved in gustatory function and taste disorders such as dysgeusia. For example, the National Institute on Deafness and Other Communication Disorders is looking into the mechanisms underlying the key receptors on taste cells, and applying this knowledge to the future of medications and artificial food products. Meanwhile, the Taste and Smell Clinic at the University of Connecticut Health Center is integrating behavioral, neurophysiological, and genetic studies involving stimulus concentrations and intensities, in order to better understand taste function.
| Biology and health sciences | Symptoms and signs | Health |
205264 | https://en.wikipedia.org/wiki/Internal%20medicine | Internal medicine | Internal medicine, also known as general medicine in Commonwealth nations, is a medical specialty for medical doctors focused on the prevention, diagnosis, and treatment of internal diseases in adults. Medical practitioners of internal medicine are referred to as internists, or physicians in Commonwealth nations. Internists possess specialized skills in managing patients with undifferentiated or multi-system disease processes. They provide care to both hospitalized (inpatient) and ambulatory (outpatient) patients and often contribute significantly to teaching and research. Internists are qualified physicians who have undergone postgraduate training in internal medicine, and should not be confused with "interns", a term commonly used for a medical doctor who has obtained a medical degree but does not yet have a license to practice medicine unsupervised.
In the United States and Commonwealth nations, there is often confusion between internal medicine and family medicine, with people mistakenly considering them equivalent.
Internists primarily work in hospitals, as their patients are frequently seriously ill or require extensive medical tests. Internists often have subspecialty interests in diseases affecting particular organs or organ systems. The certification process and available subspecialties may vary across different countries.
Additionally, internal medicine is recognized as a specialty within clinical pharmacy and veterinary medicine.
Etymology and historical development
The term internal medicine in English has its etymology in the 19th-century German term Innere Medizin. Originally, internal medicine focused on determining the underlying "internal" or pathological causes of symptoms and syndromes through a combination of medical tests and bedside clinical examination of patients. This approach differed from that of earlier generations of physicians, such as the 17th-century English physician Thomas Sydenham, known as the father of English medicine or "the English Hippocrates." Sydenham developed the field of nosology (the study of diseases) through a clinical approach that involved diagnosing and managing diseases based on careful bedside observation of the natural history of disease and their treatment. Rather than dissecting cadavers and scrutinizing the internal workings of the body, Sydenham inferred the internal mechanisms and causes of symptoms from observation at the bedside.
In the 17th century, there was a shift towards anatomical pathology and laboratory studies, and Giovanni Battista Morgagni, an Italian anatomist of the 18th century, is considered the father of anatomical pathology. Laboratory investigations gained increasing significance, with contributions from physicians like German physician and bacteriologist Robert Koch in the 19th century. During this time, internal medicine emerged as a field that integrated the clinical approach with the use of investigations. Many American physicians of the early 20th century studied medicine in Germany and introduced this medical field to the United States, adopting the name "internal medicine" in imitation of the existing German term.
Internal medicine has historical roots in ancient India and ancient China. The earliest texts about internal medicine can be found in the Ayurvedic anthologies of Charaka.
Role of internal medicine specialists
Internal medicine specialists, also referred to as general internal medicine specialists or general medicine physicians in Commonwealth countries, are specialized physicians trained to manage complex or multisystem disease conditions that single-organ specialists may not be equipped to handle. They are often called upon to address undifferentiated presentations that do not fit neatly within the scope of a single-organ specialty, such as shortness of breath, fatigue, weight loss, chest pain, confusion, or alterations in conscious state. They may manage serious acute illnesses that affect multiple organ systems concurrently within a single patient, as well as the management of multiple chronic diseases in a single patient.
While many internal medicine physicians choose to subspecialize in specific organ systems, general internal medicine specialists do not necessarily possess any lesser expertise than single-organ specialists. Rather, they are specifically trained to care for patients with multiple simultaneous problems or complex comorbidities.
Due to the complexity involved in explaining the treatment of diseases that are not localized to a single organ, there has been some confusion surrounding the meaning of internal medicine and the role of an "internist". Although internists may serve as primary care physicians, they are not synonymous with "family physicians", "family practitioners", "general practitioners", or "GPs". The training of internists is solely focused on adults and does not typically include surgery, obstetrics, or pediatrics. According to the American College of Physicians, internists are defined as "physicians who specialize in the prevention, detection, and treatment of illnesses in adults." While there may be some overlap in the patient population served by both internal medicine and family medicine physicians, internists primarily focus on adult care with an emphasis on diagnosis, whereas family medicine incorporates a holistic approach to care for the entire family unit. Internists also receive substantial training in various recognized subspecialties within the field and are experienced in both inpatient and outpatient settings. On the other hand, family medicine physicians receive education covering a wide range of conditions and typically train in an outpatient setting with less exposure to hospital settings. The historical roots of internal medicine can be traced back to the incorporation of scientific principles into medical practice in the 1800s, while family medicine emerged as part of the primary care movement in the 1960s.
Education and training
The training and career pathways for internists vary considerably across different countries.
Many programs require previous undergraduate education prior to medical school admission. This "pre-medical" education is typically four or five years in length. Graduate medical education programs vary in length by country. Medical education programs are tertiary-level courses, undertaken at a medical school attached to a university. In the US, medical school consists of four years. Hence, gaining a basic medical education may typically take eight years, depending on jurisdiction and university.
Following completion of entry-level training, newly graduated medical practitioners are often required to undertake a period of supervised practice before their licensure, or registration, is granted, typically one or two years. This period may be referred to as "internship", "conditional registration", or "foundation programme". Then, doctors may follow specialty training in internal medicine if they wish, typically being selected to training programs through competition. In North America, this period of postgraduate training is referred to as residency training, followed by an optional fellowship if the internist decides to train in a subspecialty.
In most countries, residency training for internal medicine lasts three years and centers on secondary and tertiary levels of health care, as opposed to primary health care. In Commonwealth countries, trainees are often called senior house officers for four years after the completion of their medical degree (foundation and core years). After this period, they are able to advance to registrar grade when they undergo compulsory subspecialty training (including acute internal medicine or a dual subspecialty that includes internal medicine). Entry to this latter stage of training is achieved through competition rather than by yearly progression, unlike the first years of postgraduate training.
Certification
In the US, three organizations are responsible for the certification of trained internists (i.e., doctors who have completed an accredited residency training program) in terms of their knowledge, skills, and attitudes that are essential for patient care: the American Board of Internal Medicine, the American Osteopathic Board of Internal Medicine and the Board of Certification in Internal Medicine. In the UK, the General Medical Council oversees licensing and certification of internal medicine physicians. The Royal Australasian College of Physicians confers fellowship to internists (and sub-specialists) in Australia. The Medical Council of Canada oversees licensing of internists in Canada.
Subspecialties
United States of America
In the US, two organizations are responsible for certification of subspecialists within the field: the American Board of Internal Medicine and the American Osteopathic Board of Internal Medicine. Physicians (not only internists) who successfully pass board exams receive "board certified" status.
American Board of Internal Medicine
The following are the subspecialties recognized by the American Board of Internal Medicine.
Adolescent medicine
Adult congenital heart disease
Advanced heart failure and transplant cardiology
Allergy and immunology, concerned with the diagnosis, treatment and management of allergies, asthma and disorders of the immune system.
Cardiovascular disease, dealing with disorders of the heart and blood vessels
Clinical cardiac electrophysiology
Critical care medicine, dealing with life-threatening conditions requiring intensive monitoring and treatment
Endocrinology, diabetes & metabolism, dealing with disorders of the endocrine system and its specific secretions called hormones
Gastroenterology, concerned with the field of digestive diseases
Geriatric medicine
Hematology, concerned with blood, the blood-forming organs, and their disorders
Hospice & palliative medicine
Infectious disease, concerned with disease caused by a biological agent such as by a virus, bacterium or parasite
Interventional cardiology
Medical oncology, dealing with the chemotherapeutic (chemical) and/or immunotherapeutic (immunological) treatment of cancer
Nephrology, dealing with the study of the function and diseases of the kidney
Neurocritical care
Pulmonary disease, dealing with diseases of the lungs and the respiratory tract
Rheumatology, devoted to the diagnosis and therapy of rheumatic diseases
Sleep medicine
Sports medicine
Transplant hepatology
American College of Osteopathic Internists
The American College of Osteopathic Internists recognizes the following subspecialties:
Allergy/immunology
Cardiology
Cardiac electrophysiology
Critical care medicine
Endocrinology
Gastroenterology
Geriatrics
Hematology/oncology
Interventional cardiology
Infectious diseases
Nephrology
Oncology
Palliative care medicine
Pulmonary Diseases
Pulmonology
Rheumatology
Sleep medicine
United Kingdom
In the United Kingdom, the three medical Royal Colleges (the Royal College of Physicians of London, the Royal College of Physicians of Edinburgh and the Royal College of Physicians and Surgeons of Glasgow) are responsible for setting curricula and training programmes through the Joint Royal Colleges Postgraduate Training Board (JRCPTB), although the process is monitored and accredited by the independent General Medical Council (which also maintains the specialist register).
Doctors who have completed medical school spend two years in foundation training completing a basic postgraduate curriculum. After two years of Core Medical Training (CT1/CT2), or three years of Internal Medicine Training (IMT1/IMT2/IMT3) as of 2019, and after attaining the Membership of the Royal College of Physicians, physicians commit to one of the medical specialties:
Acute internal medicine (with possible subspecialty in stroke medicine)
Allergy
Audio vestibular medicine
Aviation and space medicine
Cardiology (with possible subspecialty in stroke medicine)
Clinical genetics
Clinical neurophysiology
Clinical oncology
Clinical pharmacology and therapeutics (with possible subspecialty in stroke medicine)
Dermatology
Endocrinology and diabetes mellitus
Gastroenterology (with possible subspecialty in hepatology)
General (internal) medicine (with possible subspecialty in metabolic medicine or stroke medicine)
Genito-urinary medicine
Geriatric medicine (with possible subspecialty in stroke medicine)
Haematology
Immunology
Infectious diseases
Intensive care medicine
Medical microbiology
Medical oncology (clinical or radiation oncology falls under the Royal College of Radiologists, although entry is through CMT and MRCP is required)
Medical ophthalmology
Medical virology
Neurology (with possible subspecialty in stroke medicine)
Nuclear medicine
Occupational medicine
Paediatric cardiology (the only pediatric subspecialty not under the Royal College of Paediatrics and Child Health)
Palliative medicine
Rehabilitation medicine (with possible subspecialty in stroke medicine)
Renal medicine
Respiratory medicine
Rheumatology
Sport and exercise medicine
Tropical medicine
Many training programmes provide dual accreditation with general (internal) medicine and are involved in the general care of hospitalised patients. These are acute medicine, cardiology, clinical pharmacology and therapeutics, endocrinology and diabetes mellitus, gastroenterology, infectious diseases, renal medicine, respiratory medicine, and often rheumatology. The role of general medicine, after a period of decline, was reemphasised by the Royal College of Physicians of London report from the Future Hospital Commission (2013).
European Union
The European Board of Internal Medicine (EBIM) was formed as a collaborative effort between the European Union of Medical Specialists (UEMS) - Internal Medicine Section and the European Federation of Internal Medicine (EFIM) to provide guidance on standardizing training and practice of internal medicine throughout Europe. The EBIM published training requirements in 2016 for postgraduate education in internal medicine, and efforts to create a European Certificate of Internal Medicine (ECIM) to facilitate the free movement of medical professionals within the EU are currently underway.
The internal medicine specialist is recognized in every country in the European Union and typically requires five years of multi-disciplinary post-graduate education. The specialty of internal medicine is seen as providing care in a wide variety of conditions involving every organ system and is distinguished from family medicine in that the latter provides a broader model of care that includes both surgery and obstetrics in both adults and children.
Australia
Accreditation for medical education and training programs in Australia is provided by the Australian Medical Council (AMC) and the Medical Council of New Zealand (MCNZ). The Medical Board of Australia (MBA) is the registering body for Australian doctors and provides information to the Australian Health Practitioner Regulation Agency (AHPRA). Medical graduates apply for provisional registration in order to complete intern training. Those completing an accredited internship program are then eligible to apply for general registration. Once the candidate completes the required basic and advanced post-graduate training and a written and clinical examination, the Royal Australasian College of Physicians confers the designation Fellow of the Royal Australasian College of Physicians (FRACP). Basic training consists of three years of full-time equivalent (FTE) training (including the intern year) and advanced training consists of 3–4 years, depending on specialty. The fields of specialty practice are approved by the Council of Australian Governments (COAG) and managed by the MBA. The following is a list of currently recognized specialist physicians.
Cardiology
Clinical genetics
Clinical pharmacology
Endocrinology
Gastroenterology and hepatology
General medicine
Geriatric medicine
Haematology
Immunology and allergy
Infectious diseases
Medical oncology
Nephrology
Neurology
Nuclear medicine
Respiratory and sleep medicine
Rheumatology
Canada
After completing medical school, internists in Canada require an additional four years of training. Internists desiring to subspecialize are required to complete two additional years of training that may begin after the third year of internist training. The Royal College of Physicians and Surgeons of Canada (RCPSC) is a national non-profit agency that oversees and accredits medical education in Canada. A full medical license in internal medicine in Canada requires a medical degree, a license from the Medical Council of Canada, completion of the required post-graduate education, and certification from the RCPSC. Any additional requirements set by the separate medical regulatory authorities in each province or territory must also be met. Internists may practice in Canada as generalists in internal medicine or serve in one of seventeen subspecialty areas. Internists may work in many settings including outpatient clinics, inpatient wards, critical care units, and emergency departments. The currently recognized subspecialties include the following:
Critical care medicine
Cardiology
Infectious diseases
Neurology
Respiratory medicine
Rheumatology
Endocrinology and metabolism
Gastroenterology
General internal medicine
Geriatrics
Hematology
Medical oncology
Clinical allergy and immunology
Dermatology
Nephrology
Medical diagnosis and treatment
Medicine is mainly focused on the art of diagnosis and treatment with medication. The diagnostic process involves gathering data, generating one or more diagnostic hypotheses, and iteratively testing these potential diagnoses against dynamic disease profiles to determine the best course of action for the patient.
Gathering data
Data may be gathered directly from the patient in medical history-taking and physical examination. Previous medical records, including laboratory findings, imaging, and clinical notes from other physicians, are also an important source of information; however, it is vital to talk to and examine the patient to find out what the patient is currently experiencing in order to make an accurate diagnosis.
Internists often can perform and interpret diagnostic tests like EKGs and ultrasound imaging (Point-of-care Ultrasound – PoCUS).
Internists who pursue sub-specialties have additional diagnostic tools, including those listed below.
Cardiology: angioplasty, cardioversion, cardiac ablation, intra-aortic balloon pump
Critical care medicine: mechanical ventilation
Gastroenterology: endoscopy and ERCP
Nephrology: dialysis
Pulmonology: bronchoscopy
Other tests are ordered, and patients are also referred to specialists for further evaluation. The effectiveness and efficiency of the specialist referral process is an area of potential improvement.
Generating diagnostic hypotheses
Determining which pieces of information are most important to the next phase of the diagnostic process is of vital importance. It is during this stage that clinical bias like anchoring or premature closure may be introduced. Once key findings are determined, they are compared to profiles of possible diseases. These profiles include findings that are typically associated with the disease and are based on the likelihood that someone with the disease has a particular symptom. A list of potential diagnoses is termed the "differential diagnosis" for the patient and is typically ordered from most likely to least likely, with special attention given to those conditions that have dire consequences for the patient if they were missed. Epidemiology and endemic conditions are also considered in creating and evaluating the list of diagnoses.
The list is dynamic and changes as the physician obtains additional information that makes a condition more ("rule-in") or less ("rule-out") likely based on the disease profile. The list is used to determine what information will be acquired next, including which diagnostic test or imaging modality to order. The selection of tests is also based on the physician's knowledge of the specificity and sensitivity of a particular test.
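To make the role of sensitivity and specificity concrete, a minimal Python sketch of Bayesian updating is shown below. The function name and the numbers used are illustrative assumptions, not a description of any particular test, but the calculation is the standard post-test probability computation that underlies "rule-in" and "rule-out" reasoning.

```python
def post_test_probability(pre_test, sensitivity, specificity, positive=True):
    """Update a disease probability with a test result using Bayes' theorem.

    pre_test     -- prior probability of disease (0..1)
    sensitivity  -- P(test positive | disease)
    specificity  -- P(test negative | no disease)
    """
    if positive:
        true_pos = sensitivity * pre_test
        false_pos = (1 - specificity) * (1 - pre_test)
        return true_pos / (true_pos + false_pos)
    false_neg = (1 - sensitivity) * pre_test
    true_neg = specificity * (1 - pre_test)
    return false_neg / (false_neg + true_neg)

# Illustrative values: 30% pre-test probability, a test with 90%
# sensitivity and 85% specificity.
print(post_test_probability(0.30, 0.90, 0.85, positive=True))   # ~0.72, rule-in
print(post_test_probability(0.30, 0.90, 0.85, positive=False))  # ~0.048, rule-out
```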
An important part of this process is knowledge of the various ways that a disease can present in a patient. This knowledge is gathered and shared to add to the database of disease profiles used by physicians. This is especially important in rare diseases.
Communication
Communication is a vital part of the diagnostic process. The Internist uses both synchronous and asynchronous communication with other members of the medical care team, including other internists, radiologists, specialists, and laboratory technicians. Tools to evaluate teamwork exist and have been employed in multiple settings.
Communication to the patient is also important to ensure there is informed consent and shared decision-making throughout the diagnostic process.
Treatment
Treatment modalities generally include both pharmacological and non-pharmacological, depending on the primary diagnosis. Additional treatment options include referral to specialist care including physical therapy and rehabilitation. Treatment recommendations differ in the acute inpatient and outpatient settings. Continuity of care and long-term follow-up is crucial in successful patient outcomes.
Prevention and other services
Aside from diagnosing and treating acute conditions, the Internist may also assess disease risk and recommend preventive screening and intervention. Some of the tools available to the Internist include genetic evaluation.
Internists also routinely provide pre-operative medical evaluations including individualized assessment and communication of operative risk.
Training the next generation of internists is an important part of the profession. As mentioned above, post-graduate medical education is provided by licensed physicians as part of accredited education programs that are usually affiliated with teaching hospitals. Studies show that there are no differences in patient outcomes in teaching versus non-teaching facilities. Medical research is an important part of most post-graduate education programs, and many licensed physicians continue to be involved in research activities after completing post-graduate training.
Ethics
Inherent in any medical profession are legal and ethical considerations. Specific laws vary by jurisdiction and may or may not be congruent with ethical considerations. Thus, a strong ethical foundation is paramount to any medical profession. Medical ethics guidelines in the Western world typically follow four principles including beneficence, non-maleficence, patient autonomy, and justice. These principles underlie the patient-physician relationship and the obligation to put the welfare and interests of the patient above their own.
Patient-physician relationship
The relationship is built upon the physician's obligations of competency, respect for the patient, and appropriate referrals, while the patient participates in decision-making and provides or withdraws consent for any treatment plan. Good communication is key to a strong relationship but has ethical considerations as well, including proper use of electronic communication and clear documentation.
Treatment and telemedicine
Providing treatment including prescribing medications based on remote information gathering without a proper established relationship is not accepted as good practice with few exceptions. These exceptions include cross-coverage within a practice and certain public health urgent or emergent issues.
The ethics of telemedicine including questions on its impact to diagnosis, physician-patient relationship, and continuity of care have been raised. However, with appropriate use and specific guidelines, risks may be minimized and the benefits including increased access to care may be realized.
Financial issues and conflicts of interest
Ethical considerations in financial matters include accurate billing practices and clearly defined financial relationships. Physicians have both a professional duty and an obligation under the justice principle to ensure that patients are provided the same care regardless of status or ability to pay. However, informal copayment forgiveness may have legal ramifications, and providing professional courtesy may negatively impact care.
Physicians must disclose all possible conflicts of interest including financial relationships, investments, research and referral relationships, and any other instances that may subjugate or give the appearance of subjugating patient care to self-interest.
Other topics
Other foundational ethical considerations include privacy, confidentiality, accurate and complete medical records, electronic health records, disclosure, and informed decision-making and consent.
Electronic health records have been shown to improve patient care but have risks including data breaches and inappropriate and/or unauthorized disclosure of protected health information.
Withholding information from a patient is typically seen as unethical and in violation of a patient's right to make informed decisions. However, in situations where a patient has requested not to be informed or to have the information provided to a second party or in an emergency situation in which the patient does not have decision-making capacity, withholding information may be appropriate.
| Biology and health sciences | Fields of medicine | Health |
205326 | https://en.wikipedia.org/wiki/Kookaburra | Kookaburra | Kookaburras (pronounced ) are terrestrial tree kingfishers of the genus Dacelo native to Australia and New Guinea, which grow to between in length and weigh around . The name is a loanword from Wiradjuri guuguubarra, onomatopoeic of its call. The loud, distinctive call of the laughing kookaburra is widely used as a stock sound effect in situations that involve an Australian bush setting or tropical jungle, especially in older movies.
They are found in habitats ranging from humid forest to arid savannah, as well as in suburban areas with tall trees or near running water. Though they belong to the larger group known as "kingfishers", kookaburras are not closely associated with water.
Taxonomy
The genus Dacelo was introduced by English zoologist William Elford Leach in 1815. The type species is the laughing kookaburra. The name Dacelo is an anagram of alcedo, the Latin word for a kingfisher. A molecular study published in 2017 found that the genus Dacelo, as then defined, was paraphyletic. The shovel-billed kookaburra was previously classified in the monotypic genus Clytoceyx, but was reclassified into Dacelo based on phylogenetic evidence.
Classification and species
Five species of kookaburra can be found in Australia, New Guinea, and the Aru Islands:
The laughing and blue-winged species are direct competitors in the area where their ranges now overlap. This suggests that these two species evolved in isolation, possibly during a period when Australia and New Guinea were more distant.
The Kamilaroi/Gamilaraay and Wiradjuri people named this bird “guuguubarra”. It is native to the eastern mainland part of Australia.
Kookaburras are sexually dimorphic. This is noticeable in the blue-winged and the rufous-bellied, where males have blue tails and females have reddish-brown tails.
Behaviour
Kookaburras are almost exclusively carnivorous, eating mice, snakes, insects, small reptiles, and the young of other birds. Unlike many other kingfishers, they rarely eat fish, although they have been known to take goldfish from garden ponds. In zoos, they are usually fed food suitable for birds of prey.
Although most kookaburras will accept handouts and take meat from barbecues, feeding them ground beef or pet food is not advised, because these foods do not include enough calcium and roughage.
Hunting
Kookaburras are usually seen waiting for prey on powerlines or low tree branches. When a kookaburra sees prey, it dives down and grabs it with its strong beak. Small prey is eaten whole, but larger prey is bashed against a tree or the ground to make it softer and easier to eat.
They are territorial, except for the rufous-bellied, which often live with their young from the previous season. They often sing as a chorus to mark their territory.
Diet
A kookaburra's diet includes lizards, snakes, frogs, rodents, beetles, worms, and other insects, as well as small mammals.
Habitat
They live in sclerophyll woodland and open forests, in almost any area with trees large enough to hold their nests and open patches to serve as hunting areas. Kookaburra populations are declining because of predators, lack of prey, and environmental pressures.
Conservation
All kookaburra species are listed as least concern. Australian law protects native birds, including kookaburras.
In popular culture
The distinctive call of the laughing kookaburra, which resembles human laughter, is widely used in filmmaking and television productions, as well as certain Disney theme-park attractions, regardless of African, Asian, or South American jungle settings. Kookaburras have also appeared in several video games, including Lineage II, Battletoads, and World of Warcraft. The children's television series Splatalot! includes an Australian character called "Kookaburra" (or "Kook"), whose costume includes decorative wings that recall the bird's plumage, and who is noted for his distinctive, high-pitched laugh. Olly the Kookaburra was one of the three mascots chosen for the 2000 Summer Olympics in Sydney; the other mascots were Millie the Echidna and Syd the Platypus. The call of a kookaburra nicknamed "Jacko" was for many years used as the morning opening theme by ABC radio stations, and for Radio Australia's overseas broadcasts.
Book
The opening theme from ABC was the basis for a children's book by Brooke Nicholls titled Jacko, the Broadcasting Kookaburra — His Life and Adventures.
In William Arden's 1969 book, The Mystery of the Laughing Shadow (one of the Three Investigators series for young readers), the laughing kookaburra is integral to the plot.
Film
The call is heard in some of the early Johnny Weissmuller films; its first occurrence was in Tarzan and the Green Goddess (1938).
The call is heard in The Wizard of Oz (1939), The Treasure of the Sierra Madre (1948), Swiss Family Robinson (1960), Cape Fear (1962), The Lost World: Jurassic Park, and other films.
The dolphin call in the television series Flipper (1964–67) is a modified kookaburra call.
Music
"Kookaburra [sits in the old gum tree]", a well-known children's song, was written in 1932 by Marion Sinclair.
Postage stamps
A six-pence stamp was issued in 1914.
A three-pence commemorative Australian stamp was issued for the 1928 Melbourne International Philatelic Exhibition.
A six-pence stamp was issued in 1932.
A 38¢ Australian stamp issued in 1990 features a pair of kookaburras.
An international $1.70 Australian stamp featuring an illustrated kookaburra was released in 2013.
A $1.10 laughing kookaburra stamp was issued in 2020.
Money
An Australian coin known as the Silver Kookaburra has been minted annually since 1990.
The kookaburra is featured multiple times on the Australian twenty-dollar note.
Usage across sport
The Australian 12-m yacht Kookaburra III lost the America's Cup in 1987.
The Australia men's national field hockey team is nicknamed after the kookaburra. They were world champions in field hockey in 1986, 2010 and 2014.
Australian sports equipment company Kookaburra Sport is named after the bird.
| Biology and health sciences | Coraciiformes | Animals |
205372 | https://en.wikipedia.org/wiki/Hamerkop | Hamerkop | The hamerkop (Scopus umbretta) is a medium-sized wading bird. It is the only living species in the genus Scopus and the family Scopidae. The species and family was long thought to sit with the Ciconiiformes but is now placed with the Pelecaniformes, and its closest relatives are thought to be the pelicans and the shoebill. The shape of its head with a long bill and crest at the back is reminiscent of a hammer, which has given this species its name after the Afrikaans word for hammerhead. It is a medium-sized waterbird with brown plumage. It is found in Africa, Madagascar and Arabia, living in a wide variety of wetlands, including estuaries, lakesides, fish ponds, riverbanks, and rocky coasts. The hamerkop is a sedentary bird that often shows local movements.
The hamerkop takes a wide range of prey, mostly fish and amphibians, but shrimps, insects and rodents are taken too. Prey is usually hunted in shallow water, either by sight or touch, but the species is adaptable and will take any prey it can. The species is renowned for its enormous nests, several of which are built during the breeding season. Unusually for a wading bird the nest has an internal nesting chamber where the eggs are laid. Both parents incubate the eggs, and raise the chicks.
The species is not globally threatened and is locally abundant in Africa and Madagascar. The International Union for Conservation of Nature (IUCN) has assessed it as being of least concern.
Taxonomy and systematics
The hamerkop was first described by the French zoologist Mathurin Jacques Brisson in 1760 in his landmark Ornithologia which was published two years after the tenth edition of Carl Linnaeus' Systema Naturae. The species was subsequently described and illustrated by French polymath Comte de Buffon. When the German naturalist Johann Friedrich Gmelin revised and expanded Carl Linnaeus's Systema Naturae in 1788 he included the hamerkop and cited the earlier authors. He placed the species in the genus Scopus that had been introduced by Brisson and coined the binomial name Scopus umbretta.
Brisson's names for bird genera were widely adopted by the ornithological community despite the fact that he did not use Linnaeus' binomial system. The International Commission on Zoological Nomenclature ruled in 1911 that Brisson's genera were available under the International Code of Zoological Nomenclature, so Brisson is considered to be the genus authority for the hamerkop. The generic name, Scopus, is derived from the Ancient Greek for shadow. The specific name umbretta is modified from the Latin for umber or dark brown.
The hamerkop is sufficiently distinct to be placed in its own family, although the relationships of this species to other families has been a longstanding mystery. The hamerkop was usually included in the Ciconiiformes, but is now thought to be closer to the Pelecaniformes. Recent studies have found that its closest relatives are the pelicans and shoebill. Although the hamerkop is the only living member of its family, one extinct species is known from the fossil record. Scopus xenopus was described by ornithologist Storrs Olson in 1984 based on two bones found in Pliocene deposits from South Africa. Scopus xenopus was slightly larger than the hamerkop and Olson speculated based on the shape of the tarsus that the species may have been more aquatic.
The hamerkop is also known as the hammerkop, hammerkopf, hammerhead, hammerhead stork, umbrette, umber bird, tufted umber, or anvilhead.
Subspecies
Two subspecies are recognized: the widespread nominate race S. u. umbretta and the smaller West African S. u. minor, described by George Latimer Bates in 1931. Two other subspecies have been proposed. S. u. bannermani of south-west Kenya is usually lumped with the nominate race. Birds in Madagascar have been suggested to be distinct, in which case they would be placed in the subspecies S. u. tenuirostris, described by Austin L. Rand in 1936. It has also been suggested that birds near the Kavango River in Namibia may be distinct, but no formal description has been made.
Description
The hamerkop is a medium-sized waterbird, standing high and weighing , although the subspecies S. u. minor is smaller. Its plumage is a drab brown with purple iridescence on the back; S. u. minor is darker. The tail is faintly barred with darker brown. The sexes are alike and fledglings resemble adults. The bill is long, , and slightly hooked at the end. It resembles the bill of a shoebill, and is quite compressed and thin, particularly at the lower half of the mandible. The bill is brown in young birds, but becomes black by the time a bird fledges.
The neck and legs are proportionately shorter than those of similar looking Pelecaniformes. The bare parts of the legs are black and the legs are feathered only to the upper part of the tibia. The hamerkop has, for unknown reasons, partially webbed feet. The middle toe is comb-like (pectinated) like a heron's. Its tail is short and its wings are big, wide, and round-tipped; it soars well, although it does so less than the shoebill or storks. When it does so, it stretches its neck forward like a stork or ibis, but when it flaps, it coils its neck back something like a heron. Its gait when walking is jerky and rapid, with its head and neck moving back and forth with each step. It may hold its wings out when running for extra stability.
Distribution and habitat
The hamerkop occurs in Africa south of the Sahara, Madagascar, and coastal south-west Arabia. It requires shallow water in which to forage, and is found in all wetland habitats, including rivers, streams, seasonal pools, estuaries, reservoirs, marshes, mangroves, irrigated land such as rice paddies, savannahs, and forests. In Tanzania, it has also recently begun to feed on rocky shores. In Arabia, it is found in rocky wadis with running water and trees. Most are sedentary within their territories, which are held by pairs, but some migrate into suitable habitat during the wet season only. The species is very tolerant of humans and readily feeds and breeds in villages and other human-created habitats.
Behaviour and ecology
The hamerkop is mostly active during the day, often resting at noon during the heat of the day. They can be somewhat crepuscular, being active around dusk, but are not nocturnal as has sometimes been reported.
Social behaviour and calls
The hamerkop is mostly silent when alone, but is fairly vocal when in pairs or in groups. The only call it usually makes when alone is a flight-call, a shrill "nyip" or "kek". In groups, vocalisations include a range of calls including cackles and nasal rattles. One highly social call is the "yip-purr" call. This call is only made in a social context, when at least three birds, but up to 20 are gathered in a flock. Birds start by giving a number of "yip" calls, eventually giving way to purring notes. This call is made with the neck extended and sometimes accompanied by wing flapping, and becomes more vigorous when larger numbers of birds are present.
Another common social behaviour is "false mounting", in which one bird stands on top of another and appears to mount it, but they do not copulate. This behaviour has been noted between both mated pairs and unmated birds, and even between members of the same sex and in reversed mountings, where females mount males. Because of this, the behaviour is thought to be social and not related to the pair bond. Dominant birds may signal to subordinates by opening their bills slightly and erecting their crests, but the species is not very aggressive in general towards others of its species. Birds in groups also engage in social allopreening, with one bird presenting its face or the back of its head to the other to be preened.
Food and feeding
This species normally feeds alone or in pairs, but sometimes also feeds in large flocks. It is a generalist, although amphibians and fish form the larger part of its diet. The diet also includes shrimp, insects, and rodents. The type of food taken seems to vary by location, with clawed frogs and tadpoles being important parts of the diet in East and Southern Africa and small fish being almost the only prey taken in Mali. Because it is willing to take a wide range of food items, including very small prey, it is not resource-limited and only feeds for part of the day.
The usual method of hunting is to walk in shallow water looking for prey. Prey is located differently depending on circumstances; if the water is clear, the bird may hunt by sight, but if the water is very muddy, it probes its open bill into water or mud and shuts it. It may shuffle one foot at a time on the bottom or suddenly open its wings to flush prey out of hiding. Prey caught in mud is shaken before swallowing to clean it, or, if available, taken to clearer water to do so. The species also feeds while in flight: a bird flies slowly low over the water with legs dangling and head looking down, then dips its feet down and hovers momentarily when prey is sighted. The prey is then snatched with the bill and swallowed in flight. This method of hunting can be very successful, with one bird catching prey on 27 of 33 attempts during one 45-minute session. The hamerkop is also opportunistic, and feeds on swarming termites during their nuptial flights, snatching as many as 47 alates (flying termites) in five minutes.
This species has been recorded foraging for insects flushed by grazing cattle and buffalo, in a manner similar to that of cattle egrets, and has been observed fishing off the backs of hippopotamuses. It has also been recorded feeding in association with banded mongooses; when a band of mongooses began hunting frogs in dried mud at the side of a pool of water, a pair of hamerkops attended the feeding group, catching frogs that escaped the mongooses.
Breeding
The strangest aspect of hamerkop behaviour is the huge nest, sometimes more than across, and strong enough to support a man's weight. When possible, it is built in the fork of a tree, often over water, but if necessary, it is built on a bank, a cliff, a human-built wall or dam, or on the ground. A pair starts by making a platform of sticks held together with mud, then builds walls and a domed roof. A mud-plastered entrance wide in the bottom leads through a tunnel up to long to a nesting chamber big enough for the parents and young. Nests have been recorded to take between 10 and 14 weeks to build, and one researcher estimated that they would require around 8,000 sticks or bunches of grass to complete. Nesting material may still be added by the pair after the nest has been completed and eggs have been laid. Much of the nesting material added after completion is not sticks, but an odd collection of random items including bones, hide, and human waste.
Pairs of hamerkop are compulsive nest builders, constructing three to five nests per year whether they are breeding or not. Both members of the pair build the nest, and nest building may have a function in creating or maintaining the pair bond between them. Barn owls and eagle owls may force them out and take over the nests, but when the owls leave, the pair may reuse the nest. Owls may also use abandoned nests, as may snakes, small mammals such as genets, and various birds, and weaver birds, starlings, and pigeons may attach their nests to the outside. A few reports exist of hamerkops nesting close together, including in Uganda, where 639 nests were seen in an area of ; even if each pair had made seven nests, this would mean 80 pairs were nesting in that area. The species is not treated as colonial, as it does not habitually nest close together, but it is not thought to be highly territorial either. Even where pairs have home ranges that are more spread out, those home ranges overlap and their boundaries are poorly defined.
Breeding happens year-round in East Africa; in the rest of its range, it peaks at different times, with a slight bias towards the dry season. Pairs engage in a breeding display, then copulate on the nest or on the ground nearby. The clutch consists of three to seven eggs, which start chalky white but soon become stained. The eggs measure on average, and weigh around , but considerable variation is seen. Egg size varies by season, by the overall size of the clutch, and from bird to bird. Both sexes incubate the eggs, but the female seems to do most of the work. Incubation takes around 30 days from the first egg being laid to hatching; eggs are laid at intervals of one to three days and hatch asynchronously.
Both parents feed the young, often leaving them alone for long periods. This habit, which is unusual for wading birds, may be made possible by the thick nest walls. The young hatch covered with grey down. By 17 days after hatching, their head and crest plumage is developed, and in a month, their body plumage. They first leave the nest around 44 to 50 days after hatching, but continue to use the nest for roosting at night until they are two months old.
Relationship with humans
Many legends exist about the hamerkop. In some regions, people state that other birds help it build its nest. The ǀXam informants of Wilhelm Bleek said that when a hamerkop flew and called over their camp, they knew that someone close to them had died.
It is known in some cultures as the lightning bird, and the Kalahari Bushmen believe or believed that being hit by lightning resulted from trying to rob a hamerkop's nest. They also believe that the inimical god Khauna would not like anyone to kill a hamerkop. According to an old Malagasy belief, anyone who destroys its nest will get leprosy, and a Malagasy poem calls it an "evil bird". Such beliefs have given the bird some protection. A South African name, Njaka, meaning "rain doctor", is derived from its habit of calling loudly before rain.
Scopus, a database of abstracts and citations for scholarly journal articles, received its name in honour of this bird, as did the journal of the East African Natural History Society, Scopus.
| Biology and health sciences | Pelecanimorphae | Animals |
205406 | https://en.wikipedia.org/wiki/Microgram | Microgram | In the metric system, a microgram or microgramme is a unit of mass equal to one millionth () of a gram. The unit symbol is μg according to the International System of Units (SI); the recommended symbol in the United States and United Kingdom when communicating medical information is mcg. In μg, the prefix symbol for micro- is the Greek letter μ (mu).
Abbreviation and symbol confusion
When the Greek lowercase "μ" (mu) is typographically unavailable, it is occasionally – although not properly – replaced by the Latin lowercase "u".
The United States–based Institute for Safe Medication Practices (ISMP) and the U.S. Food and Drug Administration (FDA) recommend that the symbol μg not be used when communicating medical information, due to the risk that the prefix μ (micro-) might be misread as the prefix m (milli-), resulting in a thousandfold overdose. The ISMP recommends the non-SI symbol mcg instead. However, mcg is also the symbol for the millicentigram, an obsolete unit of the centimetre–gram–second system equal to 10 μg.
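The prefix arithmetic behind this collision can be made explicit. The following is a short worked derivation from the SI prefix definitions, offered as a reader's check rather than part of the cited recommendations:

$$1~\text{mcg (millicentigram)} = 10^{-3} \times 10^{-2}~\text{g} = 10^{-5}~\text{g} = 10~\mu\text{g}$$

By contrast, misreading μg ($10^{-6}$ g) as mg ($10^{-3}$ g) scales a dose by a factor of $10^{-3}/10^{-6} = 1000$, the thousandfold overdose mentioned above.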
Gamma (symbol: γ) is a deprecated non-SI unit of mass equal to 1 μg.
A fullwidth version of the "microgram" symbol is encoded by Unicode at code point for use in CJK contexts. In other contexts, a sequence of the Greek letter mu (U+03BC) and Latin letter g (U+0067) should be used.
| Physical sciences | Mass and weight | Basics and measurement |
205592 | https://en.wikipedia.org/wiki/Highway%20engineering | Highway engineering | Highway engineering (also known as roadway engineering and street engineering) is a professional engineering discipline branching from the civil engineering subdiscipline of transportation engineering that involves the planning, design, construction, operation, and maintenance of roads, highways, streets, bridges, and tunnels to ensure safe and effective transportation of people and goods. Highway engineering became prominent towards the latter half of the 20th century after World War II. Standards of highway engineering are continuously being improved. Highway engineers must take into account future traffic flows, design of highway intersections/interchanges, geometric alignment and design, highway pavement materials and design, structural design of pavement thickness, and pavement maintenance.
History
The beginning of road construction can be dated to the time of the Romans. As technology advanced from carriages pulled by two horses to vehicles with the power of a hundred horses, road development had to follow suit. The construction of modern highways did not begin until the late 19th and early 20th centuries.
The first research dedicated to highway engineering was initiated in the United Kingdom with the introduction of the Transport Research Laboratory (TRL) in 1930. In the US, highway engineering became an important discipline with the passing of the Federal-Aid Highway Act of 1944, which aimed to connect 90% of cities with populations of 50,000 or more. As vehicles grew larger and placed ever greater stress on roads, improvements to pavements were needed. With existing technology out of date, the construction of Great Britain's first motorway (the Preston bypass) in 1958 played a major role in the development of new pavement technology.
Planning and development
Highway planning involves the estimation of current and future traffic volumes on a road network, and is a prerequisite for highway development. Highway engineers strive to predict and analyze all possible civil impacts of highway systems. Some considerations are the adverse effects on the environment, such as noise pollution, air pollution, water pollution, and other ecological impacts.
Financing
Developed countries are constantly faced with the high maintenance costs of aging highways. The growth of the motor vehicle industry and accompanying economic growth has generated a demand for safer, better-performing, less congested highways. The growth of commerce, educational institutions, housing, and defense has largely drawn from government budgets in the past, making the financing of public highways a challenge.
The multipurpose characteristics of highways, economic environment, and the advances in highway pricing technology are constantly changing. Therefore, the approaches to highway financing, management, and maintenance are constantly changing as well.
Environmental impact assessment
The economic growth of a community is dependent upon highway development to enhance mobility. However, improperly planned, designed, constructed, or maintained highways can disrupt the social and economic characteristics of any size of community. Common adverse impacts of highway development include damage to habitat and biodiversity, creation of air and water pollution, generation of noise and vibration, damage to the natural landscape, and the destruction of a community's social and cultural structure. Highway infrastructure must be constructed and maintained to high qualities and standards.
There are three key steps for integrating environmental considerations into the planning, scheduling, construction, and maintenance of highways. This process is known as an Environmental Impact Assessment, or EIA, as it systematically deals with the following elements:
Identification of the full range of possible impacts on the natural and socio-economic environment
Evaluation and quantification of these impacts
Formulation of measures to avoid, mitigate, and compensate for the anticipated impacts.
Highway safety
Highway systems exact a high price in human injury and death: nearly 50 million people are injured in traffic accidents every year, in addition to some 1.2 million deaths. Road traffic injury is the single leading cause of unintentional death in the first five decades of human life.
Management of safety is a systematic process that strives to reduce the occurrence and severity of traffic accidents. The man/machine interaction with road traffic systems is unstable and poses a challenge to highway safety management. The key for increasing the safety of highway systems is to design, build, and maintain them to be far more tolerant of the average range of this man/machine interaction with highways. Technological advancements in highway engineering have improved the design, construction, and maintenance methods used over the years. These advancements have allowed for newer highway safety innovations.
Safety measures can be evaluated in every phase of highway planning, design, construction, maintenance, and operation; by ensuring that all such situations and opportunities are identified, considered, and implemented as appropriate, the safety of highway systems is increased.
Design
The most appropriate location, alignment, and shape of a highway are selected during the design stage. Highway design involves the consideration of three major factors (human, vehicular, and roadway) and how these factors interact to provide a safe highway. Human factors include reaction time for braking and steering, visual acuity for traffic signs and signals, and car-following behaviour. Vehicle considerations include vehicle size and dynamics that are essential for determining lane width and maximum slopes, and for the selection of design vehicles. Highway engineers design road geometry to ensure stability of vehicles when negotiating curves and grades and to provide adequate sight distances for undertaking passing maneuvers along curves on two-lane, two-way roads.
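As an illustration of how reaction time and braking interact in sight-distance design, the following is a standard AASHTO-style stopping-sight-distance relation; the numerical values are illustrative assumptions, not figures from this article:

$$d = 0.278\,V\,t + \frac{0.039\,V^{2}}{a}$$

where $d$ is the stopping sight distance in metres, $V$ the design speed in km/h, $t$ the brake reaction time in seconds, and $a$ the deceleration rate in m/s². For example, with $V = 100$ km/h, $t = 2.5$ s, and $a = 3.4$ m/s², $d \approx 69.5 + 114.7 \approx 184$ m.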
Geometric design
Highway and transportation engineers must meet many safety, service, and performance standards when designing highways for certain site topography. Highway geometric design primarily refers to the visible elements of the highways. Highway engineers who design the geometry of highways must also consider environmental and social effects of the design on the surrounding infrastructure.
There are certain considerations that must be properly addressed in the design process to successfully fit a highway to a site's topography and maintain its safety. Some of these design considerations are listed below (a worked curve-radius example follows the list):
Design speed
Design traffic volume
Number of lanes
Level of service (LOS)
Sight distance
Alignment, super-elevation, and grades
Cross section
Lane width
Structure gauge (horizontal and vertical clearance)
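As the worked example promised above, several of these considerations combine in the minimum radius of a horizontal curve, a standard geometric-design relation; the numbers below are illustrative assumptions:

$$R_{\min} = \frac{V^{2}}{127\,(e + f)}$$

where $V$ is the design speed in km/h, $e$ the superelevation rate, and $f$ the side-friction factor. For $V = 100$ km/h, $e = 0.06$, and $f = 0.12$, $R_{\min} \approx 10000 / (127 \times 0.18) \approx 437$ m.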
The operational performance of a highway can be judged from drivers' reactions to these design considerations and to their interaction.
Materials
The materials used for roadway construction have progressed with time, dating back to the early days of the Roman Empire. Advancements in the methods by which these materials are characterized and applied to pavement structural design have accompanied this advancement in materials.
There are three major types of pavement surface: pavement quality concrete (PQC), Portland cement concrete (PCC), and hot-mix asphalt (HMA). Underneath this wearing course are material layers that give structural support to the pavement system. These underlying layers may include aggregate base and subbase layers, or treated base and subbase layers, along with the underlying natural or treated subgrade. The treated layers may be cement-treated, asphalt-treated, or lime-treated for additional support.
Flexible pavement design
A flexible (asphalt or tarmac) pavement typically consists of three or four layers. A four-layer flexible pavement has a surface course, base course, and subbase course constructed over a compacted natural soil subgrade. In a three-layer flexible pavement, the subbase layer is omitted and the base course is placed directly on the natural subgrade.
A flexible pavement's surface layer is constructed of hot-mix asphalt (HMA). Unstabilized aggregates are typically used for the base course; however, the base course may also be stabilized with asphalt, foamed bitumen, Portland cement, or another stabilizing agent. The subbase is generally constructed from local aggregate material, while the top of the subgrade is often stabilized with cement or lime.
With flexible pavement, the highest stress occurs at the surface, and the stress decreases with depth. Therefore, the highest quality material must be used at the surface, while lower quality materials can be used at greater depths. The term "flexible" refers to the asphalt's ability to bend and deform slightly, then return to its original position as each traffic load is applied and removed. These small deformations can become permanent, which can lead to rutting in the wheel path over an extended time.
The service life of a flexible pavement is typically designed in the range of 20 to 30 years. The required thickness of each layer of a flexible pavement varies widely depending on the materials used, the magnitude and number of repetitions of traffic loads, environmental conditions, and the desired service life of the pavement. Such factors are taken into consideration during the design process so that the pavement will last for its design life without excessive distress.
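One way to make the trade-off between layer thicknesses and materials concrete is the structural-number approach of the AASHTO 1993 flexible-pavement design guide. The sketch below is minimal and illustrative; the layer coefficients, drainage factors, and thicknesses are typical textbook assumptions, not values from this article.

```python
# Minimal sketch of the AASHTO 1993 "structural number" idea for a
# flexible pavement: each layer contributes strength in proportion to
# its thickness (inches) and an empirical layer coefficient.
# The coefficients and drainage factors below are typical textbook
# assumptions, not values from this article.

def structural_number(layers):
    """layers: list of (thickness_in, layer_coefficient, drainage_factor)."""
    return sum(d * a * m for d, a, m in layers)

design = [
    (4.0, 0.44, 1.0),   # HMA surface course (no drainage factor applied; use 1.0)
    (8.0, 0.14, 1.0),   # unstabilized crushed-aggregate base
    (10.0, 0.11, 1.0),  # granular subbase
]

sn = structural_number(design)
print(f"Provided structural number SN = {sn:.2f}")  # -> 3.98
# A design is adequate when the provided SN meets or exceeds the SN
# required for the projected traffic loads and subgrade support.
```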
Rigid pavement design
Rigid pavements are generally used in constructing airports and major highways, such as those in the interstate highway system. In addition, they commonly serve as heavy-duty industrial floor slabs, port and harbor yard pavements, and heavy-vehicle park or terminal pavements. Like flexible pavements, rigid highway pavements are designed as all-weather, long-lasting structures to serve modern day high-speed traffic. Offering high quality riding surfaces for safe vehicular travel, they function as structural layers to distribute vehicular wheel loads in such a manner that the induced stresses transmitted to the subgrade soil are of acceptable magnitudes.
Portland cement concrete (PCC) is the most common material used in the construction of rigid pavement slabs. Its popularity is due to its availability and economy. Rigid pavements must be designed to endure frequently repeated traffic loadings. The typical design service life of a rigid pavement is between 30 and 40 years, about twice as long as that of a flexible pavement.
One major design consideration of rigid pavements is reducing fatigue failure due to the repeated stresses of traffic. Fatigue failure is common among major roads because a typical highway will experience millions of wheel passes throughout its service life. In addition to design criteria such as traffic loadings, tensile stresses due to thermal energy must also be taken into consideration. As pavement design has progressed, many highway engineers have noted that thermally induced stresses in rigid pavements can be just as intense as those imposed by wheel loadings. Due to the relatively low tensile strength of concrete, thermal stresses are extremely important to the design considerations of rigid pavements.
Rigid pavements are generally constructed in three layers - a prepared subgrade, base or subbase, and a concrete slab. The concrete slab is constructed according to a designed choice of plan dimensions for the slab panels, directly influencing the intensity of thermal stresses occurring within the pavement. In addition to the slab panels, temperature reinforcements must be designed to control cracking behavior in the slab. Joint spacing is determined by the slab panel dimensions.
Three main types of concrete pavement are commonly used: jointed plain concrete pavement (JPCP), jointed reinforced concrete pavement (JRCP), and continuously reinforced concrete pavement (CRCP). JPCPs are constructed with contraction joints, which direct the natural cracking of the pavement; these pavements do not use any reinforcing steel. JRCPs are constructed with both contraction joints and reinforcing steel to control the cracking of the pavement. High temperature and moisture stresses within the pavement create cracking, which the reinforcing steel holds tightly together. At transverse joints, dowel bars are typically placed to assist with transferring the vehicle load across the cracks. CRCPs rely solely on continuous reinforcing steel to hold the pavement's natural transverse cracks together. Prestressed concrete pavements have also been used in the construction of highways, but they are not as common as the other three. Prestressed pavements allow for a thinner slab thickness by partly or wholly neutralizing thermally induced stresses or loadings.
Flexible pavement overlay design
Over the service life of a flexible pavement, accumulated traffic loads may cause excessive rutting or cracking, inadequate ride quality, or inadequate skid resistance. These problems can be postponed by adequate maintenance, but maintenance costs may become excessive, or the pavement may simply lack the structural capacity for the projected traffic loads.
Throughout a highway's life, its level of serviceability is closely monitored and maintained. One common method used to maintain a highway's level of serviceability is to place an overlay on the pavement's surface.
There are three general types of overlay used on flexible pavements: asphalt-concrete overlay, Portland cement concrete overlay, and ultra-thin Portland cement concrete overlay. The concrete layer in a conventional PCC overlay is placed unbonded on top of the flexible surface. The typical thickness of an ultra-thin PCC overlay is 4 inches (10 cm) or less.
There are two main categories of flexible pavement overlay design procedures:
Component analysis design
Deflection-based design
Rigid pavement overlay design
Near the end of a rigid pavement's service life, a decision must be made either to fully reconstruct the worn pavement or to construct an overlay layer. Because an overlay can be constructed on a rigid pavement that has not yet reached the end of its service life, it is often more economically attractive to apply overlay layers more frequently: the required overlay thickness for a structurally sound rigid pavement is much smaller than for one that has reached the end of its service life. Rigid and flexible overlays are both used for rehabilitation of rigid pavements such as JPCP, JRCP, and CRCP.
There are three subcategories of rigid pavement overlays, organized by the bonding condition at the interface between the overlay and the existing slab (illustrative thickness relations follow the list):
Bonded overlays
Unbonded overlays
Partially bonded overlays
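One widely used AASHTO-style formulation relates the required overlay thickness to the bonding condition named above; treat these as illustrative relations under stated assumptions rather than this article's prescription:

$$D_{ol} = D_f - D_{eff} \quad \text{(bonded)}, \qquad D_{ol} = \sqrt{D_f^{2} - D_{eff}^{2}} \quad \text{(unbonded)}$$

where $D_f$ is the slab thickness required for a new pavement under the design traffic and $D_{eff}$ is the effective thickness of the existing slab. For example, with $D_f = 10$ in and $D_{eff} = 8$ in, a bonded overlay needs about 2 in, while an unbonded overlay needs about $\sqrt{100 - 64} = 6$ in; this is why overlaying a still-sound slab is comparatively cheap.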
Drainage system design
Designing for proper drainage of highway systems is crucial to their success. A highway should be graded and built to remain "high and dry". Regardless of how well other aspects of a road are designed and constructed, adequate drainage is mandatory for a road to survive its entire service life. Excess water in the highway structure almost inevitably leads to premature failure, even if the failure is not catastrophic.
Each highway drainage system is site-specific and can be very complex. Depending on the geography of the region, many methods of drainage may not be applicable. The highway engineer must determine in which situations a particular design process should be applied, usually combining several appropriate methods and materials to direct water away from the structure. Pavement subsurface drainage and underdrains help provide extended life and reliable pavement performance. Excessive moisture under a concrete pavement can cause pumping, cracking, and joint failure.
Erosion control is a crucial component in the design of highway drainage systems. Surface drainage must allow precipitation to drain away from the structure. Highways must be designed with a slope or crown so that runoff water is directed to the shoulder of the road, into a ditch, and away from the site. Designing a drainage system requires the prediction of runoff and infiltration, open-channel analysis, and culvert design for directing surface water to an appropriate location.
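A minimal sketch of the runoff-prediction step, using the rational method, one common first approximation in drainage design; the runoff coefficient, storm intensity, and catchment area below are illustrative assumptions, not values from this article:

```python
# Minimal sketch of the rational method for peak runoff estimation,
# Q = C * i * A, in metric form. The coefficient and storm values are
# illustrative assumptions, not from this article.

def peak_runoff_m3s(c, intensity_mm_per_h, area_ha):
    """Peak discharge in m^3/s; dividing by 360 converts mm/h * ha to m^3/s."""
    return c * intensity_mm_per_h * area_ha / 360.0

# Example: paved surface (C ~ 0.9), 50 mm/h design storm, 2 ha catchment.
q = peak_runoff_m3s(0.9, 50.0, 2.0)
print(f"Design peak flow ~ {q:.2f} m^3/s")  # -> 0.25
```

The resulting peak flow then feeds the open-channel and culvert sizing mentioned above.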
Construction, maintenance, and management
Highway construction
Highway construction is generally preceded by detailed surveys and subgrade preparation. The methods and technology for constructing highways have evolved over time and become increasingly sophisticated. This advancement has raised the level of skill required to manage highway construction projects. The skills required vary from project to project, depending on factors such as the project's complexity and nature, the contrast between new construction and reconstruction, and differences between urban and rural projects.
There are a number of elements of highway construction which can be broken up into technical and commercial elements of the system. Some examples of each are listed below:
Technical elements
Materials
Material quality
Installation techniques
Traffic
Commercial elements
Contract understanding
Environmental aspects
Political aspects
Legal aspects
Public concerns
Typically, construction begins at the lowest elevation of the site, regardless of the project type, and moves upward. Reviewing the geotechnical specifications of the project provides information about:
Existing ground conditions
Required equipment for excavation, grading, and material transportation to and from the site
Properties of materials to be excavated
Dewatering requirements necessary for below-grade work
Shoring requirements for excavation protection
Water quantities for compaction and dust control
Subbase course construction
A subbase course is a layer of carefully selected materials located between the subgrade and the base course of the pavement. The subbase thickness is generally in the range of 4 to 16 inches, and it is designed to provide the structural capacity required of the pavement section.
Common materials used for a highway subbase include gravel, crushed stone, or subgrade soil that is stabilized with cement, fly ash, or lime. Permeable subbase courses are becoming more prevalent because of their ability to drain infiltrating water from the surface. They also prevent subsurface water from reaching the pavement surface.
When local material costs are excessive, or materials meeting the requirements for increasing the structural bearing of the subbase are not readily available, highway engineers can increase the bearing capacity of the underlying soil by mixing in Portland cement or foamed asphalt, or by using polymer soil stabilization, such as cross-linking styrene acrylic polymer, which can increase the California Bearing Ratio of in-situ materials by a factor of 4 to 6.
Base course construction
The base course is the region of the pavement section that is located directly under the surface course. If there is a subbase course, the base course is constructed directly above this layer. Otherwise, it is built directly on top of the subgrade. Typical base course thickness ranges from 4 to 6 inches and is governed by the properties of the underlying layers.
Heavy loads are continuously applied to pavement surfaces, and the base layer absorbs the majority of these stresses. Generally, the base course is constructed with an untreated crushed aggregate such as crushed stone, slag, or gravel. The base course material will have stability under the construction traffic and good drainage characteristics.
The base course materials are often treated with cement, bitumen, calcium chloride, sodium chloride, fly ash, or lime. These treatments provide improved support for heavy loads and reduced frost susceptibility, and serve as a moisture barrier between the base and surface layers.
Surface course construction
The two most commonly used types of pavement surface in highway construction are hot-mix asphalt and Portland cement concrete. These surface courses provide a smooth and safe riding surface while transferring the heavy traffic loads through the various base courses into the underlying subgrade soils.
Hot-mix asphalt layers
Hot-mix asphalt surface courses are referred to as flexible pavements. The Superpave system, developed in the late 1980s, introduced changes to the design approach, mix design, specifications, and quality testing of materials.
The construction of an effective, long-lasting asphalt pavement requires an experienced construction crew, committed to their work quality and equipment control.
Construction issues:
Asphalt mix segregation
Laydown
Compaction
Joints
A prime coat is a low-viscosity asphalt applied to the base course prior to laying the HMA surface course. This coat bonds loose material, creating a cohesive layer between the base course and the asphalt surface.
A tack coat is a low-viscosity asphalt emulsion used to create a bond between an existing pavement surface and a new asphalt overlay. Tack coats are also typically applied to adjacent concrete surfaces (such as curbs) to assist the bonding of the HMA and the concrete.
Portland cement concrete (PCC)
Portland cement concrete surface courses are referred to as rigid pavements, or concrete pavements. There are three general classifications of concrete pavements - jointed plain, jointed reinforced, and continuously reinforced.
Traffic loadings are transferred between sections when larger aggregates in the PCC mix inter-lock together, or through load transfer devices in the transverse joints of the surface. Dowel bars are used as load-transferring devices to efficiently transfer loads across transverse joints while maintaining the joint's horizontal and vertical alignment. Tie-bars are deformed steel bars that are placed along longitudinal joints to hold adjacent pavement sections in place.
Highway maintenance
The overall purpose of highway maintenance is to fix defects and preserve the pavement's structure and serviceability. Defects must be defined, understood, and recorded in order to create an appropriate maintenance plan. Maintenance planning amounts to solving an optimisation problem, and it can be predictive; in predictive maintenance planning, empirical, data-driven methods tend to give more accurate results than mechanistic models. Defects differ between flexible and rigid pavements.
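As a minimal sketch of maintenance planning as an optimisation problem, the following greedy budget allocation ranks road segments by condition gain per unit cost; the segment names, costs, and scores are invented purely for illustration:

```python
# Illustrative sketch of maintenance planning as an optimisation problem:
# greedily select road-segment treatments with the best condition-improvement
# per unit cost until the budget is exhausted. All data here are hypothetical.

segments = [
    # (name, treatment_cost, expected_condition_gain)
    ("A1 km 12-14", 120_000, 18.0),
    ("B7 km 3-5",    60_000, 12.0),
    ("C2 km 0-2",   200_000, 22.0),
]

def plan(segments, budget):
    chosen = []
    # Rank by benefit-cost ratio, highest first.
    for name, cost, gain in sorted(segments, key=lambda s: s[2] / s[1], reverse=True):
        if cost <= budget:
            chosen.append(name)
            budget -= cost
    return chosen

print(plan(segments, budget=250_000))  # -> ['B7 km 3-5', 'A1 km 12-14']
```

A real pavement management system would replace the invented scores with measured or predicted condition indices, which is where the data-driven methods mentioned above come in.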
There are four main objectives of highway maintenance:
repair functional pavement defects
extend the functional and structural service life of the pavement
maintain road safety and signage
keep the road reserve in an acceptable condition
Through routine maintenance practices, highway systems and all of their components can be maintained to their original, as-built condition.
Project management
Project management involves the organization and structuring of project activities from inception to completion. Activities could be the construction of infrastructure such as highways and bridges or major and minor maintenance activities related to constructing such infrastructure. The entire project and involved activities must be handled in a professional manner and completed within deadlines and budget. In addition, minimizing social and environmental impacts is essential to successful project management.
| Technology | Disciplines | null |
205624 | https://en.wikipedia.org/wiki/Horizontal%20gene%20transfer | Horizontal gene transfer | Horizontal gene transfer (HGT) or lateral gene transfer (LGT) is the movement of genetic material between organisms other than by the ("vertical") transmission of DNA from parent to offspring (reproduction). HGT is an important factor in the evolution of many organisms. HGT is influencing scientific understanding of higher-order evolution while more significantly shifting perspectives on bacterial evolution.
Horizontal gene transfer is the primary mechanism for the spread of antibiotic resistance in bacteria, and plays an important role in the evolution of bacteria that can degrade novel compounds such as human-created pesticides and in the evolution, maintenance, and transmission of virulence. It often involves temperate bacteriophages and plasmids. Genes responsible for antibiotic resistance in one species of bacteria can be transferred to another species through various mechanisms of HGT such as transformation, transduction, and conjugation, subsequently arming the recipient against antibiotics. The rapid spread of antibiotic resistance genes in this manner is becoming a challenge to manage in the field of medicine. Ecological factors may also play a role in the HGT of antibiotic resistance genes.
Horizontal gene transfer is recognized as a pervasive evolutionary process that distributes genes between divergent prokaryotic lineages and can also involve eukaryotes. HGT events are thought to occur less frequently in eukaryotes than in prokaryotes. However, growing evidence indicates that HGT is relatively common among many eukaryotic species and can have an impact on adaptation to novel environments. Its study, however, is hindered by the complexity of eukaryotic genomes and the abundance of repeat-rich regions, which complicate the accurate identification and characterization of transferred genes.
It is postulated that HGT promotes the maintenance of a universal life biochemistry and, subsequently, the universality of the genetic code.
History
Griffith's experiment, reported in 1928 by Frederick Griffith, was the first experiment suggesting that bacteria are capable of transferring genetic information through a process known as transformation. Griffith's findings were followed by research in the late 1930s and early 1940s that isolated DNA as the material that communicated this genetic information.
Horizontal genetic transfer was then described in Seattle in 1951, in a paper demonstrating that the transfer of a viral gene into Corynebacterium diphtheriae created a virulent strain from a non-virulent strain, simultaneously revealing the mechanism of diphtheria (that patients could be infected with the bacteria but not have any symptoms, and then suddenly convert later or never), and giving the first example for the relevance of the lysogenic cycle. Inter-bacterial gene transfer was first described in Japan in a 1959 publication that demonstrated the transfer of antibiotic resistance between different species of bacteria. In the mid-1980s, Syvanen postulated that biologically significant lateral gene transfer has existed since the beginning of life on Earth and has been involved in shaping all of evolutionary history.
As Jain, Rivera and Lake (1999) put it: "Increasingly, studies of genes and genomes are indicating that considerable horizontal transfer has occurred between prokaryotes" (see also Lake and Rivera, 2007). The phenomenon appears to have had some significance for unicellular eukaryotes as well. As Bapteste et al. (2005) observe, "additional evidence suggests that gene transfer might also be an important evolutionary mechanism in protist evolution."
Grafting of one plant to another can transfer chloroplasts (organelles in plant cells that conduct photosynthesis), mitochondrial DNA, and the entire cell nucleus containing the genome, potentially making a new species. Some Lepidoptera (e.g. monarch butterflies and silkworms) have been genetically modified by horizontal gene transfer from the wasp bracovirus. Bites from insects in the family Reduviidae (assassin bugs) can, via the parasite Trypanosoma cruzi, infect humans with Chagas disease; the parasite can insert its DNA into the human genome. It has been suggested that lateral gene transfer to humans from bacteria may play a role in cancer.
Aaron Richardson and Jeffrey D. Palmer state: "Horizontal gene transfer (HGT) has played a major role in bacterial evolution and is fairly common in certain unicellular eukaryotes. However, the prevalence and importance of HGT in the evolution of multicellular eukaryotes remain unclear."
Due to the increasing amount of evidence suggesting the importance of these phenomena for evolution (see below), molecular biologists such as Peter Gogarten have described horizontal gene transfer as "A New Paradigm for Biology".
Mechanisms
There are several mechanisms for horizontal gene transfer:
Transformation, the genetic alteration of a cell resulting from the introduction, uptake and expression of foreign genetic material (DNA or RNA). This process is relatively common in bacteria, but less so in eukaryotes. Transformation is often used in laboratories to insert novel genes into bacteria for experiments or for industrial or medical applications. | Biology and health sciences | Genetics | Biology |
205710 | https://en.wikipedia.org/wiki/Achernar | Achernar | Achernar is the brightest star in the constellation of Eridanus and the ninth-brightest in the night sky. It has the Bayer designation Alpha Eridani, which is Latinized from α Eridani and abbreviated Alpha Eri or α Eri. The name Achernar applies to the primary component of a binary system. The two components are designated Alpha Eridani A (the primary) and B (the secondary), with the latter known informally as Achernar B. As determined by the Hipparcos astrometry satellite, this system is located at a distance of approximately from the Sun.
Of the ten brightest stars in the night-time sky by apparent magnitude, Alpha Eridani is the hottest and bluest in color because it is of spectral type B. Achernar has an unusually rapid rotational velocity, causing it to be oblate in shape. The secondary is smaller, is of spectral type A, and orbits Achernar at a distance of .
Nomenclature
α Eridani (Latinised to Alpha Eridani) is the system's Bayer designation. The designations of the two components—Alpha Eridani A and B—derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
The system bears the traditional name of Achernar (sometimes spelled Achenar), derived from the Arabic , meaning "The End of the River". However, it seems that this name originally referred to Theta Eridani instead, which latterly was known by the similar traditional name Acamar, with the same etymology. The IAU Working Group on Star Names (WGSN) approved the name with the spelling Achernar for the component Alpha Eridani A on 30 June 2016 and it is now so included in the List of IAU-approved Star Names.
In Chinese, the adaptation of the European Southern Hemisphere constellations into the Chinese system produced an asterism whose name means Crooked Running Water, consisting of Achernar, ζ Phoenicis and η Phoenicis. Achernar itself is consequently known in Chinese by a name referring to this asterism.
The indigenous Boorong people of northwestern Victoria, Australia, named it Yerrerdetkurrk.
Namesake
USS Achernar (AKA-53) was a United States Navy attack cargo ship named after the star.
Properties
Achernar is in the deep southern sky and never rises above the horizon north of 33°N, roughly the latitude of Dallas, Texas. It is best seen from the Southern Hemisphere in November; it is circumpolar south of 33°S, roughly the latitude of Santiago. At that latitude (e.g., the south coast of South Africa, Cape Town to Port Elizabeth), it is only 1 degree above the horizon at lower culmination. Farther south, it is visible at all times during the night.
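The 33° figures follow from simple spherical astronomy, taking Achernar's declination to be roughly $\delta \approx -57°$ (an approximate value not quoted in this article). A star of southern declination $\delta$ never rises for observers at latitudes

$$\varphi > 90° - |\delta| \approx 90° - 57° = 33°\,\text{N},$$

and is circumpolar for observers at latitudes

$$\varphi < -(90° - |\delta|) \approx 33°\,\text{S},$$

consistent with both limits given above.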
Achernar is a bright, blue star about six to seven times the mass of the Sun. It has a stellar classification of B6 Vep, but despite appearing similar to a main sequence star, it is thought to have recently exhausted the hydrogen in its core and begun to evolve away from the main sequence. It has expanded to an average radius eight times the Sun's and is about 3,000 times more luminous. Infrared observations of the star using an adaptive optics system on the Very Large Telescope show that it has a companion star in a close orbit. This appears to be an A-type star in the stellar classification range A0V–A3V, which suggests a stellar mass of about double that of the Sun. The separation of the two stars is and their orbital period is 7 years.
The brightness of Achernar varies very slightly, by a maximum of 0.06 magnitudes or about 6%. A period of is given in the General Catalogue of Variable Stars, but several periods have been identified between about and . The longest periods are very similar to the rotation period of the star, although the exact period appears to vary as the rotational velocity of its upper atmosphere changes. The shortest periods may be harmonics of the longer periods. The variability type of Achernar is given only as a Be star and the exact causes of the brightness changes are unknown. The star itself appears to pulsate and the disk around it varies in size and shape as well as apparently disappearing at times.
As of 2015, Achernar was the least spherical star known in the Milky Way. It spins so rapidly that it has assumed the shape of an oblate spheroid with an equatorial diameter 35% greater than its polar diameter. The oblateness of Achernar is comparable to that of the dwarf planet Haumea and the stars Altair and Regulus. The polar axis is inclined about 60.6° to the line of sight from the Earth. Since it is actually a binary star, its highly distorted shape may cause non-negligible departures of the companion's orbital trajectory with respect to a Keplerian ellipse.
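The 35% figure corresponds to a flattening of roughly a quarter, a one-line check using only the article's own number: with $R_{\text{eq}} = 1.35\,R_{\text{pol}}$,

$$f = \frac{R_{\text{eq}} - R_{\text{pol}}}{R_{\text{eq}}} = \frac{0.35}{1.35} \approx 0.26.$$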
Because of the distorted shape of this star, there is a significant temperature variation by latitude. At the pole, the temperature is , while the equator is at . The average temperature of the star is about . The high polar temperatures generate a fast polar wind that ejects matter from the star, creating a polar envelope of hot gas and plasma. The entire star is surrounded by an extended envelope that can be detected by its excess infrared emission or by its polarization. The presence of a circumstellar disk of ionized gas is a common feature of Be stars such as this. The disk is not stable and periodically decretes back into the star. The maximum polarization for Achernar's disk was observed in September 2014, and it is now decreasing.
Co-moving companion
The red dwarf 2MASS J01375879−5645447 lies about half a degree north of Achernar. It has been identified as being at the same distance and sharing a common proper motion, as well as being of about the same age. The projected separation of the two is slightly over one light year and they would not be gravitationally bound, but it is proposed that both are part of the Tucana-Horologium association.
Historical visibility
Precession caused Achernar to lie much further south in ancient times than at present: 7.5 degrees from the south celestial pole around 3400 BCE (declination ) and still as far south as declination by around 1500 BCE. Hence the Ancient Egyptians could not have known it. Even in 100 CE, its declination was around , meaning Ptolemy could not possibly have seen it from Alexandria. However, it was visible from Syene in the time of the Almagest.
Until about March 2000, Achernar and Fomalhaut were the two first-magnitude stars farthest from any other, their nearest neighbors being each other. Antares is now the most isolated first-magnitude star. Antares is located in a constellation (Scorpius) with many bright second-magnitude stars, whereas the stars surrounding Alpha Eridani and Fomalhaut are considerably fainter.
The first star atlas to contain Achernar in the chart of Eridanus is Johann Bayer's Uranometria. Bayer did not observe it himself, and the first European knowledge of it is attributed to Pieter Dirkszoon Keyser on the first voyage of the Dutch to the East Indies ("Eerste Schipvaart"). Thus it was the only first-magnitude star not listed in Ptolemy's Almagest.
Alpha Eridani will continue to move north in the next few millennia, reaching its maximum northern declination between the 8th and 11th millennia, when it will be visible as far north as Germany and southern England.
| Physical sciences | Notable stars | Astronomy |
205716 | https://en.wikipedia.org/wiki/Secretarybird | Secretarybird | The secretarybird or secretary bird (Sagittarius serpentarius) is a large bird of prey that is endemic to Africa. It is mostly terrestrial, spending most of its time on the ground, and is usually found in the open grasslands and savanna of the sub-Saharan region. John Frederick Miller described the species in 1779. A member of the order Accipitriformes, which also includes many other diurnal birds of prey such as eagles, hawks, kites, vultures, and harriers, it is placed in its own family, Sagittariidae.
The secretarybird is instantly recognizable as a very large bird with an eagle-like body on crane-like legs that give the bird a height of as much as . The sexes are similar in appearance. Adults have a featherless red-orange face and predominantly grey plumage, with a flattened dark crest and black flight feathers and thighs.
Breeding can take place at any time of year but tends to be late in the dry season. The nest is built at the top of a thorny tree, and a clutch of one to three eggs is laid. In years with plentiful food all three young can survive to fledging. The secretarybird hunts and catches prey on the ground, often stomping on victims to kill them. Insects and small vertebrates make up its diet.
Although the secretarybird resides over a large range, the results of localised surveys suggest that the total population is experiencing a rapid decline, probably as a result of habitat destruction. The species is therefore classed as Endangered by the International Union for Conservation of Nature. The secretarybird appears on the coats of arms of Sudan and South Africa.
Taxonomy
The Dutch naturalist Arnout Vosmaer described the secretarybird in 1769 on the basis of a live specimen that had been sent to Holland from the Cape of Good Hope two years earlier by an official of the Dutch East India Company. Vosmaer suggested that the species was called "sagittarius" by the Dutch settlers because its gait was thought to resemble an archer's. He also mentioned that it was known as the "secretarius" by farmers who had domesticated the bird to combat pests around their homesteads, and proposed that the word "secretarius" might be a corruption of "sagittarius". Ian Glenn of the University of the Free State suggests that Vosmaer's "sagittarius" is a misheard or mis-transcribed form of "secretarius", rather than the other way around.
In 1779 the English illustrator John Frederick Miller included a coloured plate of the secretarybird in his Icones animalium et plantarum and coined the binomial name Falco serpentarius. As the oldest published specific name, serpentarius has priority over later scientific names. The species was assigned to its own genus Sagittarius in 1783 by the French naturalist Johann Hermann in his Tabula affinitatum animalium. The generic name Sagittarius is Latin for "archer", and the specific epithet serpentarius is from Latin serpens meaning "serpent" or "snake". A second edition of Miller's plates was published in 1796 as Cimelia physica, with added text by English naturalist George Shaw, who named it Vultur serpentarius. The French naturalist Georges Cuvier erected the genus Serpentarius in 1798, and the German naturalist Johann Karl Wilhelm Illiger erected the (now synonymous) genus Gypogeranus from the Ancient Greek words gyps "vulture" and geranos "crane" in 1811.
In 1835 the Irish naturalist William Ogilby spoke at a meeting of the Zoological Society of London and proposed three species of secretarybird, distinguishing those from Senegambia as having broader crest feathers than those from South Africa, and reporting a distinct species from the Philippines based on the writings of Pierre Sonnerat in his Voyage à la Nouvelle-Guinée. There is no other evidence this taxon existed. Despite its large range, the secretarybird is considered monotypic: no subspecies are recognised.
The evolutionary relationship of the secretarybird to other raptors had long puzzled ornithologists. The species was usually placed in its own family Sagittariidae within the order Falconiformes. A large molecular phylogenetic study published in 2008 concluded that the secretarybird was sister to a clade containing the ospreys in the family Pandionidae and the kites, hawks and eagles in the family Accipitridae. The same study found that the falcons in the order Falconiformes were only distantly related to the other diurnal birds of prey. The families Cathartidae, Sagittariidae, Pandionidae and Accipitridae were therefore moved from Falconiformes to the resurrected Accipitriformes. A later molecular phylogenetic study published in 2015 confirmed these relationships.
The earliest fossils associated with the family are two species from the genus Pelargopappus. The two species, from the Oligocene and Miocene respectively, were discovered in France. The feet in these fossils are more like those of the Accipitridae; it is suggested that these characteristics are primitive features within the family. In spite of their age, the two species are not thought to be ancestral to the secretarybird. Though strongly convergent with the modern secretarybird, the extinct raptor Apatosagittarius is thought to be an accipitrid.
The International Ornithologists' Union has designated "secretarybird" the official common name for the species. In 1780 the French polymath Georges-Louis Leclerc, Comte de Buffon suggested that the name secretary/secrétaire had been chosen because of the long quill-like feathers at the top of the bird's neck, reminiscent of a quill pen behind the ear of an ancient scribe. In 1977, C. Hilary Fry of Aberdeen University suggested that "secretary" is from the French secrétaire, a corruption of the Arabic saqr et-tair meaning either "hawk of the semi-desert" or "hawk that flies". Glenn has dismissed this etymology on the grounds that there is no evidence that the name came through French, instead supporting Buffon's etymology; namely, that the word comes from the Dutch secretaris "secretary", used by settlers in South Africa.
Description
The secretarybird is instantly recognisable as a very large terrestrial bird with an eagle-like head and body on crane-like legs. It stands about tall. It has a length of between and a wingspan of between . The weight ranges from , with a mean of . The tarsus averages and the tail is ; both factor into making it taller and longer than any other species of raptor. The neck is not especially long and can only be lowered to the intertarsal joint, so birds must stoop to reach the ground.
During flight, two elongated central feathers of the tail extend beyond the feet, and the neck stretches out like a stork. The plumage of the crown, upperparts, and lesser and median wing coverts is blue-grey, and the underparts and underwing coverts are lighter grey to grey-white. The crest is made up of long black feathers arising from the nape. The scapulars, primary and secondary flight feathers, rump and thighs are black, while the uppertail coverts are white, though barred with black in some individuals. The tail is wedge-shaped with white tipping, marbled grey and black colouring at the base, and two broad black bands, one at the base and the other at the end.
Sexes resemble one another, although the male tends to have longer tail feathers, more head plumes, a shorter head and more blue-grey plumage. Adults have a featherless red-orange face with pale brown irises and a yellow cere. The legs and feet are pinkish grey, the upper legs clad in black feathers. The toes are short—around 20% of the length of those of an eagle of the same size—and stout, so that the bird is unable to grasp objects with its feet. The rear toe is small and the three forward facing toes are connected at the base by a small web. Immature birds have yellow rather than orange bare skin on their faces, more brownish plumage, shorter tail feathers and greyish rather than brown irises.
Adults are normally silent but can utter a deep guttural croaking noise in nuptial displays or at nests. Secretarybirds make this sound when greeting their mates or in a threat display or fight against other birds, sometimes throwing their head backwards at the same time. When alarmed, the secretarybird may emit a high-pitched croak. Mated pairs at the nest make soft clucking or whistling calls. Chicks make a sharp sound heard as "chee-uk-chee-uk-chee-uk" for their first 30 days.
Distribution and habitat
The secretarybird is endemic to sub-Saharan Africa and is generally non-migratory, though it may be locally nomadic as it follows rainfall and the resulting abundance of prey. Its range extends from Senegal to Somalia and south to Western Cape, South Africa.
The species is also found at a variety of elevations, from the coastal plains to the highlands. The secretarybird prefers open grasslands, savannas and shrubland (Karoo) rather than forests and dense shrubbery that may impede its cursorial existence. More specifically, it prefers areas with grass under high and avoids those with grass over high. It is rarer in grasslands in northern parts of its range that otherwise appear similar to areas in southern Africa where it is abundant, suggesting it may avoid hotter regions. It also avoids deserts.
Behaviour and ecology
Secretarybirds are not generally gregarious aside from pairs and their offspring. They usually roost in trees of the genus Acacia or Balanites, or even introduced pine trees in South Africa. They set off 1–2 hours after dawn, generally after spending some time preening. Mated pairs roost together but may forage separately, though often remaining in sight of one another. They pace around at a speed of , taking 120 steps per minute on average. After spending much of the day on the ground, secretarybirds return at dusk, moving downwind before flying in upwind. Birds encountered singly are often unattached males, their territories generally in less suitable areas. Conversely, larger groups of up to 50 individuals may be present at an area with a localised resource such as a waterhole in a dry area or an irruption of rodents or locusts fleeing a fire.
Secretarybirds soar with their primary feathers splayed to manage turbulence. They can fly with flapping wings, though in a slow, laborious manner that requires uplift to be sustained; otherwise they may become exhausted. In the heat of the day, they use thermals to rise up to above the ground.
The bird's average lifespan is thought to be 10 to 15 years in the wild and up to 19 years in captivity. The oldest confirmed secretarybird in the wild was a 5-year-old that was banded as a nestling on 23 July 2011 in Bloemfontein and recovered away in Mpumalanga on 7 June 2016.
Secretarybirds, like all birds, have haematozoan blood parasites that include Leucocytozoon beaurepairei (Dias 1954 recorded from Mozambique). Wild birds from Tanzania have been found to have Hepatozoon ellisgreineri, a genus that is unique among avian haematozoa in maturing within granulocytes, mainly neutrophils. Ectoparasites include the lice Neocolpocephalum cucullare (Giebel) and Falcolipeurus secretarius (Giebel).
Breeding
Secretarybirds form monogamous pairs and defend a large territory of around . They can breed at any time of the year, more frequently in the late dry season. During courtship, they exhibit a nuptial display by soaring high with undulating flight patterns and calling with guttural croaking. Males and females can also perform a ground display by chasing each other with their wings up and back, which is also the way they defend their territory. They mate either on the ground or in trees.
The nest is built by both sexes at the top of a dense thorny tree, often an Acacia, at a height of between above the ground. The nest is constructed as a relatively flat platform of sticks across with a depth . The shallow depression is lined with grass and the occasional piece of dung.
Eggs are laid at 2- to 3-day intervals until the clutch of 1–3 eggs is complete. The elongated chalky bluish green or white eggs average and weigh . Both parents incubate the eggs, starting as soon as the first egg is laid, but it is usually the female that remains on the nest overnight. The incubating parent greets its partner when it returns with a display of bowing and bobbing its head with neck extended. The tail is held upright with feathers fanned out, and the chest feathers are puffed out.
The eggs hatch after around 45 days at intervals of 2–3 days. Both parents feed the young. The adults regurgitate food onto the floor of the nest and then pick up items and pass them to the chicks. For the first 2 or 3 weeks after the eggs hatch the parents take turns to stay at the nest with the young. Despite the difference in nestling size due to the asynchronous hatching, little sibling aggression has been observed. Under favourable conditions all chicks from a clutch of three eggs fledge, but if food is scarce one or more of the chicks will die from starvation. The young may be preyed upon by crows, ravens, hornbills, and large owls.
The young are born covered in grey-white down that becomes darker grey after two weeks. Their bare facial skin and legs are yellow. Crest feathers appear at 21 days, and flight feathers by 28 days. They can stand up and feed autonomously after 40 days, although the parents still feed the nestlings after that time. At 60 days, the now fully-feathered young start to flap their wings. Their weight gain over this period changes from at hatching, to at 20 days, at 30 days, at 40 days, at 50 days, at 60 days, and at 70 days. The time they leave the nest can be anywhere between 65 and 106 days of age, although it most typically occurs between 75 and 80 days of age. Fledging is accomplished by jumping out of the nest or using a semi-controlled glide to the ground.
Juveniles remain in their natal range before dispersing when they are between 4 and 7 months of age. The usual age at which they first breed is uncertain but there is a record of a male bird breeding successfully at an age of 2 years and 9 months, which is young for a large raptor.
Food and feeding
Unlike most birds of prey, the secretarybird is largely terrestrial, hunting its prey on foot. Adults hunt in pairs and sometimes as loose familial flocks, stalking through the habitat with long strides. Prey may consist of insects such as locusts, other grasshoppers, wasps, and beetles, but small vertebrates often form the main part of the biomass taken. Secretarybirds are known to hunt rodents, frogs, lizards, small tortoises, and birds such as warblers, larks, doves, small hornbills, and domestic chickens. They occasionally prey on larger mammals such as hedgehogs, mongooses, small felids such as cheetah cubs, striped polecats, young gazelles, and both young and full-grown hares. The importance of snakes in the diet has been exaggerated in the past, although they can be locally important, and venomous species such as adders and cobras are regularly among the types of snakes preyed upon. Secretarybirds do not eat carrion, though they occasionally eat animals killed in grass or bushfires.
The birds often flush prey from tall grass by stomping on the surrounding vegetation. Their crest feathers may rise during a hunt, which may serve to help scare the target and provide shade for the face. A bird will chase after prey with the wings spread and kill by striking with swift blows of the feet. Only with small prey items such as wasps will the bird use its bill to pick them up directly. There are some reports that, when capturing snakes, a secretarybird will take flight with its prey and then drop it to its death, although this has not been verified. Even with larger prey, food is generally swallowed whole through the birds' considerable gape. Occasionally, like other raptors, they will hold down a food item with their feet while tearing it apart with their bill.
Food that cannot be digested is regurgitated as pellets in diameter and in length. These are dropped on the ground usually near the roost or nest trees. The secretarybird has a relatively short digestive tract in comparison to large African birds with more mixed diets, such as the kori bustard. The foregut is specialised for the consumption of large amounts of meat and there is little need for the mechanical breakdown of food. The crop is dilated and the gizzard is nonmuscular, as in other carnivorous birds. The large intestine has a pair of vestigial ceca as there is no requirement for the fermentative digestion of plant material.
Secretarybirds specialise in stomping their prey until it is killed or immobilised. This method of hunting is commonly applied to lizards or snakes. An adult male trained to strike at a rubber snake on a force plate was found to hit with a force equal to five times its own body weight, with a contact period of only 10–15 milliseconds. This short time of contact suggests that the secretarybird relies on superior visual targeting to determine the precise location of the prey's head. Although little is known about its visual field, it is assumed that it is large, frontal and binocular. Secretarybirds have unusually long legs (nearly twice as long as other ground birds of the same body mass), which is thought to be an adaptation for the bird's unique stomping and striking hunting method. However, these long limbs appear to also lower its running efficiency. Ecophysiologist Steve Portugal and colleagues have hypothesised that the extinct Phorusrhacidae (terror birds) may have employed a similar hunting technique to secretarybirds because they are anatomically similar, although they are not closely related.
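For a rough sense of scale, the impulse delivered in a single strike can be estimated from the quoted figures. The sketch below assumes a body mass of about 4 kg, a typical adult value that is not stated in the text, and uses the reported force of five times body weight over a 10–15 ms contact.

```python
# Rough impulse estimate for a secretarybird strike. The 4 kg body mass is
# an illustrative assumption; the 5x body weight and 10-15 ms figures are
# from the force-plate measurement described above.
G = 9.81  # gravitational acceleration, m/s^2

def strike_impulse(body_mass_kg: float, contact_ms: float) -> float:
    """Return the impulse (N*s) of a strike at five times body weight."""
    force = 5.0 * body_mass_kg * G          # peak force, N
    return force * (contact_ms / 1000.0)    # impulse = force x contact time

for dt in (10.0, 15.0):
    print(f"contact {dt} ms -> impulse {strike_impulse(4.0, dt):.2f} N*s")
```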
Secretarybirds rarely encounter other predators, except in the case of tawny eagles, which will steal their kills. Eagles mainly steal larger prey and will attack secretarybirds either singly or in pairs. Secretarybird pairs are sometimes successful in driving the eagles away and may even knock them down and pin them to the ground.
Relationship with humans
Cultural significance
The secretarybird is depicted on an ivory knife handle recovered from Abu Zaidan in Upper Egypt, dating to the Naqada III culture (c. 3,200 BC). This and other knife handles indicate the secretarybird most likely occurred historically further north along the Nile.
The secretarybird has traditionally been admired in Africa for its striking appearance and ability to deal with pests and snakes. As such it has often not been disturbed, although this is changing as traditional observances have declined. It is a prominent feature on the coat of arms of South Africa, which was adopted in 2000. With its wings outstretched, it represents growth, and its penchant for killing snakes is symbolic as the protector of the South African state against enemies. It is on the emblem of Sudan, adopted in 1969. It is featured on the Sudanese presidential flag and presidential seal. The secretarybird has been a common motif for African countries on postage stamps: over a hundred stamps from 37 issuers are known, including some from stamp-issuing entities such as Ajman, Manama, and the Maldives, regions where the bird does not exist, as well as the United Nations.
The Maasai people call it ol-enbai nabo, or "one arrow", referring to its crest feathers. They have used parts of the bird in traditional medicine: its feathers could be burnt and the resulting smoke inhaled to treat epilepsy, its egg could be consumed with tea twice daily to treat headaches, and its fat could be boiled and drunk for child growth or livestock health. The Xhosa people call the bird inxhanxhosi and attribute great intelligence to it in folklore. The Zulus call it intungunono.
The German biologist Ragnar Kinzelbach proposed in 2008 that the secretarybird was recorded in the 13th-century work De arte venandi cum avibus by Holy Roman Emperor Frederick II. Described as bistarda deserti, it was mistaken for a bustard. Frederick most likely gained knowledge of the bird from sources in Egypt. The 16th-century French priest and traveller André Thevet also wrote a description of a mysterious bird in 1558 that has been likened by Kinzelbach to this species.
Threats and conservation
In 1968 the species became protected under the African Convention on the Conservation of Nature and Natural Resources. The International Union for Conservation of Nature (IUCN) listed the secretarybird in 2016 as a vulnerable species and in 2020 as endangered, due to a recent rapid decline across its entire range. Although widespread, the species is thinly distributed across its range; its population was estimated in 2016 at anywhere between 6,700 and 67,000 individuals. Long-term monitoring across South Africa between 1987 and 2013 has shown that populations have declined across the country, even in protected areas such as Kruger National Park, owing to woody plant encroachment, an increase in tall vegetation cover that results in the loss of the open habitat the species prefers.
As a population, the secretarybird is mainly threatened by loss of habitat due to fragmentation by roads and development and overgrazing of grasslands by livestock. Some adaptation to altered areas has been recorded but the trend is for decline.
In captivity
The first successful rearing of a secretarybird in captivity occurred in 1986 at the Oklahoma City Zoo. Although secretarybirds normally build their nests in the trees in the wild, the captive birds at the zoo built theirs on the ground, which left them open to depredation by local wild mammals. To address this problem, the zoo staff removed the eggs from the nest each time they were laid so that they could be incubated and hatched at a safer location. The species has also been bred and reared in captivity at the San Diego Zoo Safari Park.
In June 2024, a secretarybird chick was successfully hatched at Longleat Safari Park in Wiltshire, born to parents Janine and Kevin, who have lived at the park since 2018. The chick’s sex is not yet known, and keepers are providing smaller food items for the protective parents. This successful hatch is seen as a promising step towards establishing a new breeding program for the species at the park.
| Biology and health sciences | Accipitriformes and Falconiformes | null |
205718 | https://en.wikipedia.org/wiki/Mimosa%20%28star%29 | Mimosa (star) | Mimosa is the second-brightest object in the southern constellation of Crux (after Acrux), and the 20th-brightest star in the night sky. It has the Bayer designation β Crucis, which is Latinised to Beta Crucis and abbreviated Beta Cru or β Cru. Mimosa forms part of the prominent asterism called the Southern Cross. It is a binary star or a possible triple star system.
Nomenclature
β Crucis (Latinised to Beta Crucis) is the system's Bayer designation. Although Mimosa is at roughly −60° declination, and therefore not visible north of 30° latitude, in the time of the ancient Greeks and Romans it was visible north of 40° due to the precession of the equinoxes, and these civilizations regarded it as part of the constellation of Centaurus.
It bore the traditional name Mimosa and the historical name Becrux. Mimosa, which is derived from the Latin for 'actor', may come from the flower of the same name. Becrux is a modern contraction of the Bayer designation. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Mimosa for this star.
In Chinese, the name meaning 'Cross' refers to an asterism consisting of Acrux, Mimosa, Gamma Crucis, and Delta Crucis; Mimosa itself is consequently named as a member star of this Cross asterism.
Stellar system
Based on parallax measurements, Mimosa is located at a distance of from the Earth. In 1957, German astronomer Wulff-Dieter Heintz discovered that it is a spectroscopic binary with components that are too close together to resolve with a telescope. The pair orbit each other every 5 years with an estimated separation that varies from 5.4 to 12.0 astronomical units. The system is only 8 to 11 million years old.
The primary, β Crucis A, is a massive star with about 16 times the Sun's mass. The projected rotational velocity of this star is about . However, the orbital plane of the pair is inclined at only about 10°, which probably means the inclination of the star's rotation axis is also low. This suggests that the true azimuthal rotational velocity is quite high, at about . With a radius of about 8.4 times the radius of the Sun, this would mean the star has a rotational period of only about 3.6 days.
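As a quick consistency check on the figures above, the relation $v = 2\pi R/P$ recovers the equatorial velocity implied by the quoted radius of 8.4 solar radii and the quoted period of about 3.6 days. The sketch below is illustrative only and uses the IAU nominal solar radius.

```python
import math

R_SUN = 6.957e8  # nominal solar radius, m

def equatorial_velocity(radius_rsun: float, period_days: float) -> float:
    """Equatorial rotation speed (km/s) from v = 2*pi*R / P."""
    r = radius_rsun * R_SUN
    p = period_days * 86400.0
    return 2.0 * math.pi * r / p / 1000.0

# Radius ~8.4 R_sun and period ~3.6 days, as quoted above.
print(f"{equatorial_velocity(8.4, 3.6):.0f} km/s")  # ~118 km/s
```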
β Crucis A is a known β Cephei variable, although with an effective temperature of about 27,000 K it is at the hot edge of the instability strip where such stars are found. It has three different pulsation modes, none of which are radial. The periods of all three modes are in the range of 4.03–4.59 hours. Owing to the first application of polarimetry to asteroseismic observations, it is the heaviest star to have its age determined by asteroseismology. The star has a stellar classification of B0.5 III. While the luminosity class is typically associated with giant stars that have exhausted the supply of hydrogen at their cores, Mimosa's temperature and luminosity imply that it is more likely to be a main sequence star fusing hydrogen into helium in its core. At more than ten times the mass of the Sun, Mimosa has sufficient mass to explode as a supernova, which might occur in roughly 6 million years. The high temperature of the star's outer envelope is what gives the star the blue-white hue that is characteristic of B-type stars. It is generating a strong stellar wind and is losing about per year, or the equivalent of the mass of the Sun every 100 million years. The wind is leaving the system with a velocity of 2,000 km/s or more.
The secondary, β Crucis B, may be a main sequence star with a stellar class of B2. In 2007, a third companion was announced, which may be a low mass, pre-main sequence star. The X-ray emission from this star was detected using the Chandra X-ray Observatory. Two other stars, located at angular separations of 44 and 370 arcseconds, are likely optical companions that are not physically associated with the system. The β Crucis system may be a member of the Lower Centaurus–Crux sub-group of the Scorpius–Centaurus association. This is a stellar association of stars that share a common origin.
In culture
Mimosa is represented in the flags of Australia, New Zealand, Samoa and Papua New Guinea as one of five stars making up the Southern Cross. It is also featured in the flag of Brazil, along with 26 other stars, each of which represents a state. Mimosa represents the State of Rio de Janeiro.
A vessel named MV Becrux is used to export live cattle from Australia to customers in Asia. An episode dedicated to the vessel features in the television documentary series Mighty Ships.
| Physical sciences | Notable stars | Astronomy |
205719 | https://en.wikipedia.org/wiki/Beta%20Centauri | Beta Centauri | Beta Centauri is a triple star system in the southern constellation of Centaurus. It is officially called Hadar (). The Bayer designation of Beta Centauri is Latinised from β Centauri, and abbreviated Beta Cen or β Cen. The system's combined apparent visual magnitude of 0.61 makes it the second-brightest object in Centaurus and the eleventh brightest star in the night sky. According to parallax measurements from the astrometric Hipparcos satellite, the distance to this system is about .
Nomenclature
β Centauri (Latinised to Beta Centauri) is the star system's Bayer designation.
It bore the traditional names Hadar and Agena. Hadar comes from the Arabic حضار (the root's meaning is "to be present" or "on the ground" or "settled, civilized area"), while the name Agena is thought to be derived from the Latin genua, meaning "knees", from the star's position on the left knee of the centaur depicted in the constellation Centaurus. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Hadar for the star β Centauri Aa on 21 August 2016 and it is now so entered in the IAU Catalog of Star Names.
The Chinese name for the star is 马腹一 (Mandarin: mǎ fù yī, "the First Star of the Horse's Abdomen").
The Boorong people indigenous to what is now northwestern Victoria, Australia named it Bermbermgle (together with α Centauri), two brothers who were noted for their courage and destructiveness, and who spear and kill Tchingal, "The Emu" (Coalsack Nebula). The Wotjobaluk people name the two brothers Bram-bram-bult.
Visibility
Beta Centauri is one of the brightest stars in the sky at magnitude 0.61. Its brightness varies by a few hundredths of a magnitude, too small to be noticeable to the naked eye. Because of its spectral type and the detection of pulsations, the Aa component has been classified as a β Cephei variable.
Beta Centauri is well known in the Southern Hemisphere as the inner of the two "Pointers" to the constellation Crux, popularly known as the Southern Cross. A line made from the other pointer, Alpha Centauri, through Beta Centauri leads to within a few degrees of Gacrux, the star at the north end of the cross. A navigator can then draw a line from Gacrux through Acrux, the star at the south end of the cross, to point approximately toward the south celestial pole and so determine south.
Stellar system
The Beta Centauri system is made up of three stars: Beta Centauri Aa, Beta Centauri Ab, and Beta Centauri B. All the spectral lines detected are consistent with a B1-type star, with only the line profiles varying, so it is thought that all three stars have the same spectral type.
In 1935, Joan Voûte identified Beta Centauri B, giving it the identifier VOU 31. The companion is separated from the primary by 1.3 seconds of arc and has remained so since its discovery, although the position angle has changed by six degrees over that time. Beta Centauri B is a B1 dwarf with an apparent magnitude of 4.
In 1967, Beta Centauri's observed variation in radial velocity suggested that Beta Centauri A is a binary star. This was confirmed in 1999. It consists of a pair of stars, β Centauri Aa and β Centauri Ab, of similar mass that orbit each other over a period of 357 days with a large eccentricity of about 0.8245.
The pair were calculated to be separated by a mean distance of roughly 4 astronomical units (based on a distance to the system of 161 parsecs) in 2005.
Both Aa and Ab apparently have a stellar classification of B1 III, with the luminosity class of III indicating giant stars that are evolving away from the main sequence. Component Aa rotates much more rapidly than Ab, causing its spectral lines to be broader, and so the two components can be distinguished in the spectrum. Component Ab, the slow-rotating star, has a strong magnetic field although no detected abundance peculiarities in its spectrum. Multiple pulsation modes have been detected in component Aa, some of which correspond to brightness variations, so this star is considered to be variable. The detected pulsation modes correspond to those for both β Cephei variables and slowly pulsating B stars. Similar pulsations have not been detected in component Ab, but it is possible that it is also a variable star.
Aa is 12.02 ± 0.13 times as massive as the Sun, while Ab is 10.58 ± 0.18 times as massive.
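These masses, together with the 357-day period and the eccentricity quoted above, allow a consistency check via Kepler's third law in solar units ($a^3 = (M_1 + M_2)\,P^2$, with $a$ in astronomical units, $P$ in years, and masses in solar masses). The sketch below is illustrative; the time-averaged separation formula $\langle r \rangle = a(1 + e^2/2)$ is a standard result for a Keplerian ellipse.

```python
# Kepler's third law in solar units: a^3 = (M1 + M2) * P^2.
m_total = 12.02 + 10.58          # total mass, M_sun, from the values above
p_years = 357.0 / 365.25         # orbital period in years
a = (m_total * p_years**2) ** (1.0 / 3.0)
e = 0.8245
mean_sep = a * (1.0 + e**2 / 2.0)  # time-averaged separation of the ellipse
print(f"semi-major axis ~{a:.2f} AU, time-averaged separation ~{mean_sep:.1f} AU")
# ~2.8 AU and ~3.7 AU, consistent with the 'roughly 4 AU' mean distance quoted.
```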
| Physical sciences | Notable stars | Astronomy |
205721 | https://en.wikipedia.org/wiki/Regulus | Regulus | Regulus is the brightest object in the constellation Leo and one of the brightest stars in the night sky. It has the Bayer designation α Leonis, which is Latinized to Alpha Leonis, and abbreviated Alpha Leo or α Leo. Regulus appears singular, but is actually a quadruple star system composed of four stars that are organized into two pairs. The spectroscopic binary Regulus A consists of a blue-white main-sequence star and its companion, which has not yet been directly observed, but is probably a white dwarf. The system lies approximately 79 light years from the Sun.
HD 87884 is separated from Regulus by and is itself a close pair. Regulus and five slightly dimmer stars (Zeta Leonis, Mu Leonis, Gamma Leonis, Epsilon Leonis, and Eta Leonis) collectively form 'the Sickle', an asterism that marks the head of Leo.
Nomenclature
α Leonis (Latinized to Alpha Leonis) is the star system's Bayer designation. The traditional name Rēgulus is Latin for 'prince' or 'little king'. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Regulus for this star. It is now so entered in the IAU Catalog of Star Names.
Observation
The Regulus system as a whole is the twenty-first brightest star in the night sky with an apparent magnitude of +1.35. The light output is dominated by Regulus A. Regulus B, if seen in isolation, would be a binocular object of magnitude +8.1, and its companion, Regulus C, the faintest of the three stars that has been directly observed, would require a substantial telescope to be seen, at magnitude +13.5. Regulus A is itself a spectroscopic binary; the secondary star has not yet been directly observed as it is much fainter than the primary. The BC pair lies at an angular distance of 177 arc-seconds from Regulus A, making them visible in amateur telescopes.
Regulus is 0.465 degrees from the ecliptic, the closest of the bright stars, and is often occulted by the Moon. This occurs in spates every 9.3 years, due to lunar precession. The last spate was around 2017, with occultations every month from December 2016 till July 2017, each one limited to certain areas on Earth. Occultations by Mercury and Venus are possible but rare, as are occultations by asteroids. Seven other stars with a Bayer designation lie less than 0.9° from the ecliptic (the idealized mean plane of Earth's orbit and the mean apparent path of the Sun); the next brightest of these is δ (Delta) Geminorum, of magnitude +3.53. Because Regulus lies so close to the mean orbital plane of the large bodies of the Solar System and delivers more light to the Earth than these other stars, it is especially useful telescopically for studying and identifying objects that occult it and cast their shadow on a telescope, including known or unknown asteroids of the Solar System such as Trojans, which by definition lie near their associated planetary plane.
The last occultation of Regulus by a planet was on July 7, 1959, by Venus. The next will occur on October 1, 2044, also by Venus. Other planets will not occult Regulus over the next few millennia because of their node positions. An occultation of Regulus by the asteroid 166 Rhodope was filmed in Italy on October 19, 2005. Differential bending of light was measured to be consistent with general relativity. Regulus was occulted by the asteroid 163 Erigone in the early morning of March 20, 2014. The center of the shadow path passed through New York and eastern Ontario, but no one is known to have seen it, due to cloud cover. The International Occultation Timing Association recorded no observations at all.
Although best seen in the evening in the northern hemisphere's late winter and spring, Regulus appears at some time of night throughout the year, except for about a month on either side of August 22–24, when the Sun is too close (the exact period depending on one's ability to compensate for the Sun's glare, which is easiest in twilight). The star can be viewed the whole night, crossing the sky, in late February. Regulus passes through SOHO's LASCO C3 every August.
For Earth observers, the heliacal rising (pre-sunrise appearance) of Regulus occurs late in the first week of September, or in the second week. Every 8 years, Venus passes very near the star system around or a few days before the heliacal rising, as on 5 September 2022 (the superior conjunction of Venus happens about two days earlier with each turn of its 8-year cycle, so as this cycle continues Venus will more definitely pass Regulus before the star's heliacal rising).
Stellar system
Regulus is a multiple star system consisting of at least four stars. Regulus A is the dominant star, with a binary companion 177" distant that is thought to be physically related. Regulus D is a 12th magnitude companion at 212", but is an unrelated background object.
Regulus A is a binary star consisting of a blue-white subgiant star of spectral type B8, which is orbited by a star of at least 0.3 solar masses, which is probably a white dwarf. The two stars take approximately 40 days to complete an orbit around their common centre of mass. Given the extremely distorted shape of the primary, the relative orbital motion may be notably altered with respect to the two-body purely Keplerian scenario because of non-negligible long-term orbital perturbations affecting, for example, its orbital period. In other words, Kepler's third law, which holds exactly only for two point-like masses, would no longer be valid for the Regulus system. Regulus A was long thought to be fairly young, only 50–100 million years old, calculated by comparing its temperature, luminosity, and mass. The existence of a white dwarf companion would mean that the system is at least 1 billion years old, just to account for the formation of the white dwarf. The discrepancy can be accounted for by a history of mass transfer onto a once-smaller Regulus A.
The primary of Regulus A has about 3.8 times the Sun's mass. It is spinning extremely rapidly, with a rotation period of only 15.9 hours (for comparison, the rotation period of the Sun is 25 days), which causes it to have a highly oblate shape. This results in so-called gravity darkening: the photosphere at Regulus' poles is considerably hotter, and five times brighter per unit surface area, than its equatorial region. The star's surface at the equator rotates at about 320 kilometres per second (199 miles per second), or 96.5% of its critical angular velocity for break-up. It is emitting polarized light because of this.
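The quoted spin figures imply an equatorial radius via $R = vP/2\pi$. A minimal sketch, using only the values stated above, gives a radius on the order of four solar radii at the equator, consistent with the highly oblate shape described.

```python
import math

R_SUN = 6.957e8  # nominal solar radius, m

# Equatorial radius implied by the quoted spin: R = v * P / (2*pi),
# using v = 320 km/s and P = 15.9 h from the paragraph above.
v = 320e3                      # equatorial speed, m/s
p = 15.9 * 3600.0              # rotation period, s
r_eq = v * p / (2.0 * math.pi)
print(f"equatorial radius ~{r_eq / R_SUN:.1f} R_sun")  # ~4.2 R_sun
```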
Regulus BC is 5,000 AU from Regulus A. A and BC share a common proper motion and are thought to orbit each other taking several million years. Designated Regulus B and Regulus C, the pair has Henry Draper Catalogue number HD 87884. The first is a K2V star, while the second is about M4V. The companion pair has an orbital period of about 600 years with a separation of 2.5" in 1942.
There is also a substellar object, called SDSS J100711.74+193056.2 (SDSS J1007+1930), that is potentially bound to the Regulus system. It is located at parsec from Regulus and has similar proper motion, comparable radial velocity, and similar age (1–2 billion years), indicating that it is a companion to the system; its metallicity is also similar to that of Regulus B. Assuming an age of 1 billion years, it would be a brown dwarf with a mass of 0.06 solar masses (62.8 Jupiter masses). If bound to the system, the orbital period of SDSS J1007+1930 would be about 200 million years, about the same as the orbital period of the Sun around the Milky Way (a Galactic year). Because it is so weakly bound to the system, it may in the future be stripped away by stellar encounters; alternatively, it may once have been closer and been ejected by dynamical interactions.
Etymology and cultural associations
Rēgulus is Latin for 'prince' or 'little king'; its Greek equivalent is Basiliskos or, in Latinised form, Basiliscus. The name Regulus first appeared in the early 16th century. It is also known as Qalb al-Asad, from the Arabic قلب الأسد, meaning 'the heart of the lion', a name already attested in the Greek Kardia Leontos whose Latin equivalent is Cor Leōnis. The Arabic phrase is sometimes approximated as Kabelaced. In Chinese it is known as 軒轅十四, the Fourteenth Star of Xuanyuan, the Yellow Emperor. In Indian astronomy, Regulus corresponds to the Nakshatra Magha ("the bountiful").
Babylonians called it Sharru ("the King"), and it marked the 15th ecliptic constellation. In India it was known as Maghā ("the Mighty"), in Sogdiana Magh ("the Great"), in Persia Miyan ("the Centre") and also as one of the four 'royal stars' of the Persian monarchy. It was one of the fifteen Behenian stars known to medieval astrologers, associated with granite, mugwort, and a kabbalistic symbol.
In the Babylonian MUL.APIN, Regulus is listed as Lugal, meaning king, with co-descriptor, "star of the Lion's breast".
| Physical sciences | Notable stars | Astronomy |
205742 | https://en.wikipedia.org/wiki/Glaucophyte | Glaucophyte | The glaucophytes, also known as glaucocystophytes or glaucocystids, are a small group of unicellular algae found in freshwater and moist terrestrial environments, less common today than they were during the Proterozoic. The stated number of species in the group varies from about 14 to 26. Together with the red algae (Rhodophyta) and the green algae plus land plants (Viridiplantae or Chloroplastida), they form the Archaeplastida.
The glaucophytes are of interest to biologists studying the evolution of chloroplasts as they may be similar to the original algal type that led to the red algae and green plants, i.e. glaucophytes may be basal Archaeplastida.
Unlike red and green algae, glaucophytes only have asexual reproduction.
Characteristics
The plastids of glaucophytes are known as 'muroplasts', 'cyanoplasts', or 'cyanelles'. Unlike the plastids in other organisms, they have a peptidoglycan layer, believed to be a relic of the endosymbiotic origin of plastids from cyanobacteria. Glaucophytes contain the photosynthetic pigment chlorophyll a. Along with red algae and cyanobacteria, they harvest light via phycobilisomes, structures consisting largely of phycobiliproteins. The green algae and land plants have lost that pigment. Like red algae, and in contrast to green algae and plants, glaucophytes store fixed carbon in the cytosol.
The earliest-diverging genus is Cyanophora, which has only one or two plastids. When there are two, they are semi-connected.
Glaucophytes have mitochondria with flat cristae, and undergo open mitosis without centrioles. Motile forms have two unequal flagella, which may have fine hairs and are anchored by a multilayered system of microtubules, both of which are similar to forms found in some green algae.
Phylogeny
External
Together with red algae and Viridiplantae (green algae and land plants), glaucophytes form the Archaeplastida – a group of plastid-containing organisms that may share a unique common ancestor that established an endosymbiotic association with a cyanobacterium. The relationship among the three groups remains uncertain, although it is most likely that glaucophytes diverged first:
The alternative, that glaucophytes and red algae form a clade, has been shown to be less plausible, but cannot be ruled out.
Internal
The internal phylogeny of the glaucophytes and the number of genera and species vary considerably among taxonomic sources. A phylogeny of the Glaucophyta published in 2017 divided the group into three families and included five genera:
Taxonomy
A 2019 list of the described glaucophyte species has the same three subdivisions, treated as orders, but includes a further five unplaced possible species, producing a total of between 14 and 19 possible species.
Order Cyanophorales
Genus Cyanophora – 5–6 species
Order Glaucocystales
Genus Glaucocystis – 7–8 species
Order Gloeochaetales
Cyanoptyche – 1 species
Gloeochaete – 1 species
Other possible species
?Archaeopsis monococca Skuja
?Chalarodora azurea Pascher
?Glaucocystopsis africana Bourrelly
?Peliaina cyanea Pascher
?Strobilomonas cyaneus Schiller
AlgaeBase divided glaucophytes into only two groups, placing Cyanophora in Glaucocystales rather than Cyanophorales (however, the entry was dated 2011). AlgaeBase included a total of 26 species in nine genera:
Glaucocystales
Chalarodora Pascher – 1 species
Corynoplastis Yokoyama, J.L.Scott, G.C.Zuccarello, M.Kajikawa, Y.Hara & J.A.West – 1 species
Cyanophora Korshikov – 6 species
Glaucocystis Itzigsohn – 13 species
Glaucocystopsis Bourrelly – 1 species
Peliaina Pascher – 1 species
Strobilomonas Schiller – 1 species
Gloeochaetales
Cyanoptyche Pascher – 1 species
Gloeochaete Lagerheim – 1 species
None of the species of Glaucophyta is particularly common in nature.
The glaucophytes were formerly considered part of the family Oocystaceae, in the order Chlorococcales.
| Biology and health sciences | Bikonts | Plants |
206018 | https://en.wikipedia.org/wiki/Weather%20station | Weather station | A weather station is a facility, either on land or sea, with instruments and equipment for measuring atmospheric conditions to provide information for weather forecasts and to study the weather and climate. The measurements taken include temperature, atmospheric pressure, humidity, wind speed, wind direction, and precipitation amounts. Wind measurements are taken with as few other obstructions as possible, while temperature and humidity measurements are kept free from direct solar radiation, or insolation. Manual observations are taken at least once daily, while automated measurements are taken at least once an hour. Weather conditions out at sea are taken by ships and buoys, which measure slightly different meteorological quantities such as sea surface temperature (SST), wave height, and wave period. Drifting weather buoys outnumber their moored versions by a significant amount.
Weather instruments
A weather instrument is any device that measures weather related conditions. Since there are a variety of different weather conditions, there are a variety of different weather instruments.
Typical weather stations have the following instruments:
Thermometer for measuring air and sea surface temperature
Barometer for measuring atmospheric pressure
Hygrometer for measuring humidity
Anemometer for measuring wind speed
Pyranometer for measuring solar radiation
Rain gauge for measuring liquid precipitation over a set period of time.
Wind sock for measuring general wind speed and wind direction
Wind vane, also called a weather vane or a weathercock: it shows which way the wind is blowing.
Evaporation pan for measuring evaporation.
In addition, at certain automated airport weather stations, additional instruments may be employed, including:
Present weather sensor for identifying falling precipitation
Disdrometer for measuring drop size distribution
Transmissometer for measuring visibility
Ceilometer for measuring cloud ceiling
More sophisticated stations may also measure the ultraviolet index, leaf wetness, soil moisture, soil temperature, water temperature in ponds, lakes, creeks, or rivers, and occasionally other data.
Exposure
Except for those instruments requiring direct exposure to the elements (anemometer, rain gauge), the instruments should be sheltered in a vented box, usually a Stevenson screen, to keep direct sunlight off the thermometer and wind off the hygrometer. The instrumentation may be specialized to allow for periodic recording; otherwise significant manual labour is required for record keeping. Automatic transmission of data, in a format such as METAR, is also desirable, as data from many weather stations are required for weather forecasting.
Personal weather station
A personal weather station is a set of weather measuring instruments operated by a private individual, club, association, or business (where obtaining and distributing weather data is not a part of the entity's business operation). Personal weather stations have become more advanced and can include many different sensors to measure weather conditions. These sensors can vary between models but most measure wind speed, wind direction, outdoor and indoor temperatures, outdoor and indoor humidity, barometric pressure, rainfall, and UV or solar radiation. Other available sensors can measure soil moisture, soil temperature, and leaf wetness. The quality, number of instruments, and placement of personal weather stations can vary widely, making it difficult to determine which stations collect accurate, meaningful, and comparable data. A wide range of retail weather stations is available.
Personal weather stations typically involve a digital console that provides readouts of the data being collected. These consoles may interface to a personal computer where data can be displayed, stored, and uploaded to websites or data ingestion/distribution systems. Open-source weather stations are available that are designed to be fully customizable by users.
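As an illustration of the console-to-PC data path described above, here is a minimal, hypothetical logging sketch. The sensor fields, the read_sensors() stub, and the file name are all invented for illustration and do not correspond to any particular vendor's API.

```python
import csv
from datetime import datetime, timezone

def read_sensors() -> dict:
    """Stub standing in for a vendor-specific console read call."""
    return {"temp_c": 21.4, "rh_pct": 55.0, "pressure_hpa": 1013.2,
            "wind_ms": 3.1, "wind_dir_deg": 270, "rain_mm": 0.0}

# Append one timestamped observation to a local CSV log per call.
with open("observations.csv", "a", newline="") as f:
    obs = read_sensors()
    csv.writer(f).writerow(
        [datetime.now(timezone.utc).isoformat(), *obs.values()])
```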
Personal weather stations may be operated solely for the enjoyment and education of the owner, while some owners share their results with others. They do this by manually compiling data and distributing it, distributing data over the Internet, or sharing data via amateur radio. The Citizen Weather Observer Program (CWOP) is a service which facilitates the sharing of information from personal weather stations. This data is submitted through the use of software, a personal computer, and an internet connection (or amateur radio) and is utilized by groups such as the National Weather Service (NWS) when generating forecast models. Each weather station submitting data to CWOP will also have an individual Web page that depicts the data submitted by that station. The Weather Underground Internet site is another popular destination for the submittal and sharing of data with others around the world. As with CWOP, each station submitting data to Weather Underground has a unique Web page displaying their submitted data. The UK Met Office's Weather Observations Website (WOW) also allows such data to be shared and displayed.
Dedicated ships
A weather ship was a ship stationed in the ocean as a platform for surface and upper air meteorological measurements for use in weather forecasting. It was also meant to aid in search and rescue operations and to support transatlantic flights. The establishment of weather ships proved to be so useful during World War II that the International Civil Aviation Organization (ICAO) established a global network of 13 weather ships in 1948. Of the 12 left in operation in 1996, nine were located in the northern Atlantic Ocean while three were located in the northern Pacific Ocean. The international agreement governing the weather ships ended in 1990. Weather ship observations proved to be helpful in wind and wave studies, as they did not avoid weather systems like merchant ships tended to and were considered a valuable resource. The last weather ship was MS Polarfront, known as weather station M, at 66°N, 02°E, run by the Norwegian Meteorological Institute. MS Polarfront was removed from service January 1, 2010. Since the 1960s this role has been largely superseded by satellites, long range aircraft and weather buoys. Weather observations from ships continue from thousands of voluntary merchant vessels in routine commercial operation; the Old Weather crowdsourcing project transcribes naval logs from before the era of dedicated ships.
Dedicated buoys
Weather buoys are instruments which collect weather and oceanography data within the world's oceans and lakes. Moored buoys have been in use since 1951, while drifting buoys have been used since the late 1970s. Moored buoys are connected with the seabed using either chains, nylon, or buoyant polypropylene. With the decline of the weather ship, they have taken a more primary role in measuring conditions over the open seas since the 1970s. During the 1980s and 1990s, a network of buoys in the central and eastern tropical Pacific Ocean helped study the El Niño-Southern Oscillation. Moored weather buoys range from in diameter, while drifting buoys are smaller, with diameters of . Drifting buoys are the dominant form of weather buoy in sheer number, with 1250 located worldwide. Wind data from buoys has smaller error than that from ships. There are differences in the values of sea surface temperature measurements between the two platforms as well, relating to the depth of the measurement and whether or not the water is heated by the ship which measures the quantity.
Synoptic weather station
Synoptic weather stations are stations that collect meteorological information at the synoptic times 00h00, 06h00, 12h00, and 18h00 (UTC) and at the intermediate synoptic hours 03h00, 09h00, 15h00, and 21h00 (UTC). Every weather station is assigned a unique station code by the WMO for identification.
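The synoptic schedule lends itself to a simple check of whether a given UTC hour is a main or intermediate observation time. A minimal sketch:

```python
# Synoptic observation times (UTC): main hours every 6 h from 00:00, and
# intermediate hours every 6 h from 03:00, as described above.
MAIN_HOURS = {0, 6, 12, 18}
INTERMEDIATE_HOURS = {3, 9, 15, 21}

def synoptic_kind(hour_utc: int) -> str:
    if hour_utc in MAIN_HOURS:
        return "main"
    if hour_utc in INTERMEDIATE_HOURS:
        return "intermediate"
    return "non-synoptic"

print([synoptic_kind(h) for h in range(0, 24, 3)])
```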
The common instruments of measure are anemometer, wind vane, pressure sensor, thermometer, hygrometer, and rain gauge.
The measurements are encoded in a standard format and transmitted to the WMO to support weather forecast models.
Networks
A variety of land-based weather station networks have been set up globally. Some of these are basic to analyzing weather fronts and pressure systems, such as the synoptic observation network, while others are more regional in nature, known as mesonets.
Global
Global Climate Observing System
Citizen Weather Observer Program (CWOP)
Weather Underground Personal Weather Stations
Japan
Automated Meteorological Data Acquisition System (AMeDAS)(アメダス)
United States of America
Arizona Meteorological Network (AZMET)
Central Pennsylvania Volunteer Weather Station Network
Florida Automated Weather Network (FAWN)
Georgia Environmental Monitoring Network (GAEMN)
Indiana Purdue Automated Agricultural Weather Station Network (PAAWS)
Iowa Environmental Mesonet (IEM)
MesoWest
Michigan Automated Weather Network (MAWN)
Missouri Weather Stations
National Weather Service Cooperative Observer (COOP) program
New York State Mesonet
Oklahoma Mesonet
The Pacific Northwest Cooperative Agricultural Weather Network
Southern Hemisphere
Antarctic Automatic Weather Stations Project
Australia: Bureau of Meteorology AWS network.
Australia: Department of Agriculture and Food Western Australia
Australia: Lower Murray Water Automatic Weather Station Network
| Physical sciences | Meteorology: General | Earth science |
206064 | https://en.wikipedia.org/wiki/Van%20der%20Waals%20equation | Van der Waals equation | The van der Waals equation is a mathematical formula that describes the behavior of real gases. It is named after Dutch physicist Johannes Diderik van der Waals. It is an equation of state that relates the pressure, temperature, and molar volume in a fluid. However, it can be written in terms of other, equivalent, properties in place of the molar volume, such as specific volume or number density. The equation modifies the ideal gas law in two ways: first, it considers particles to have a finite diameter (whereas an ideal gas consists of point particles); second, its particles interact with each other (unlike an ideal gas, whose particles move as though alone in the volume).
It was only in 1909 that the scientific debate about the nature of matter—discrete or continuous—was finally settled. Indeed, at the time van der Waals created his equation, which he based on the idea that fluids are composed of discrete particles, few scientists believed that such particles really existed. They were regarded as purely metaphysical constructs that added nothing useful to the knowledge obtained from the results of experimental observations. However, the theoretical explanation of the critical point, which had been discovered a few years earlier, and later its qualitative and quantitative agreement with experiments cemented its acceptance in the scientific community. Ultimately these accomplishments won van der Waals the 1910 Nobel Prize in Physics. Today the equation is recognized as an important model of phase change processes. Van der Waals also adapted his equation so that it applied to a binary mixture of fluids. He, and others, then used the modified equation to discover a host of important facts about the phase equilibria of such fluids. This application, expanded to treat multi-component mixtures, has extended the predictive ability of the equation to fluids of industrial and commercial importance. In this arena it has spawned many similar equations in a continuing attempt by engineers to improve their ability to understand and manage these fluids; it remains relevant to the present.
Behavior of the equation
One way to write the van der Waals equation is:

$$\left(p + \frac{a}{v^2}\right)\left(v - b\right) = RT$$

where $p$ is pressure, $R$ is the universal gas constant, $T$ is temperature, $v$ is molar volume, and $a$ and $b$ are experimentally determinable, substance-specific constants. Molar volume is given by $v = N_\text{A} V / N$, where $N_\text{A}$ is the Avogadro constant, $V$ is the volume, and $N$ is the number of molecules (the ratio $N/N_\text{A}$ is the amount of substance, a physical quantity with the base unit mole).
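The equation is straightforward to evaluate numerically. The sketch below codes the pressure-explicit form $p = RT/(v-b) - a/v^2$; the CO2 constants used are commonly tabulated values, quoted here as an assumption rather than taken from this article.

```python
R = 8.314462618  # universal gas constant, J/(mol*K)

def vdw_pressure(v: float, t: float, a: float, b: float) -> float:
    """van der Waals pressure (Pa) for molar volume v (m^3/mol) and T (K)."""
    return R * t / (v - b) - a / v**2

# Illustrative constants for CO2 (assumed, commonly tabulated values):
# a = 0.3640 Pa*m^6/mol^2, b = 4.267e-5 m^3/mol.
print(f"{vdw_pressure(1.0e-3, 300.0, 0.3640, 4.267e-5):.4g} Pa")
```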
When van der Waals created his equation, few scientists believed that fluids were composed of rapidly moving particles. Moreover, those who thought so had no knowledge of the atomic/molecular structure. The simplest conception of a particle, and the easiest to model mathematically, was a hard sphere; this is what van der Waals used. In that case, two particles of diameter $\sigma$ would come into contact when their centers were a distance $\sigma$ apart; hence the center of the one was excluded from a spherical volume $(4\pi/3)\sigma^3$ about the other. That is 8 times $(4\pi/3)(\sigma/2)^3$, the volume of each particle of radius $\sigma/2$, but there are 2 particles, which gives 4 particle volumes per particle. The total excluded volume is then $b = 4N_\text{A}(4\pi/3)(\sigma/2)^3$; that is, 4 times the volume of all the particles. Van der Waals and his contemporaries used an alternative, but equivalent, analysis based on the mean free path between molecular collisions that gave this result. From the fact that the volume fraction of particles, $\eta$, must be positive, van der Waals noted that as $\eta$ becomes larger the factor 4 must decrease (for spheres there is a known minimum), but he was never able to determine the nature of the decrease.
The constant $b$ is this excluded volume per mole, and has the dimension of molar volume, [v]. The constant $a$ expresses the strength of the hypothesized interparticle attraction. Van der Waals only had as a model Newton's law of gravitation, in which two particles are attracted in proportion to the product of their masses. Thus he argued that in his case the attractive pressure was proportional to the square of the density. The proportionality constant, $a$, when written in the form used above, has the dimension [pv²] (pressure times molar volume squared), which is also molar energy times molar volume.
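Given the hard-sphere picture above, $b$ follows directly from an assumed molecular diameter. A minimal sketch, assuming an illustrative diameter of 0.4 nm (the diameter is a made-up but typical value, not one quoted in this article):

```python
import math

N_A = 6.02214076e23  # Avogadro constant, 1/mol

def vdw_b_hard_sphere(sigma_m: float) -> float:
    """b = 4 * N_A * (4*pi/3) * (sigma/2)^3 for hard spheres of diameter sigma."""
    return 4.0 * N_A * (4.0 * math.pi / 3.0) * (sigma_m / 2.0) ** 3

# An assumed molecular diameter of ~0.4 nm gives b of the right magnitude.
print(f"b ~ {vdw_b_hard_sphere(4.0e-10) * 1e6:.1f} cm^3/mol")  # ~80.7
```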
The intermolecular force was later conveniently described by the negative derivative of a pair potential function. For spherically symmetric particles, this is most simply a function $\varphi(r)$ of separation distance $r$ with a single characteristic length, $\sigma$, and a minimum energy, $\varepsilon$ (with $\varepsilon > 0$). Two of the many such functions that have been suggested are shown in the accompanying plot.
A modern theory based on statistical mechanics produces the same result for $b$ obtained by van der Waals and his contemporaries. This result is valid for any pair potential for which the increase in $\varphi(r)$ at small separations is sufficiently rapid. This includes the hard sphere model, for which the increase is infinitely rapid and the result is exact. Indeed, the Sutherland potential most accurately models van der Waals' conception of a molecule. The theory also includes potentials that do not represent hard sphere force interactions, provided that the increase in $\varphi(r)$ at small separations is fast enough, but then the result is approximate: increasingly better the faster the increase. In that case $\sigma$ is only an "effective diameter" of the molecule. This theory also produces $a \propto N_\text{A}^2 \varepsilon \sigma^3$, where the proportionality constant is a number that depends on the shape of the potential function, $\varphi(r)$. However, this result is only valid when the potential is weak, namely, when the minimum potential energy $\varepsilon$ is very much smaller than the thermal energy, $k_\text{B} T$.
In his book (see references) Ludwig Boltzmann wrote equations using specific volume rather than the molar volume used here; Josiah Willard Gibbs did as well, as do most engineers. Physicists use the reciprocal of number density, but there is no essential difference between equations written with any of these properties. Equations of state written using molar volume contain $R$, those using specific volume contain the substance-specific constant $R_\text{s} = R/M$ (where the molar mass $M$ is $N_\text{A}$ times the mass of a single particle), and those written with number density contain the Boltzmann constant $k_\text{B} = R/N_\text{A}$.
Once the constants $a$ and $b$ are experimentally determined for a given substance, the van der Waals equation can be used to predict attributes like the boiling point at any given pressure, and the critical point (defined by the critical pressure $p_\text{c}$ and critical temperature $T_\text{c}$, such that the substance cannot be liquefied either when $p < p_\text{c}$, no matter how low the temperature, or when $T > T_\text{c}$, no matter how high the pressure; together $p_\text{c}$ and $T_\text{c}$ uniquely define the critical molar volume $v_\text{c}$). These predictions are accurate for only a few substances. For most simple fluids they are only a valuable approximation. The equation also explains why superheated liquids can exist above their boiling point and subcooled vapors can exist below their condensation point.
Example
The graph on the right plots the intersection of the surface shown in Figures A and C and four planes of constant pressure. Each intersection produces a curve in the plane corresponding to the value of the pressure chosen. These curves are isobars, since they represent all the points with the same pressure.
On the red isobar, the slope is positive over the entire range shown (although the plot only shows a finite region). This describes a fluid as homogeneous for all temperatures (that is, it does not undergo a phase transition at any temperature), which is a characteristic of all supercritical isobars, $p > p_\text{c}$.
The orange isobar, $p = p_\text{c}$, is the critical one that marks the boundary between homogeneity and heterogeneity. The critical point lies on this isobar.
The green isobar, $p < p_\text{c}$, has a region of negative slope. This region consists of states that are unstable and therefore never observed (for this reason this region is shown dotted gray). The green curve thus consists of two disconnected branches, indicating two single-phase states: a vapor on the right, and a denser liquid on the left. For this pressure, at a saturation temperature $T_\text{s}$ (specified by mechanical, thermal, and material equilibrium), the boiling (saturated) liquid and condensing (saturated) vapor coexist, shown on the curve as the left and right green circles, respectively. The locus of these coexistent saturation points across all subcritical isobars forms the saturation curve on the surface. In this situation, the denser liquid will separate and collect below the vapor due to gravity, and a meniscus will form between them. This heterogeneous combination of coexisting liquid and vapor is the phase transition. Heating the liquid in this state increases the fraction of vapor in the mixture; its molar volume, an average of the saturated liquid and vapor values weighted by this fraction, increases while the temperature remains the same. This is shown as the horizontal dotted gray line, which represents not a solution of the equation but the observed behavior. The points of the curve above this line, superheated liquid, and those below it, subcooled vapor, are metastable; a sufficiently strong disturbance causes them to transform to the stable alternative. These metastable regions are shown with green dashed lines. In summary, this isobar describes a fluid as a stable vapor for $T > T_\text{s}$, a stable liquid for $T < T_\text{s}$, and a mixture of liquid and vapor at $T = T_\text{s}$ that also supports metastable states of subcooled vapor and superheated liquid. This behavior is characteristic of all subcritical isobars, $p < p_\text{c}$, where $T_\text{s}$ is a function of the pressure.
The black isobar, $p = 0$, is the limit of positive pressures. None of its points represent stable solutions: they are either metastable (positive or zero slope) or unstable (negative slope). Interestingly, states of negative pressure (tension) exist. Their isobars lie below the black isobar, and form those parts of the surfaces seen in Figures A and C that lie below the zero-pressure plane. In this plane they have a parabola-like shape, and, like the zero-pressure isobar, their states are all either metastable (positive or zero slope) or unstable (negative slope).
Surface plots
Figure B shows the surface calculated from the ideal gas equation of state. This surface is universal, meaning it represents all ideal gases. Here, the surface is normalized so that a chosen reference state is placed at a fixed coordinate in the 3-dimensional plot space (the black dot). This normalization makes it easier to compare this surface with the surface generated by the van der Waals equation in Figure C.
Figures A and C show the surface calculated from the van der Waals equation. Note that whereas the ideal gas surface is relatively uniform, the van der Waals surface has a distinctive "fold". This fold develops from a critical point defined by specific values of pressure, temperature, and molar volume. Because the surface is plotted using dimensionless variables (formed by the ratio of each property to its respective critical value), the critical point is located at the coordinates $(1, 1, 1)$. When drawn using these dimensionless axes, this surface is, like that of the ideal gas, also universal. Moreover, it represents all real substances to a remarkably high degree of accuracy. This principle of corresponding states, developed by van der Waals from his equation, has become one of the fundamental ideas in the thermodynamics of fluids.
The fold's boundary on the surface is marked, on each side of the critical point, by the spinodal curve (identified in Fig. A, and seen in Figs. A and C). This curve delimits an unstable region wherein no observable homogeneous states exist; elsewhere on the surface, states of liquid, vapor, and gas exist. The fold in the surface is what enables the equation to predict the phenomenon of liquid–vapor phase change. This phenomenon is described by the saturation curve (or coexistence curve): the locus of saturated liquid and vapor states which, being in equilibrium with each other, can coexist. The saturation curve is not specified by the properties of the surface alone—it is substance-dependent. The saturated liquid curve and saturated vapor curve (both identified in Fig. A) together comprise the saturation curve. The inset in Figure A shows the mixture states, which are a combination of the saturated liquid and vapor states that correspond to each end of the horizontal mixture line (that is, the points of intersection between the mixture line and its isotherm). However, these mixture states are not part of the surface generated by the van der Waals equation; they are not solutions of the equation.
Relationship to the ideal gas law
The ideal gas law follows from the van der Waals equation whenever the molar volume is sufficiently large (when $v \gg b$, so that $v - b \approx v$), or correspondingly whenever the molar density, $\rho = 1/v$, is sufficiently small (when $a/v^2 \ll p$, so that the attraction term is negligible).
When $v$ is large enough that both inequalities are satisfied, these two approximations reduce the van der Waals equation to $p = RT/v$; rearranging in terms of $p$ and $v$ gives $pv = RT$, which is the ideal gas law. This is not surprising, since the van der Waals equation was constructed from the ideal gas equation in order to obtain an equation valid beyond the limit of ideal gas behavior.
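The approach to the ideal-gas limit can be seen numerically. The sketch below reuses the illustrative CO2 constants from the earlier example and shows the ratio of van der Waals to ideal pressure tending to 1 as the molar volume grows.

```python
R = 8.314462618          # J/(mol*K)
a, b = 0.3640, 4.267e-5  # illustrative CO2 constants, as before (assumed)

def vdw(v, t):
    return R * t / (v - b) - a / v**2

def ideal(v, t):
    return R * t / v

# As v grows, the van der Waals pressure approaches the ideal-gas value.
for v in (1e-4, 1e-3, 1e-2, 1e-1):
    ratio = vdw(v, 300.0) / ideal(v, 300.0)
    print(f"v = {v:.0e} m^3/mol: p_vdw / p_ideal = {ratio:.4f}")
```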
What is truly remarkable is the extent to which van der Waals succeeded. Indeed, Epstein in his classic thermodynamics textbook began his discussion of the van der Waals equation by writing, "In spite of its simplicity, it comprehends both the gaseous and the liquid state and brings out, in a most remarkable way, all the phenomena pertaining to the continuity of these two states". Also, in Volume 5 of his Lectures on Theoretical Physics, Sommerfeld, in addition to noting that "Boltzmann described van der Waals as the Newton of real gases", also wrote "It is very remarkable that the theory due to van der Waals is in a position to predict, at least qualitatively, the unstable [referring to superheated liquid, and subcooled vapor, now called metastable] states" that are associated with the phase change process.
Utility of the equation
The van der Waals equation has been, and remains, useful because:
It yields simple analytic expressions for thermodynamic properties: internal energy, entropy, enthalpy, Helmholtz free energy, Gibbs free energy, and specific heat at constant pressure, $c_p$.
It yields an analytic expression of its coefficient of thermal expansion and its isothermal compressibility.
It yields an analytic analysis of the Joule–Thomson coefficient and associated inversion curve, which were instrumental in the development of the commercial liquefaction of gases.
It shows that the specific heat at constant volume, $c_v$, is a function of temperature only.
It explains the existence of the critical point and the liquid–vapor phase transition, including the observed metastable states.
It establishes the theorem of corresponding states.
It is an intermediate mathematical model, useful as a pedagogical tool when teaching physics, chemistry, and engineering.
In addition, its saturation curve has an analytic solution, which can depict the liquid metals (mercury and cesium) quantitatively, and describes most real fluids qualitatively. As such, this solution can be regarded as one member of a family of equations of state (known as extended corresponding states). Consequently, the equation plays an important role in the modern theory of phase transitions.
History
In 1857 Rudolf Clausius published The Nature of the Motion which We Call Heat. In it he derived the relation $p = \tfrac{1}{3} n m \overline{c^2}$ for the pressure in a gas, composed of particles in motion, with number density $n$, mass $m$, and mean square speed $\overline{c^2}$. He then noted that using the classical laws of Boyle and Charles, one could write $\tfrac{1}{3} m \overline{c^2} = kT$ with a constant of proportionality $k$. Hence temperature was proportional to the average kinetic energy of the particles. This article inspired further work based on the twin ideas that substances are composed of indivisible particles, and that heat is a consequence of the particle motion; movement that evolves in accordance with Newton's laws. The work, known as the kinetic theory of gases, was done principally by Clausius, James Clerk Maxwell, and Ludwig Boltzmann. At about the same time, Josiah Willard Gibbs advanced the work by converting it into statistical mechanics.
This environment influenced Johannes Diderik van der Waals. After initially pursuing a teaching credential, he was accepted for doctoral studies at the University of Leiden under Pieter Rijke. This led, in 1873, to a dissertation that provided a simple, particle-based equation that described the gas–liquid change of state, the origin of a critical temperature, and the concept of corresponding states. The equation is based on two premises: first, that fluids are composed of particles with non-zero volumes, and second, that at a large enough distance each particle exerts an attractive force on all other particles in its vicinity. Boltzmann called these forces van der Waals cohesive forces.
In 1869 Irish professor of chemistry Thomas Andrews at Queen's University Belfast, in a paper entitled On the Continuity of the Gaseous and Liquid States of Matter, displayed an experimentally obtained set of isotherms of carbonic acid, CO2, that showed at low temperatures a jump in density at a certain pressure, while at higher temperatures there was no abrupt change. Andrews called the isotherm at which the jump just disappears the critical point. Given the similarity of the titles of this paper and van der Waals' subsequent thesis, one might think that van der Waals set out to develop a theoretical explanation of Andrews' experiments; however, this is not what happened. Van der Waals began work by trying to determine a molecular attraction that appeared in Laplace's theory of capillarity, and only after establishing his equation did he test it using Andrews' results.
By 1877 sprays of both liquid oxygen and liquid nitrogen had been produced, and a new field of research, low temperature physics, had been opened. The van der Waals equation played a part in all this, especially with respect to the liquefaction of hydrogen and helium, which was finally achieved in 1908. From measurements of the pressure and temperature in two states with the same density, the van der Waals equation produces values for the constants $a$ and $b$.
Thus from two such measurements of pressure and temperature one could determine $a$ and $b$, and from these values calculate the expected critical pressure, temperature, and molar volume. Goodstein summarized this contribution of the van der Waals equation as follows: All this labor required considerable faith in the belief that gas–liquid systems were all basically the same, even if no one had ever seen the liquid phase. This faith arose out of the repeated success of the van der Waals theory, which is essentially a universal equation of state, independent of the details of any particular substance once it has been properly scaled. [...] As a result, not only was it possible to believe that hydrogen could be liquefied, but it was even possible to predict the necessary temperature and pressure. Van der Waals was awarded the Nobel Prize in 1910, in recognition of the contribution of his formulation of this "equation of state for gases and liquids".
As noted previously, modern-day studies of first-order phase changes make use of the van der Waals equation together with the Gibbs criterion, equal chemical potential of each phase, as a model of the phenomenon. This model has an analytic coexistence (saturation) curve expressed parametrically (the parameter is related to the entropy difference between the two phases), that was first obtained by Planck, was known to Gibbs and others, and was later derived in a beautifully simple and elegant manner by Lekner. A summary of Lekner's solution is presented in a subsequent section, and a more complete discussion in the Maxwell construction.
Critical point and corresponding states
Figure 1 shows four isotherms of the van der Waals equation (abbreviated as vdW) on a (pressure, molar volume) plane. The essential character of these curves is that they come in three forms:
At some critical temperature $T_c$ (orange isotherm), the slope is negative everywhere except at a single inflection point: the critical point $(v_c, p_c)$, where both the slope and the curvature are zero, $(\partial p/\partial v)_T = (\partial^2 p/\partial v^2)_T = 0$.
At higher temperatures $T > T_c$ (red isotherm), the isotherm's slope is negative everywhere. (For these temperatures the vdW equation has one real root for $v$ at every pressure.)
At lower temperatures $T < T_c$ (green and blue isotherms), all isotherms have two points where the slope is zero. (For these temperatures there is a range of pressures for which the vdW equation has three real roots for $v$.)
The critical point can be analytically determined by evaluating the two partial derivatives of the vdW equation in (1) and equating them to zero. This produces the critical values $v_c = 3b$ and $T_c = 8a/(27Rb)$; plugging these back into the vdW equation gives $p_c = a/(27b^2)$.
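As a check on these expressions, a minimal SymPy sketch reproduces the critical constants symbolically by solving the two zero-derivative conditions:

```python
# Sketch: verify the vdW critical point symbolically with SymPy.
import sympy as sp

p, v, T, a, b, R = sp.symbols('p v T a b R', positive=True)
vdw = R*T/(v - b) - a/v**2           # p(v, T) for the van der Waals fluid

dp = sp.diff(vdw, v)                  # slope of an isotherm
d2p = sp.diff(vdw, v, 2)              # curvature of an isotherm

sol = sp.solve([dp, d2p], [v, T], dict=True)[0]
v_c, T_c = sol[v], sol[T]
p_c = sp.simplify(vdw.subs({v: v_c, T: T_c}))
print(v_c, T_c, p_c)   # 3*b, 8*a/(27*R*b), a/(27*b**2)
```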
This calculation can also be done algebraically by noting that the vdW equation can be written as a cubic in $v$, which at the critical point is
$p_cv^3 - (p_cb + RT_c)v^2 + av - ab = 0,$
which, by dividing out $p_c$, can be refactored as
$v^3 - \left(b + \frac{RT_c}{p_c}\right)v^2 + \frac{a}{p_c}v - \frac{ab}{p_c} = 0.$
Separately, since all three roots coalesce at the critical point, we can write
$(v - v_c)^3 = v^3 - 3v_cv^2 + 3v_c^2v - v_c^3 = 0.$
These two cubic equations are the same when all their coefficients are equal; matching like terms produces a system of three equations:
$3v_c = b + \frac{RT_c}{p_c}, \qquad 3v_c^2 = \frac{a}{p_c}, \qquad v_c^3 = \frac{ab}{p_c},$
whose solution produces the previous results for $p_c$, $v_c$, and $T_c$.
Using these critical values to define reduced (dimensionless) variables $p_r = p/p_c$, $T_r = T/T_c$, and $v_r = v/v_c$ renders the vdW equation in the dimensionless form (used to construct Fig. 1):
$p_r = \frac{8T_r}{3v_r - 1} - \frac{3}{v_r^2}.$
This dimensionless form is a similarity relation; it indicates that all vdW fluids at the same reduced temperature and reduced volume have the same reduced pressure, and so plot on the same curve. It expresses the law of corresponding states which Boltzmann described as follows:
All the constants characterizing the gas have dropped out of this equation. If one bases measurements on the van der Waals units [Boltzmann's name for the reduced quantities here], then he obtains the same equation of state for all gases. [...] Only the values of the critical volume, pressure, and temperature depend on the nature of the particular substance; the numbers that express the actual volume, pressure, and temperature as multiples of the critical values satisfy the same equation for all substances. In other words, the same equation relates the reduced volume, reduced pressure, and reduced temperature for all substances.
Obviously such a broad general relation is unlikely to be correct; nevertheless, the fact that one can obtain from it an essentially correct description of actual phenomena is very remarkable.
This "law" is just a special case of dimensional analysis in which an equation containing 6 dimensional quantities, $p, v, T, a, b, R$, and 3 independent dimensions, [p], [v], [T] (independent means that "none of the dimensions of these quantities can be represented as a product of powers of the dimensions of the remaining quantities", and [R]=[pv/T]), must be expressible in terms of 6 − 3 = 3 dimensionless groups. Here $b$ is a characteristic molar volume, $a/b^2$ a characteristic pressure, and $a/(Rb)$ a characteristic temperature, and the 3 dimensionless groups are $v/b$, $pb^2/a$, and $RTb/a$. According to dimensional analysis the equation must then have the form $f(v/b,\, pb^2/a,\, RTb/a) = 0$, a general similarity relation. In his discussion of the vdW equation, Sommerfeld also mentioned this point. The reduced properties defined previously are $v_r = v/(3b)$, $p_r = 27pb^2/a$, and $T_r = 27RTb/(8a)$. Recent research has suggested that there is a family of equations of state that depend on an additional dimensionless group, and this provides a more exact correlation of properties. Nevertheless, as Boltzmann observed, the van der Waals equation provides an essentially correct description.
The vdW equation produces the critical compressibility factor $Z_c = p_cv_c/(RT_c) = 3/8 = 0.375$, while for most real fluids $Z_c \approx 0.23$–$0.31$. Thus most real fluids do not satisfy this condition, and consequently their behavior is only described qualitatively by the vdW equation. However, the vdW equation of state is a member of a family of state equations based on the Pitzer (acentric) factor, $\omega$, and the liquid metals (mercury and cesium) are well approximated by it.
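A minimal numerical sketch illustrates the similarity relation: two fluids with very different constants give the same reduced pressure at the same reduced state. The $a$ and $b$ values below are approximate literature numbers, used here only for illustration.

```python
# Sketch: the law of corresponding states in action.
R = 8.314

def reduced_state(a, b, v_r, T_r):
    v_c, T_c, p_c = 3*b, 8*a/(27*R*b), a/(27*b**2)
    v, T = v_r*v_c, T_r*T_c
    p = R*T/(v - b) - a/v**2          # dimensional vdW equation
    return p/p_c                       # reduced pressure

for name, a, b in [("helium", 0.00346, 2.38e-5), ("water", 0.5536, 3.05e-5)]:
    print(name, reduced_state(a, b, v_r=2.0, T_r=1.2))
# both print the same p_r = 8*1.2/(3*2 - 1) - 3/4 = 1.17
```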
Thermodynamic properties
The properties of molar internal energy $u$ and entropy $s$, defined by the first and second laws of thermodynamics (hence all thermodynamic properties of a simple compressible substance), can be specified, up to a constant of integration, by two measurable functions: a mechanical equation of state $p = p(v, T)$ and a constant-volume specific heat $c_v(T)$.
Internal energy and specific heat at constant volume
The internal energy is given by the energetic equation of state,
$u = \int c_v(T)\,dT + \int\left[T\left(\frac{\partial p}{\partial T}\right)_v - p\right]dv + u_0,$
where $u_0$ is an arbitrary constant of integration.
Now in order for $du$ to be an exact differential (namely, that $u(T,v)$ be continuous with continuous partial derivatives), its second mixed partial derivatives must also be equal, $\partial^2u/\partial v\,\partial T = \partial^2u/\partial T\,\partial v$. Then this condition can be written as $(\partial c_v/\partial v)_T = T(\partial^2 p/\partial T^2)_v$. Differentiating the vdW equation gives $(\partial p/\partial T)_v = R/(v - b)$, so $(\partial^2 p/\partial T^2)_v = 0$. Consequently $c_v = c_v(T)$ for a vdW fluid, exactly as it is for an ideal gas. For simplicity, it is regarded as a constant here, $c_v = cR$ for some constant number $c$. Then both integrals can be evaluated, resulting in
$u = u_0 + cRT - \frac{a}{v}.$
This is the energetic equation of state for a perfect vdW fluid. By making a dimensional analysis (what might be called extending the principle of corresponding states to other thermodynamic properties) it can be written in the reduced form
$u_r = cT_r - \frac{9}{8v_r},$
where $u_r = (u - u_0)/(RT_c)$ and $c$ is a dimensionless constant.
Enthalpy
The enthalpy of a system is given by $h = u + pv$. Substituting $u$ from above and $pv = RTv/(v - b) - a/v$ (the vdW equation multiplied by $v$) gives
$h = u_0 + cRT + \frac{RTv}{v - b} - \frac{2a}{v}.$
This is the enthalpic equation of state for a perfect vdW fluid, or in reduced form,
$h_r = cT_r + \frac{3v_rT_r}{3v_r - 1} - \frac{9}{4v_r}, \qquad h_r = \frac{h - u_0}{RT_c}.$
Entropy
The entropy is given by the entropic equation of state:
$s = \int\frac{c_v(T)}{T}\,dT + \int\left(\frac{\partial p}{\partial T}\right)_v dv + s_0.$
Using $c_v = cR$ as before, and integrating the second term using $(\partial p/\partial T)_v = R/(v - b)$, we obtain
$s = s_0 + cR\ln T + R\ln(v - b).$
This is the entropic equation of state for a perfect vdW fluid, or in reduced form (absorbing constants into $s_0$),
$s_r = c\ln T_r + \ln(3v_r - 1), \qquad s_r = \frac{s - s_0}{R}.$
Helmholtz free energy
The Helmholtz free energy is $f = u - Ts$, so combining the previous results gives
$f = u_0 + cRT - \frac{a}{v} - T\left[s_0 + cR\ln T + R\ln(v - b)\right].$
This is the Helmholtz free energy for a perfect vdW fluid, or in reduced form (up to terms linear in $T_r$ that absorb the integration constants),
$f_r = cT_r - \frac{9}{8v_r} - T_r\left[c\ln T_r + \ln(3v_r - 1)\right].$
Gibbs free energy
The Gibbs free energy is $g = h - Ts$, so combining the previous results gives
$g = u_0 + cRT + \frac{RTv}{v - b} - \frac{2a}{v} - T\left[s_0 + cR\ln T + R\ln(v - b)\right].$
This is the Gibbs free energy for a perfect vdW fluid, or in reduced form (again up to terms linear in $T_r$),
$g_r = cT_r + \frac{3v_rT_r}{3v_r - 1} - \frac{9}{4v_r} - T_r\left[c\ln T_r + \ln(3v_r - 1)\right].$
Thermodynamic derivatives: α, κ_T and c_p
The two first partial derivatives of the vdW equation are
$\left(\frac{\partial p}{\partial v}\right)_T = -\frac{RT}{(v - b)^2} + \frac{2a}{v^3} = -\frac{1}{v\kappa_T}, \qquad \left(\frac{\partial p}{\partial T}\right)_v = \frac{R}{v - b} = \frac{\alpha}{\kappa_T},$
where $\kappa_T = -(1/v)(\partial v/\partial p)_T$ is the isothermal compressibility (a measure of the relative increase of volume from an increase of pressure, at constant temperature), and $\alpha = (1/v)(\partial v/\partial T)_p$ is the coefficient of thermal expansion (a measure of the relative increase of volume from an increase of temperature, at constant pressure). Therefore,
$\kappa_T = \frac{1}{v}\left[\frac{RT}{(v - b)^2} - \frac{2a}{v^3}\right]^{-1}, \qquad \alpha = \frac{R}{v - b}\,\kappa_T.$
In the limit $v \to \infty$, the vdW equation becomes $p = RT/v$, with $\kappa_T \to 1/p$ and $\alpha \to 1/T$. Both these limits of $\kappa_T$ and $\alpha$ are the ideal gas values, which is consistent because, as noted earlier, a vdW fluid behaves like an ideal gas in this limit.
The specific heat at constant pressure is defined as the partial derivative $c_p = (\partial h/\partial T)_p$. However, it is not independent of $c_v$, as they are related by the Mayer equation, $c_p - c_v = Tv\alpha^2/\kappa_T$. Then the two partials of the vdW equation can be used to express $c_p$ as
$c_p = c_v + \frac{R}{1 - 2a(v - b)^2/(RTv^3)}.$
Here in the limit $v \to \infty$, $c_p \to c_v + R$, which is the ideal gas result as expected; however the opposite (liquid) limit $v \to b$ gives the same result, which does not agree with experiments on liquids.
In this liquid limit we also find $\kappa_T \to 0$, namely that the vdW liquid is incompressible. Moreover, since $\alpha = R\kappa_T/(v - b)$, it is also mechanically incompressible, that is, $\kappa_T$ approaches 0 faster than $\alpha$ does.
Finally, $\alpha$, $\kappa_T$, and $c_p$ are all infinite on the curve where $(\partial p/\partial v)_T = 0$. This curve, called the spinodal curve, is defined by $RTv^3 = 2a(v - b)^2$.
Stability
According to the extremum principle of thermodynamics, at equilibrium $dS = 0$ and $d^2S < 0$; namely, the entropy is a maximum. This leads to a requirement that $(\partial p/\partial v)_T \le 0$ for all stable states. This mathematical criterion expresses a physical condition which Epstein described as follows:
It is obvious that this middle part, dotted in our curves [the place where the requirement is violated, dashed gray in Fig. 1], can have no physical reality. In fact, let us imagine the fluid in a state corresponding to this part of the curve contained in a heat conducting vertical cylinder whose top is formed by a piston. The piston can slide up and down in the cylinder, and we put on it a load exactly balancing the pressure of the gas. If we take a little weight off the piston, there will no longer be equilibrium and it will begin to move upward. However, as it moves the volume of the gas increases and with it its pressure. The resultant force on the piston gets larger, retaining its upward direction. The piston will, therefore, continue to move and the gas to expand until it reaches the state represented by the maximum of the isotherm. Vice versa, if we add ever so little to the load of the balanced piston, the gas will collapse to the state corresponding to the minimum of the isotherm.
For isotherms $T_r > 1$, this requirement is satisfied everywhere, thus all states are gas. For isotherms $T_r < 1$, the states that lie between the local minimum and the local maximum of the isotherm, for which $(\partial p/\partial v)_T > 0$ (shown dashed gray in Fig. 1), are unstable and thus not observed. This unstable region is the genesis of the phase change; there is a range of volumes for which no observable homogeneous states exist. The states at smaller $v$ are liquid, and those at larger $v$ are vapor; the denser liquid separates from and lies below the vapor due to gravity. The transition points, states with zero slope, are called spinodal points. Their locus is the spinodal curve, a boundary that separates the regions of the plane for which liquid, vapor, and gas exist from a region where no observable homogeneous states exist. This spinodal curve is obtained here from the vdW equation by differentiation (or equivalently from $\kappa_T = \infty$) as
$T_r = \frac{(3v_r - 1)^2}{4v_r^3}.$
A projection of the spinodal curve is plotted in Figure 1 as the black dash-dot curve. It passes through the critical point, which is also a spinodal point.
Saturation
Although the gap in $v$ delimited by the two spinodal points on an isotherm (e.g. in Fig. 1) is the origin of the phase change, the change occurs at some intermediate pressure. This can be seen by considering that both saturated liquid and saturated vapor can coexist in equilibrium, at which they have the same pressure and temperature. However, the minimum and maximum spinodal points are not at the same pressure. Therefore, at a temperature $T < T_c$, the phase change is characterized by a saturation pressure $p_s$ that lies within the range of pressures set by the spinodal points, and by the molar volumes of liquid, $v_f$, and vapor, $v_g$, which lie outside the range of volumes set by the spinodal points ($v_f$ below it and $v_g$ above it).
Applying the vdW equation to the saturated liquid (fluid) and saturated vapor (gas) states gives:
$p_s = \frac{RT}{v_f - b} - \frac{a}{v_f^2}, \qquad p_s = \frac{RT}{v_g - b} - \frac{a}{v_g^2}.$
These two equations contain four variables ($p_s$, $T$, $v_f$, $v_g$), so a third equation is required in order to uniquely specify three of these variables in terms of the fourth. The following is a derivation of this third equation (the result is the equality of the molar Gibbs free energies of the two phases, $g_f = g_g$).
Now, the energy required to vaporize a mole at constant pressure is $h_g - h_f$ (from the first law of thermodynamics) and at constant temperature is $T(s_g - s_f)$ (from the second law). Thus,
$h_g - h_f = T(s_g - s_f), \qquad \text{or equivalently} \qquad g_f = g_g.$
That is, in this case, the Gibbs free energy in the saturated liquid state equals that in the saturated vapor state. The Gibbs free energy is one of the four thermodynamic potentials whose partial derivatives produce all other thermodynamic state properties; its differential is $dg = -s\,dT + v\,dp$. Integrating this over an isotherm from $v_f$ to $v_g$, noting that the pressure is the same at each endpoint, and setting the result to zero yields
$g_g - g_f = \int_{v_f}^{v_g} v\,dp = 0.$
Here, because $v(p)$ is a multivalued function, the integral must be divided into 3 parts corresponding to the 3 real roots of the vdW equation written in the form $v(p)$ (this can be visualized most easily by imagining Fig. 1 rotated 90°); the result is a special case of material equilibrium. The equivalent form obtained by integrating by parts,
$\int_{v_f}^{v_g}\left[p_s - p(v)\right]dv = 0,$
is the Maxwell equal area rule, which requires that the upper area between the vdW curve and the horizontal through $p_s$ be equal to the lower area. This form means that the thermodynamic restriction that fixes $p_s$ is specified by the equation of state itself, $p = p(v, T)$. Using the Gibbs free energy obtained earlier for the vdW equation, the condition $g_g = g_f$ can be evaluated as
$p_s(v_g - v_f) = RT\ln\frac{v_g - b}{v_f - b} + a\left(\frac{1}{v_g} - \frac{1}{v_f}\right).$
This is a third equation that, along with the two vdW equations above, can be solved numerically. This has been done given a value for either $T$ or $p_s$, and tabular results presented; however, the equations also admit an analytic parametric solution obtained most simply and elegantly by Lekner. Details of this solution may be found in the Maxwell construction; the results are:
where
and the parameter is given physically by the entropy difference between the two phases. The values of all other property discontinuities across the saturation curve also follow from this solution. These functions define the coexistence curve (or saturation curve), which is the locus of the saturated liquid and saturated vapor states of the vdW fluid. Various projections of this saturation curve are plotted in Figures 1, 2a, and 2b.
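For readers who prefer the direct numerical route mentioned above, the following sketch solves the two vdW equations together with the equal-area rule at a given $T_r$, using the reduced equation of state. SciPy's fsolve and the starting guesses are choices of this sketch, not part of the original analysis.

```python
# Sketch: solve the three saturation conditions numerically at T_r < 1.
import numpy as np
from scipy.optimize import fsolve

def p_r(v, T):
    return 8*T/(3*v - 1) - 3/v**2

def equations(x, T):
    vf, vg, ps = x
    eq1 = p_r(vf, T) - ps                      # saturated liquid on the isotherm
    eq2 = p_r(vg, T) - ps                      # saturated vapor on the isotherm
    # Maxwell equal-area rule: integral of p dv from vf to vg equals ps*(vg - vf)
    integral = (8*T/3)*np.log((3*vg - 1)/(3*vf - 1)) + 3/vg - 3/vf
    eq3 = integral - ps*(vg - vf)
    return [eq1, eq2, eq3]

T = 0.9
vf, vg, ps = fsolve(equations, x0=[0.6, 3.0, 0.6], args=(T,))
print(vf, vg, ps)   # roughly 0.60, 2.35, 0.647 at T_r = 0.9
```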
Referring back to Figure 1, the isotherms for $T_r < 1$ are discontinuous. For example, the (green) isotherm consists of two separate segments. The solid green lines represent stable states, and terminate at dots that represent the saturated liquid and vapor states that comprise the phase change. The dashed green lines represent metastable states (superheated liquid and subcooled vapor) that are created in the process of phase transition, have a short lifetime, and then devolve into their lower energy stable alternative.
At every point in the region between the two curves in Figure 2b, there are two states: one stable and one metastable. The coexistence of these states can be seen in Figure 1: for discontinuous isotherms, there are pressures that correspond to two points on the isotherm, one on a solid segment (the stable state) and one on a dashed segment (the metastable state).
In his treatise of 1898, in which he described the van der Waals equation in great detail, Boltzmann discussed these metastable states in a section titled "Undercooling, Delayed evaporation". (Today, these states are denoted "subcooled vapor" and "superheated liquid".) Moreover, it has now become clear that these metastable states occur regularly in the phase transition process. In particular, processes that involve very high heat fluxes create large numbers of these states, and transition to their stable alternative with a corresponding release of energy that can be dangerous. Consequently, there is a pressing need to study their thermal properties.
In the same section, Boltzmann also addressed and explained the negative pressures which some liquid metastable states exhibit (for example, the blue isotherm in Fig. 1). He concluded that such liquid states of tensile stresses were real, as did Tien and Lienhard many years later who wrote "The van der Waals equation predicts that at low temperatures liquids sustain enormous tension [...] In recent years measurements have been made that reveal this to be entirely correct."
Even though the phase change produces a mathematical discontinuity in the homogeneous fluid properties (for example $v$), there is no physical discontinuity. As the liquid begins to vaporize, the fluid becomes a heterogeneous mixture of liquid and vapor whose molar volume varies continuously from $v_f$ to $v_g$ according to the equation of state
$v = (1 - x)v_f + xv_g,$
where $x$ is the mole fraction of the vapor. This equation is called the lever rule and applies to other properties as well. The states it represents form a horizontal line bridging the discontinuous region of an isotherm (not shown in Fig. 1 because it is a different equation from the vdW equation).
Extended corresponding states
The idea of corresponding states originated when van der Waals cast his equation in the dimensionless form, $p_r = p_r(v_r, T_r)$. However, as Boltzmann noted, such a simple representation could not correctly describe all substances. Indeed, the saturation analysis of this form produces a single universal curve $p_{r,s}(T_r)$; namely, that all substances have the same dimensionless coexistence curve, which is not true. To avoid this paradox, an extended principle of corresponding states has been suggested, in which the reduced saturation pressure depends also on a substance-dependent dimensionless parameter related to the only physical feature associated with an individual substance: its critical point.
One candidate for this parameter is the critical compressibility factor $Z_c$; however, because that quantity is difficult to measure accurately, the acentric factor developed by Kenneth Pitzer, $\omega = -1 - \log_{10}[p_{r,s}(T_r = 0.7)]$, is more useful. The saturation pressure in this situation is represented by a one-parameter family of curves, $p_{r,s}(T_r, \omega)$. Several investigators have produced correlations of saturation data for a number of substances; Dong and Lienhard give such a correlation,
which has a small RMS error over the experimental range of $T_r$.
Figure 3 is a plot of $p_{r,s}$ vs $T_r$ for various values of the Pitzer factor $\omega$ as given by this equation. The vertical axis is logarithmic in order to show the behavior at pressures closer to zero, where differences among the various substances (indicated by varying values of $\omega$) are more pronounced.
Figure 4 is another plot of the same equation showing $p_{r,s}$ as a function of $\omega$ for various values of $T_r$. It includes data from 51 substances, including the vdW fluid, over a wide range of $T_r$. This plot shows that the vdW fluid is a member of the class of real fluids; indeed, it can quantitatively approximate the behavior of the liquid metals cesium and mercury, whose values of $\omega$ are similar to its own. However, in general it can describe the behavior of fluids of various $\omega$ only qualitatively.
Joule–Thomson coefficient
The Joule–Thomson coefficient, $\mu_{JT} = (\partial T/\partial p)_h$, is of practical importance because the two end states of a throttling process ($h_2 = h_1$) lie on a constant enthalpy curve. Although ideal gases, for which $\mu_{JT} = 0$, do not change temperature in such a process, real gases do, and it is important in applications to know whether they heat up or cool down.
This coefficient can be found in terms of the previously derived $\alpha$ and $c_p$ as
$\mu_{JT} = \frac{v}{c_p}(T\alpha - 1).$
When $\mu_{JT}$ is positive, the gas temperature decreases as it passes through a throttling process, and when it is negative, the temperature increases. Therefore, the condition $\mu_{JT} = 0$ defines a curve that separates the region of the $T$–$p$ plane where $\mu_{JT} > 0$ from the region where $\mu_{JT} < 0$. This curve is called the inversion curve, and its equation is $T\alpha = 1$. Evaluating this using the expression for $\alpha$ derived previously produces
$RTbv^2 = 2a(v - b)^2.$
Note that at low pressures there will be cooling for $T < 2a/(Rb)$ (or, in terms of the critical temperature, $T_r < 27/4$). As Sommerfeld noted, "This is the case with air and with most other gases. Air can be cooled at will by repeated expansion and can finally be liquified."
In terms of $v$, the equation has a simple positive solution, $v = b\sqrt{2a}/(\sqrt{2a} - \sqrt{RTb})$, valid for $RTb < 2a$. Using this to eliminate $v$ from the vdW equation then gives the inversion curve as
$p = 24\sqrt{3T} - 12T - 27,$
where, for simplicity, the reduced variables $p_r$, $T_r$ have been replaced by $p$, $T$.
The maximum of this quadratic curve occurs where $dp/dT = 12\sqrt{3/T} - 12 = 0$, which gives $T = 3$, and the corresponding $p = 9$. By the quadratic formula, the zeros of the curve are $T = 3/4$ and $T = 27/4$ (in dimensional terms, $3T_c/4$ and $27T_c/4$). In terms of the dimensionless variables, then, the zeros are at $T_r = 3/4$ and $T_r = 27/4$, while the maximum is $p_r = 9$, and occurs at $T_r = 3$. A plot of the curve is shown in green in Figure 5. Sommerfeld also displays this plot, together with a curve drawn using experimental data from H2. The two curves agree qualitatively, but not quantitatively. For example, the maxima of these two curves differ by about 40% in both magnitude and location.
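These landmark values are easy to confirm numerically from the reduced inversion curve derived above:

```python
# Sketch: numeric check of the reduced inversion curve p = 24*sqrt(3T) - 12T - 27.
import numpy as np

def p_inv(T):
    return 24*np.sqrt(3*T) - 12*T - 27

print(p_inv(0.75), p_inv(6.75))  # both 0: the zeros at T_r = 3/4 and 27/4
print(p_inv(3.0))                # the maximum, p_r = 9 at T_r = 3
```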
Figure 5 shows an overlap between the saturation curve and the inversion curve plotted in the same region. This crossover means a van der Waals gas can be liquified by passing it through a throttling process under the proper conditions; real gases are liquified in this way.
Compressibility factor
Real gases are characterized by their difference from ideal gases by writing $pv = ZRT$. Here $Z$, called the compressibility factor, is expressed either as $Z(p, T)$ or as $Z(v, T)$. In either case, $Z$ approaches the ideal gas value 1 in the limit as $p$ or $1/v$ approaches zero. In the second case $Z = pv/(RT)$, so for a van der Waals fluid the compressibility factor is
$Z = \frac{v}{v - b} - \frac{a}{RTv},$
or in terms of reduced variables
$Z = \frac{3v_r}{3v_r - 1} - \frac{9}{8T_rv_r}.$
At the critical point, $v_r = T_r = 1$ and $Z_c = 3/8$.
In the limit $v_r \to \infty$, $Z \to 1$; the fluid behaves like an ideal gas, as mentioned before. The initial slope of $Z$ as a function of density is never negative when $T_r \ge 27/8$; that is, when $T \ge a/(Rb)$. Alternatively, the initial slope is negative when $T_r < 27/8$, is zero at $T_r = 27/8$, and is positive for larger $T_r$ (see Fig. 6). In the negative-slope case, the value of $Z$ passes through 1 again at finite density. Here $T_B = a/(Rb) = (27/8)T_c$ is called the Boyle temperature; it denotes points where the equation of state reduces to the ideal gas law. However, the fluid does not behave like an ideal gas there, because neither its derivatives nor its other properties reduce to their ideal gas values; the actual ideal gas region is approached only at vanishing density.
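A short sketch confirms the two special values just mentioned, the critical compressibility $Z_c = 3/8$ and the vanishing of the reduced second virial coefficient at the Boyle temperature $T_r = 27/8$:

```python
# Sketch: reduced compressibility factor of the vdW fluid.
def Z(v_r, T_r):
    return 3*v_r/(3*v_r - 1) - 9/(8*T_r*v_r)

print(Z(1.0, 1.0))               # 3/8 = 0.375 at the critical point

B = lambda T_r: 1/3 - 9/(8*T_r)  # reduced second virial coefficient
print(B(27/8))                   # 0.0 at the Boyle temperature
```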
Figure 6 plots various isotherms of vs . Also shown are the spinodal and coexistence curves described previously. The subcritical isotherm consists of stable, metastable, and unstable segments (identified in the same way as in Fig. 1). Also included are the zero initial slope isotherm and the one corresponding to infinite temperature.
Figure 7 shows a generalized compressibility chart for a vdW gas. Like all other vdW properties, this is not quantitatively correct for most gases, but it has the correct qualitative features. Note the caustic generated by the crossing isotherms.
Virial expansion
Statistical mechanics suggests that the compressibility factor can be expressed by a power series, called a virial expansion:
$Z = 1 + \frac{B(T)}{v} + \frac{C(T)}{v^2} + \cdots,$
where the functions $B(T), C(T), \ldots$ are the virial coefficients; the $n$th term represents an $n$-particle interaction.
Expanding the term $v/(v - b) = 1/(1 - b/v)$ in the definition of $Z$ into an infinite series, convergent for $b/v < 1$, produces
$Z = 1 + \left(b - \frac{a}{RT}\right)\frac{1}{v} + \left(\frac{b}{v}\right)^2 + \left(\frac{b}{v}\right)^3 + \cdots$
The corresponding expression for $Z$ in reduced variables is
$Z = 1 + \left(\frac{1}{3} - \frac{9}{8T_r}\right)\frac{1}{v_r} + \left(\frac{1}{3v_r}\right)^2 + \left(\frac{1}{3v_r}\right)^3 + \cdots$
These are the virial expansions, one dimensional and one dimensionless, for the van der Waals fluid. The second virial coefficient, $B(T) = b - a/(RT)$, is the slope of $Z$ as a function of $1/v$ at $1/v = 0$. Notice that it can be positive when $T > T_B$ or negative when $T < T_B$, which agrees with the result found previously by differentiation.
For molecules modeled as non-attracting hard spheres, $a = 0$, and the vdW virial expansion becomes
$Z = 1 + \frac{b}{v} + \left(\frac{b}{v}\right)^2 + \cdots,$
which illustrates the effect of the excluded volume alone. It was recognized early on that this was in error beginning with the term in $(b/v)^2$. Boltzmann calculated the correct value of its coefficient as $5b^2/8$, and used the result to propose an enhanced version of the vdW equation.
On expanding, this produced the correct coefficients through the term in $(b/v)^2$ and also gave infinite pressure at a molar volume approximately corresponding to close packing for hard spheres. This was one of the first of many equations of state proposed over the years that attempted to make quantitative improvements to the remarkably accurate explanations of real gas behavior produced by the vdW equation.
Mixtures
In 1890 van der Waals published an article that initiated the study of fluid mixtures. It was subsequently included as Part III of a later published version of his thesis. His essential idea was that in a binary mixture of vdW fluids described by the equations
$\left(p + \frac{a_1}{v^2}\right)(v - b_1) = RT, \qquad \left(p + \frac{a_2}{v^2}\right)(v - b_2) = RT,$
the mixture is also a vdW fluid given by
$\left(p + \frac{a_x}{v^2}\right)(v - b_x) = RT,$
where
$a_x = a_1x_1^2 + 2a_{12}x_1x_2 + a_2x_2^2, \qquad b_x = b_1x_1^2 + 2b_{12}x_1x_2 + b_2x_2^2.$
Here $x_1$ and $x_2$, with $x_1 + x_2 = 1$ (so that $x_2 = 1 - x_1$), are the mole fractions of the two fluid substances. Adding the equations for the two pure fluids shows that the mixture pressure is not simply the sum of the pure-component pressures, although equality holds in the ideal gas limit. The quadratic forms for $a_x$ and $b_x$ are a consequence of the forces between pairs of molecules. This was first shown by Lorentz, and was credited to him by van der Waals. The quantities $a_1, b_1$ and $a_2, b_2$ in these expressions characterize collisions between two molecules of the same fluid component, while $a_{12}$ and $b_{12}$ represent collisions between one molecule of each of the two different fluid components. This idea of van der Waals' was later called a one-fluid model of mixture behavior.
Assuming that $b_{12}$ is the arithmetic mean of $b_1$ and $b_2$, $b_{12} = (b_1 + b_2)/2$, substituting into the quadratic form, and noting that $x_1 + x_2 = 1$ produces
$b_x = b_1x_1 + b_2x_2.$
Van der Waals wrote this relation, but did not make use of it initially. However, it has been used frequently in subsequent studies, and its use is said to produce good agreement with experimental results at high pressure.
Common tangent construction
In this article, van der Waals used the Helmholtz potential minimum principle to establish the conditions of stability. This principle states that in a system in diathermal contact with a heat reservoir at temperature $T$, the Helmholtz potential is a minimum at equilibrium: $dF = 0$ and $d^2F > 0$. Since, like $g$, the molar Helmholtz function $f$ is also a potential function whose differential is
$df = -s\,dT - p\,dv,$
this minimum principle leads to the stability condition $(\partial^2 f/\partial v^2)_T = -(\partial p/\partial v)_T \ge 0$. This condition means that the function $f(v, T)$ is convex at all stable states of the system. Moreover, for those states the previous stability condition for the pressure is necessarily satisfied as well.
Single fluid
For a single substance, the definition of the molar Gibbs free energy can be written in the form $f = -pv + g$. Thus when $p$ and $g$ are constant along with temperature, this function represents a straight line in the $(v, f)$ plane with slope $-p$ and intercept $g$. Since the curve $f(v)$ has positive curvature everywhere when $T > T_c$, the curve and the straight line will have a single tangent. However, for a subcritical temperature $f(v)$ is not everywhere convex. With $T < T_c$ and a suitable value of $p$, the line will be tangent to $f(v)$ at the molar volume of each coexisting phase: saturated liquid $v_f$ and saturated vapor $v_g$; there will be a double tangent. Furthermore, each of these points is characterized by the same values of $p$, $T$, and $g$. These are the same three specifications for coexistence that were used previously.
Figure 8 depicts an evaluation of $f(v)$ as a green curve, with $v_f$ and $v_g$ marked by the left and right green circles, respectively. The region on the green curve for $v < v_f$ corresponds to the liquid state. As $v$ increases past $v_f$, the curvature of $f$ (proportional to $-(\partial p/\partial v)_T$) continually decreases. The inflection point, characterized by zero curvature, is a spinodal point; between $v_f$ and this point is the metastable superheated liquid. For further increases in $v$ the curvature decreases to a minimum, then increases to another (zero curvature) spinodal point; between these two spinodal points is the unstable region in which the fluid cannot exist in a homogeneous equilibrium state (represented by the dotted grey curve). With a further increase in $v$ the curvature increases to a maximum at $v_g$, where the slope is $-p_s$; the region between this point and the second spinodal point is the metastable subcooled vapor. Finally, the region $v > v_g$ is the vapor. In this region the curvature continually decreases until it is zero at infinitely large $v$. The double tangent line (solid black) that runs between $v_f$ and $v_g$ represents states that are stable but heterogeneous, not homogeneous solutions of the vdW equation. The states above this line (with larger Helmholtz free energy) are either metastable or unstable. The combined solid green-black curve in Figure 8 is the convex envelope of $f(v)$, which is defined as the largest convex curve that is less than or equal to the function.
For a vdW fluid, the molar Helmholtz potential is
$f(v, T) = \psi(T) - RT\ln(v - b) - \frac{a}{v},$
where $\psi(T) = u_0 - Ts_0 + cRT(1 - \ln T)$ collects the volume-independent terms. Its volume derivative is
$-\left(\frac{\partial f}{\partial v}\right)_T = \frac{RT}{v - b} - \frac{a}{v^2} = p,$
which is the vdW equation, as expected. A plot of this function $f(v)$, whose slope at each point is specified by the vdW equation, for the subcritical isotherm is shown in Figure 8 along with the line tangent to it at its two coexisting saturation points. The data illustrated in Figure 8 are exactly the same as those shown in Figure 1 for this isotherm. This double tangent construction thus provides a graphical alternative to the Maxwell construction for establishing the saturated liquid and vapor points on an isotherm.
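The double tangent can also be located numerically. The sketch below works with the volume-dependent part of the reduced Helmholtz potential (the volume-independent terms drop out of the tangency conditions); the solver and starting guesses are choices of this sketch.

```python
# Sketch: find the common tangent of the reduced Helmholtz curve numerically;
# this is equivalent to the Maxwell construction. Requires SciPy.
import numpy as np
from scipy.optimize import fsolve

def f_r(v, T):
    return -(8*T/3)*np.log(3*v - 1) - 3/v   # reduced, up to a function of T

def df_r(v, T):
    return -8*T/(3*v - 1) + 3/v**2          # = -p_r(v, T)

def tangent_eqs(x, T):
    vf, vg = x
    eq1 = df_r(vf, T) - df_r(vg, T)                          # equal slopes
    eq2 = f_r(vg, T) - f_r(vf, T) - df_r(vf, T)*(vg - vf)    # same tangent line
    return [eq1, eq2]

T = 0.9
vf, vg = fsolve(tangent_eqs, x0=[0.6, 3.0], args=(T,))
print(vf, vg, -df_r(vf, T))   # saturation volumes and pressure, as in Fig. 8
```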
Binary fluid
Van der Waals used the Helmholtz function because its properties could be easily extended to the binary fluid situation. In a binary mixture of vdW fluids, the Helmholtz potential is a function of two variables, $f(v, x)$ at fixed $T$, where $x$ is a composition variable (for example $x = x_2$, so $x_1 = 1 - x$). In this case, there are three stability conditions:
$\frac{\partial^2f}{\partial v^2} \ge 0, \qquad \frac{\partial^2f}{\partial x^2} \ge 0, \qquad \frac{\partial^2f}{\partial v^2}\frac{\partial^2f}{\partial x^2} - \left(\frac{\partial^2f}{\partial v\,\partial x}\right)^2 \ge 0,$
and the Helmholtz potential is a surface (of physical interest in the region $0 \le x \le 1$). The first two stability conditions show that the curvature in each of the directions $v$ and $x$ is non-negative for stable states, while the third condition indicates that stable states correspond to elliptic points on this surface. Moreover, its limit,
$\frac{\partial^2f}{\partial v^2}\frac{\partial^2f}{\partial x^2} - \left(\frac{\partial^2f}{\partial v\,\partial x}\right)^2 = 0,$
specifies the spinodal curves on the surface.
For a binary mixture, the Euler equation can be written in the form
$f = -pv + \mu_1x_1 + \mu_2x_2 = -pv + (\mu_2 - \mu_1)x + \mu_1,$
where $\mu_1$ and $\mu_2$ are the molar chemical potentials of each substance. For constant values of $T$, $p$, $\mu_1$, and $\mu_2$, this equation describes a plane with slope $-p$ in the $v$ direction, slope $\mu_2 - \mu_1$ in the $x$ direction, and intercept $\mu_1$. As in the case of a single substance, here the plane and the surface can have a double tangent, and the locus of the coexisting phase points forms a curve on each surface. The coexistence conditions are that the two phases have the same $T$, $p$, $\mu_2 - \mu_1$, and $\mu_1$; the last two are equivalent to having the same $\mu_1$ and $\mu_2$ individually, which are just the Gibbs conditions for material equilibrium in this situation. The two methods of producing the coexistence surface are equivalent.
Although this case is similar to that of a single fluid, here the geometry can be much more complex. The surface can develop a wave (called a plait or fold) in the direction as well as the one in the direction. Therefore, there can be two liquid phases that can be either miscible, or wholly or partially immiscible, as well as a vapor phase. Despite a great deal of both theoretical and experimental work on this problem by van der Waals and his successors—work which produced much useful knowledge about the various types of phase equilibria that are possible in fluid mixtures—complete solutions to the problem were only obtained after 1967, when the availability of modern computers made calculations of mathematical problems of this complexity feasible for the first time. The results obtained were, in Rowlinson's words,
a spectacular vindication of the essential physical correctness of the ideas behind the van der Waals equation, for almost every kind of critical behavior found in practice can be reproduced by the calculations, and the range of parameters that correlate with the different kinds of behavior are intelligible in terms of the expected effects of size and energy.
Mixing rules
In order to obtain these numerical results, the values of the constants $a_i$ and $b_i$ of the individual component fluids must be known. In addition, the effect of collisions between molecules of the different components, given by $a_{12}$ and $b_{12}$, must also be specified. In the absence of experimental data, or computer modeling results to estimate their values, the empirical combining rules, geometric and arithmetic means respectively, can be used:
$a_{12} = (a_1a_2)^{1/2}, \qquad b_{12} = \frac{b_1 + b_2}{2}.$
These relations correspond to the empirical combining rules for the intermolecular force constants,
$\varepsilon_{12} = (\varepsilon_1\varepsilon_2)^{1/2}, \qquad \sigma_{12} = \frac{\sigma_1 + \sigma_2}{2},$
the first of which follows from a simple interpretation of the dispersion forces in terms of polarizabilities of the individual molecules, while the second is exact for rigid molecules. Using these empirical combining rules to generalize to $n$ fluid components, the quadratic mixing rules for the material constants are:
$a_x = \sum_i\sum_j x_ix_j(a_ia_j)^{1/2}, \qquad b_x = \sum_i\sum_j x_ix_j\,\frac{b_i + b_j}{2}.$
These expressions come into use when mixing gases in proportion, such as when producing tanks of air for diving and managing the behavior of fluid mixtures in engineering applications. However, more sophisticated mixing rules are often necessary, in order to obtain satisfactory agreement with reality over the wide variety of mixtures encountered in practice.
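A minimal implementation of the quadratic mixing rules with these combining rules might look as follows; the component constants here are placeholders rather than measured values.

```python
# Sketch: van der Waals one-fluid quadratic mixing rules with the empirical
# combining rules (geometric mean for a_ij, arithmetic mean for b_ij).
import math

def mix_vdw(x, a, b):
    """x, a, b: lists of mole fractions and pure-component vdW constants."""
    n = len(x)
    a_mix = b_mix = 0.0
    for i in range(n):
        for j in range(n):
            a_ij = math.sqrt(a[i]*a[j])   # geometric-mean combining rule
            b_ij = 0.5*(b[i] + b[j])      # arithmetic-mean combining rule
            a_mix += x[i]*x[j]*a_ij
            b_mix += x[i]*x[j]*b_ij
    return a_mix, b_mix

print(mix_vdw([0.3, 0.7], a=[0.55, 0.14], b=[3.0e-5, 3.9e-5]))
```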
Another method of specifying the vdW constants, pioneered by W.B. Kay and known as Kay's rule, specifies the effective critical temperature and pressure of the fluid mixture by
$T_{cx} = \sum_i x_iT_{ci}, \qquad p_{cx} = \sum_i x_ip_{ci}.$
In terms of these quantities, the vdW mixture constants are
$a_x = \frac{27R^2T_{cx}^2}{64p_{cx}}, \qquad b_x = \frac{RT_{cx}}{8p_{cx}},$
which Kay used as the basis for calculations of the thermodynamic properties of mixtures. Kay's idea was adopted by T. W. Leland, who applied it to the molecular parameters $\sigma$ and $\varepsilon$, which are related to the material constants through $b \propto N_A\sigma^3$ and $a \propto N_A^2\varepsilon\sigma^3$. Using these together with the quadratic mixing rules for $a$ and $b$ produces
$\sigma_x^3 = \sum_i\sum_j x_ix_j\sigma_{ij}^3, \qquad \varepsilon_x\sigma_x^3 = \sum_i\sum_j x_ix_j\varepsilon_{ij}\sigma_{ij}^3,$
which is the van der Waals approximation expressed in terms of the intermolecular constants. This approximation, when compared with computer simulations for mixtures, is in good agreement over a range of size ratios, namely for molecules of similar diameters. In fact, Rowlinson said of this approximation, "It was, and indeed still is, hard to improve on the original van der Waals recipe when expressed in [this] form".
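By way of comparison, here is a sketch of Kay's rule: pseudocritical constants are mole-fraction averages, from which the vdW constants follow by inverting the critical-point relations derived earlier. The critical data below are approximate values for methane and ethane, used only for illustration.

```python
# Sketch: Kay's rule for a mixture's pseudocritical constants, and the vdW
# constants implied by them.
R = 8.314

def kays_rule(x, Tc, pc):
    Tc_mix = sum(xi*Tci for xi, Tci in zip(x, Tc))
    pc_mix = sum(xi*pci for xi, pci in zip(x, pc))
    return Tc_mix, pc_mix

def vdw_from_critical(Tc, pc):
    a = 27*R**2*Tc**2/(64*pc)   # inverting p_c = a/(27 b^2), T_c = 8a/(27 R b)
    b = R*Tc/(8*pc)
    return a, b

Tc_mix, pc_mix = kays_rule([0.5, 0.5], Tc=[190.6, 305.3], pc=[4.60e6, 4.87e6])
print(vdw_from_critical(Tc_mix, pc_mix))
```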
Mathematical and empirical validity
Since van der Waals presented his thesis, "[m]any derivations, pseudo-derivations, and plausibility arguments have been given" for it. However, no mathematically rigorous derivation of the equation over its entire range of molar volume that begins from a statistical mechanical principle exists. Indeed, such a proof is not possible, even for hard spheres. Goodstein put it this way, "Obviously the value of the van der Waals equation rests principally on its empirical behavior rather than its theoretical foundation."
Nevertheless, a review of the work that has been done is useful in order to better understand where and when the equation is valid mathematically, and where and why it fails.
Review
The classical canonical partition function of statistical mechanics for a three-dimensional $N$-particle macroscopic system is
$Q_N = \frac{Z_N}{N!\,\Lambda^{3N}}, \qquad Z_N = \int e^{-\Phi/k_BT}\,d^{3N}r,$
where $\Lambda = h/\sqrt{2\pi mk_BT}$ is the de Broglie wavelength (alternatively $\Lambda^{-3}$ is the quantum concentration), $Z_N$ is the particle configuration integral, and $\Phi$ is the intermolecular potential energy, which is a function of the particle position vectors $\mathbf{r}_1, \ldots, \mathbf{r}_N$. Lastly, $d^{3N}r$ is the volume element of the configuration space, which is a $3N$-dimensional space.
The connection of $Q_N$ with thermodynamics is made through the Helmholtz free energy, $F = -k_BT\ln Q_N$, from which all other properties can be found; in particular $p = -(\partial F/\partial V)_T$. For point particles that have no force interactions ($\Phi = 0$), all integrals of $Z_N$ can be evaluated, producing $Z_N = V^N$. In the thermodynamic limit, $N, V \to \infty$ with $N/V$ finite, the Helmholtz free energy per particle (or per mole, or per unit mass) is finite; for example, per mole it is $f = -RT\left[\ln\left(v/(N_A\Lambda^3)\right) + 1\right]$. The thermodynamic state equations in this case are those of a monatomic ideal gas, specifically $p = RT/v$ and $u = \tfrac{3}{2}RT$.
Early derivations of the vdW equation were criticized mainly on two grounds. First, a rigorous derivation from the partition function should produce an equation that does not include unstable states, for which $(\partial p/\partial v)_T > 0$. Second, the constant $b$ in the vdW equation equals four times the volume of the molecules themselves ($b = 4N_Av_0$, where $v_0$ is the volume of a single molecule), which gives a maximum possible packing density of $1/4 = 0.25$, whereas the known close-packing density of spheres is $\pi/(3\sqrt{2}) \approx 0.74$. Thus a single value of $b$ cannot describe both gas and liquid states.
The second criticism is an indication that the vdW equation cannot be valid over the entire range of molar volume. Van der Waals was well aware of this problem; he devoted about 30% of his Nobel lecture to it, and also said that it is
... the weak point in the study of the equation of state. I still wonder whether there is a better way. In fact this question continually obsesses me, I can never free myself from it, it is with me even in my dreams.
In 1949 the first criticism was proved by van Hove when he showed that in the thermodynamic limit, hard spheres with finite-range attractive forces have a finite Helmholtz free energy per particle. Furthermore, this free energy is a continuously decreasing function of the volume per particle (see Fig. 8, where $f$ and $v$ are molar quantities). In addition, its derivative exists and defines the pressure, which is a non-increasing function of the volume per particle. Since the vdW equation has states for which the pressure increases with increasing volume per particle, this proof means it cannot be derived from the partition function without an additional constraint that precludes those states.
In 1891 Korteweg used kinetic theory ideas to show that a system of hard rods of length $\sigma$, constrained to move along a straight line of length $L$ and exerting only direct contact forces on one another, satisfies a vdW equation with $a = 0$; Rayleigh also knew this. Tonks, by evaluating the configuration integral, later showed that the force exerted on a wall by this system is given by $F = Nk_BT/(L - N\sigma)$. This can be put in a more recognizable, molar, form by dividing by the rod cross-sectional area $A$ and defining $b = N_A\sigma A$. This produces $p(v - b) = RT$; there is no condensation, since $(\partial p/\partial v)_T < 0$ for all $v > b$. This result is obtained because in one dimension, particles cannot pass by one another as they can in higher dimensions; their mass center coordinates satisfy the ordering relations $x_{k+1} - x_k \ge \sigma$. As a result, the configuration integral is $(L - N\sigma)^N$.
In 1959 this one-dimensional gas model was extended by Kac to include particle pair interactions through an attractive potential, $\varphi(x) = -\alpha e^{-\gamma x}$. This specific form allowed evaluation of the grand partition function,
$\Xi = \sum_{N\ge0} z^NQ_N,$
in the thermodynamic limit, in terms of the eigenfunctions and eigenvalues of a homogeneous integral equation. Although an explicit equation of state was not obtained, it was proved that the pressure was a strictly decreasing function of the volume per particle; hence condensation did not occur.
Four years later, in 1963, Kac together with Uhlenbeck and Hemmer modified the pair potential of Kac's previous work as $\varphi(x) = -\alpha\gamma e^{-\gamma x}$, so that its integral,
$\int_0^\infty\varphi(x)\,dx = -\alpha,$
was independent of $\gamma$. They found that a second limiting process they called the van der Waals limit, $\gamma \to 0$ (in which the pair potential becomes both infinitely long range and infinitely weak), performed after the thermodynamic limit, produced the one-dimensional vdW equation (here rendered in molar form)
$p = \frac{RT}{v - b} - \frac{a}{v^2},$
as well as the Gibbs criterion, $g_f = g_g$ (equivalently the Maxwell construction). As a result, all isotherms satisfy the condition $(\partial p/\partial v)_T \le 0$, as shown in Figure 9, and hence the first criticism of the vdW equation is not as serious as originally thought.
Then, in 1966, Lebowitz and Penrose generalized what they called the Kac potential to apply to a nonspecific function in $d$ dimensions:
$\phi(r) = \gamma^d\varphi(\gamma r).$
For $d = 1$ and $\varphi(x) = -\alpha e^{-x}$ this reduces to the specific one-dimensional function considered by Kac, et al., and for $d = 3$ it is an arbitrary function (although subject to specific requirements) in physical three-dimensional space. In fact, the function must be bounded, non-negative in the magnitude of its attraction, and one whose integral,
$\int\varphi(x)\,d^dx = \alpha,$
is finite, independent of $\gamma$. By obtaining upper and lower bounds on $Z_N$ and hence on the free energy, taking the thermodynamic limit ($N, V \to \infty$ with $N/V$ finite) to obtain upper and lower bounds on the limit function, then subsequently taking the van der Waals limit, they found that the two bounds coalesced and thereby produced a unique limit (here written in terms of the free energy per mole and the molar volume):
$f(T, v) = \mathrm{CE}\left[f_0(T, v) - \frac{a}{v}\right].$
The abbreviation CE stands for "convex envelope"; this is a function which is the largest convex function that is less than or equal to the original function. The function $f_0$ is the limit function when the attraction is absent; also here $a$ is the molar equivalent of the potential integral. This result is illustrated in Figure 8 by the solid green curves and black line, which is the convex envelope of $f(v)$.
The corresponding limit for the pressure is a generalized form of the vdW equation,
$p = p_0(T, v) - \frac{a}{v^2},$
together with the Gibbs criterion, $g_f = g_g$ (equivalently the Maxwell construction). Here $p_0$ is the pressure when attractive molecular forces are absent.
The conclusion from all this work is that a rigorous mathematical derivation from the partition function produces a generalization of the vdW equation together with the Gibbs criterion if the attractive force is infinitely weak with an infinitely long range. In that case, $p_0(T, v)$, the pressure that results from direct particle collisions (or more accurately the core repulsive forces), replaces the vdW term $RT/(v - b)$. This is consistent with the second criticism, which can be stated as $p_0 \ne RT/(v - b)$ over the full range of $v$. Consequently, the vdW equation cannot be rigorously derived from the configuration integral over the entire range of $v$.
Nevertheless, it is possible to rigorously show that the vdW equation is equivalent to a two-term approximation of the virial equation. Hence it can be rigorously derived from the partition function as a two-term approximation in the additional limit of large molar volume, $v \gg b$.
The virial equation of state
This derivation is simplest when begun from the grand partition function, $\Xi = \sum_{N\ge0} z^NQ_N$ (where $z$ is the fugacity).
In this case, the connection with thermodynamics is through $pV = k_BT\ln\Xi$, together with the number of particles $N = z\,\partial\ln\Xi/\partial z$. Substituting the expression for $Q_N$ in the series for $\Xi$ produces
$\frac{p}{k_BT} = \frac{1}{V}\ln\Xi = \sum_{j\ge1} b_jz^j.$
Expanding $\ln\Xi$ in its convergent power series, using the series for $\Xi$ in each term, and equating powers of $z$ produces relations that can be solved for the cluster coefficients $b_j$ in terms of the configuration integrals $Z_N$; for example, $b_1$ is fixed by $Z_1$, $b_2$ by $Z_1$ and $Z_2$, and so on.
Then from $N = z\,\partial\ln\Xi/\partial z$, the number density $\rho = N/V$ is expressed as the series
$\rho = \sum_{j\ge1} jb_jz^j,$
which can be inverted to give $z$ as a power series in $\rho$.
The coefficients of the inverted series are given in terms of the $b_j$ by the Lagrange inversion theorem, or determined by substituting the inverted series into the series for $\rho$ and equating powers of $\rho$. Finally, using this series in the series for $p$ produces the virial expansion, or virial equation of state,
$\frac{p}{k_BT} = \rho\left[1 + B_2(T)\rho + B_3(T)\rho^2 + \cdots\right].$
The second virial coefficient
This conditionally convergent series is also an asymptotic power series for the limit $\rho \to 0$, and a finite number of terms is an asymptotic approximation to $p$. The dominant-order approximation in this limit is $p = \rho k_BT$, which is the ideal gas law. It can be written as an equality using order symbols, for example $p = \rho k_BT + O(\rho^2)$, which states, more accurately, that the remaining terms approach zero in proportion to $\rho^2$. The two-term approximation is $p = \rho k_BT[1 + B_2(T)\rho]$, and the expression for the second virial coefficient is
$B_2(T) = -\frac{1}{2}\int\left(e^{-\varphi(r)/k_BT} - 1\right)d^3r,$
where $\varphi(r) = \varepsilon\bar\varphi(r/\sigma)$ and $\bar\varphi$ is a dimensionless two-particle potential function. For spherically symmetric molecules, this function can be represented most simply with two parameters: a characteristic molecular diameter $\sigma$ and a binding energy $\varepsilon$, as shown in the Figure 10 plot. Also, for spherically symmetric molecules, five of the six integrals in the expression for $B_2$ can be done, with the result
$B_2(T) = -2\pi\int_0^\infty\left(e^{-\varphi(r)/k_BT} - 1\right)r^2\,dr.$
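The remaining one-dimensional integral is easy to evaluate numerically for a model potential. The sketch below uses an illustrative square-well potential with argon-like parameter values; these are assumptions of this sketch, not fitted data.

```python
# Sketch: numerical second virial coefficient per molecule,
# B2(T) = -2*pi * Integral_0^inf (exp(-phi(r)/kT) - 1) r^2 dr,
# for a square-well pair potential (hard core sigma, depth eps to lam*sigma).
import numpy as np
from scipy.integrate import quad

kB = 1.380649e-23
sigma, eps, lam = 3.4e-10, 1.65e-21, 1.5   # illustrative, argon-like numbers

def phi(r):
    if r < sigma:
        return np.inf       # hard repulsive core
    return -eps if r < lam*sigma else 0.0

def B2(T):
    integrand = lambda r: (np.exp(-phi(r)/(kB*T)) - 1.0)*r**2
    val, _ = quad(integrand, 0.0, 5*sigma, points=[sigma, lam*sigma])
    return -2*np.pi*val

print(B2(150.0), B2(1000.0))  # negative at low T, tending to 2*pi*sigma^3/3
```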
From its definition, $\varphi(r)$ is positive (repulsive) for $r < \sigma$, and negative (attractive) for $r > \sigma$ with a minimum of $-\varepsilon$ at some $r_m > \sigma$. Furthermore, $\varphi$ increases so rapidly for $r < \sigma$ that there $e^{-\varphi/k_BT} \approx 0$. In addition, in the limit $\varepsilon/k_BT \ll 1$ (the quantity $\varepsilon/k_B$ is a characteristic molecular temperature), the exponential can be approximated for $r > \sigma$ by two terms of its power series expansion. In these circumstances, $B_2$ can be approximated as
$B_2(T) \approx 2\pi\int_0^\sigma r^2\,dr - \frac{2\pi}{k_BT}\int_\sigma^\infty|\varphi(r)|\,r^2\,dr.$
On splitting the interval of integration into two parts, one less than $\sigma$ and the other greater than $\sigma$, evaluating the first integral, and making the second integration variable dimensionless using $x = r/\sigma$ produces
$B_2(T) = b_m - \frac{a_m}{k_BT},$
where $b_m = 2\pi\sigma^3/3$ and $a_m = I\varepsilon b_m$, where $I$ is a numerical factor whose value depends on the specific dimensionless intermolecular-pair potential.
Here $b = N_Ab_m$ and $a = N_A^2a_m$, where $a$ and $b$ are the constants given in the introduction. The condition that $a_m$ be finite requires that $|\varphi(r)|r^2$ be integrable over the range $\sigma < r < \infty$. This result indicates that a dimensionless $B_2$ (that is, a function of a dimensionless molecular temperature $k_BT/\varepsilon$) is a universal function for all real gases with an intermolecular pair potential of the form $\varphi = \varepsilon\bar\varphi(r/\sigma)$. This is an example of the principle of corresponding states on the molecular level. Moreover, this is true in general and has been developed extensively both theoretically and experimentally.
The van der Waals approximation
Substituting the (approximate in $\varepsilon/k_BT$) expression for $B_2$ into the two-term virial approximation produces
$p = \frac{RT}{v}\left[1 + \frac{1}{v}\left(b - \frac{a}{RT}\right)\right] = \frac{RT}{v}\left(1 + \frac{b}{v}\right) - \frac{a}{v^2}.$
Here the approximation is written in terms of molar quantities; its first two terms are the same as the first two terms of the vdW virial equation.
The Taylor expansion of $(1 - b/v)^{-1}$, uniformly convergent for $b/v < 1$, can be written as $1 + b/v + O\!\left((b/v)^2\right)$, so substituting $(1 - b/v)^{-1}$ for $1 + b/v$ produces
$p = \frac{RT}{v - b} - \frac{a}{v^2} + O\!\left((b/v)^2\right).$
Alternatively this is
$\left(p + \frac{a}{v^2}\right)(v - b) \approx RT,$
which is the vdW equation.
Summary
According to this derivation, the vdW equation is an equivalent of the two-term approximation of the virial equation of statistical mechanics in the limits of low density and high temperature. Consequently the equation produces an accurate approximation in a region defined by $v \gg b$ (on a molecular basis, $\rho\sigma^3 \ll 1$), which corresponds to a dilute gas. But as the density increases, the behavior of the vdW approximation and the two-term virial expansion differ markedly. Whereas the virial approximation in this instance either increases or decreases continuously, the vdW approximation together with the Maxwell construction expresses physical reality in the form of a phase change, while also indicating the existence of metastable states. This difference in behaviors was pointed out by Korteweg and Rayleigh (see Rowlinson) in the course of their dispute with Tait about the vdW equation.
In this extended region, use of the vdW equation is not justified mathematically; however, it has empirical validity. Its various applications in this region that attest to this, both qualitative and quantitative, have been described previously in this article. This point was also made by Alder, et al. who, at a conference marking the 100th anniversary of van der Waals' thesis, noted that:
It is doubtful whether we would celebrate the centennial of the Van der Waals equation if it were applicable only under circumstances where it has been proven to be rigorously valid. It is empirically well established that many systems whose molecules have attractive potentials that are neither long-range nor weak conform nearly quantitatively to the Van der Waals model. An example is the theoretically much studied system of Argon, where the attractive potential has only a range half as large as the repulsive core. They continued by saying that this model has "validity down to temperatures below the critical temperature, where the attractive potential is not weak at all but, in fact, comparable to the thermal energy." They also described its application to mixtures "where the Van der Waals model has also been applied with great success. In fact, its success has been so great that not a single other model of the many proposed since, has equalled its quantitative predictions, let alone its simplicity."
Engineers have made extensive use of this empirical validity, modifying the equation in numerous ways (by one account there have been some 400 cubic equations of state produced) in order to manage the liquids, and gases of pure substances and mixtures, that they encounter in practice.
This situation has been aptly described by Boltzmann:
...van der Waals has given us such a valuable tool that it would cost us much trouble to obtain by the subtlest deliberations a formula that would really be more useful than the one that van der Waals found by inspiration, as it were.
| Physical sciences | Thermodynamics | Physics |
206115 | https://en.wikipedia.org/wiki/Schwarzschild%20radius | Schwarzschild radius | The Schwarzschild radius or the gravitational radius is a physical parameter in the Schwarzschild solution to Einstein's field equations that corresponds to the radius defining the event horizon of a Schwarzschild black hole. It is a characteristic radius associated with any quantity of mass. The Schwarzschild radius was named after the German astronomer Karl Schwarzschild, who calculated this exact solution for the theory of general relativity in 1916.
The Schwarzschild radius is given as
$r_s = \frac{2GM}{c^2},$
where G is the gravitational constant, M is the object mass, and c is the speed of light.
History
In 1916, Karl Schwarzschild obtained the exact solution to Einstein's field equations for the gravitational field outside a non-rotating, spherically symmetric body with mass $M$ (see Schwarzschild metric). The solution contained terms of the form $1 - r_s/r$ and its reciprocal, which become singular at $r = 0$ and $r = r_s$, respectively. The radius $r_s$ has come to be known as the Schwarzschild radius. The physical significance of these singularities was debated for decades. It was found that the one at $r = r_s$ is a coordinate singularity, meaning that it is an artifact of the particular system of coordinates that was used, while the one at $r = 0$ is a spacetime singularity and cannot be removed. The Schwarzschild radius is nonetheless a physically relevant quantity, as noted above and below.
This expression had previously been calculated, using Newtonian mechanics, as the radius of a spherically symmetric body at which the escape velocity was equal to the speed of light. It had been identified in the 18th century by John Michell and Pierre-Simon Laplace.
Parameters
The Schwarzschild radius of an object is proportional to its mass. Accordingly, the Sun has a Schwarzschild radius of approximately 3.0 km, whereas Earth's is approximately 9 mm and the Moon's is approximately 0.1 mm.
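These values follow directly from $r_s = 2GM/c^2$, as a short sketch shows (the masses are approximate handbook values):

```python
# Sketch: Schwarzschild radius r_s = 2GM/c^2 for a few bodies.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s

def schwarzschild_radius(mass_kg):
    return 2*G*mass_kg/c**2

for name, m in [("Sun", 1.989e30), ("Earth", 5.972e24), ("Moon", 7.342e22)]:
    print(name, schwarzschild_radius(m), "m")
# Sun ~2.95e3 m, Earth ~8.87e-3 m, Moon ~1.09e-4 m
```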
Derivation
The simplest way of deriving the Schwarzschild radius is the heuristic Newtonian argument that equates the kinetic energy of a body moving at the speed of light with the magnitude of its gravitational potential energy at radius $r_s$ (equivalently, setting the escape velocity equal to $c$):
$\frac{1}{2}mc^2 = \frac{GMm}{r_s}.$
So, the Schwarzschild radius reads as
$r_s = \frac{2GM}{c^2}.$
Black hole classification by Schwarzschild radius
Any object whose radius is smaller than its Schwarzschild radius is called a black hole. The surface at the Schwarzschild radius acts as an event horizon in a non-rotating body (a rotating black hole operates slightly differently). Neither light nor particles can escape through this surface from the region inside, hence the name "black hole".
Black holes can be classified based on their Schwarzschild radius, or equivalently, by their density, where density is defined as mass of a black hole divided by the volume of its Schwarzschild sphere. As the Schwarzschild radius is linearly related to mass, while the enclosed volume corresponds to the third power of the radius, small black holes are therefore much more dense than large ones. The volume enclosed in the event horizon of the most massive black holes has an average density lower than main sequence stars.
Supermassive black hole
A supermassive black hole (SMBH) is the largest type of black hole, containing on the order of hundreds of thousands to billions of solar masses, though there are few official criteria for how such an object is classified. (Supermassive black holes of up to 21 billion solar masses have been detected, such as the one in NGC 4889.) Unlike stellar mass black holes, supermassive black holes have comparatively low average densities. (Note that a (non-rotating) black hole is a spherical region in space that surrounds the singularity at its center; it is not the singularity itself.) With that in mind, the average density of a supermassive black hole can be less than the density of water.
The Schwarzschild radius of a body is proportional to its mass and therefore to its volume, assuming that the body has a constant mass-density. In contrast, the physical radius of the body is proportional to the cube root of its volume. Therefore, as the body accumulates matter at a given fixed density (in this example, 997 kg/m3, the density of water), its Schwarzschild radius will increase more quickly than its physical radius. When a body of this density has grown to around 136 million solar masses, its physical radius would be overtaken by its Schwarzschild radius, and thus it would form a supermassive black hole.
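Setting $M = \rho\,(4/3)\pi r_s^3$ with $r_s = 2GM/c^2$ and solving for $M$ gives $M = \sqrt{3c^6/(32\pi G^3\rho)}$, which a few lines of Python confirm for the density of water:

```python
# Sketch: mass at which a body of uniform density rho lies entirely inside
# its own Schwarzschild radius.
import math

G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30

def critical_mass(rho):
    return math.sqrt(3*c**6/(32*math.pi*G**3*rho))

print(critical_mass(997.0)/M_sun)   # ~1.4e8 solar masses at water density
```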
It is thought that supermassive black holes like these do not form immediately from the singular collapse of a cluster of stars. Instead they may begin life as smaller, stellar-sized black holes and grow larger by the accretion of matter, or even of other black holes.
The Schwarzschild radius of the supermassive black hole at the Galactic Center of the Milky Way is approximately 12 million kilometres. Its mass is about 4 million solar masses.
Stellar black hole
Stellar black holes have much greater average densities than supermassive black holes. If one accumulates matter at nuclear density (the density of the nucleus of an atom, about 1018 kg/m3; neutron stars also reach this density), such an accumulation would fall within its own Schwarzschild radius at about 3 solar masses and thus would be a stellar black hole.
Micro black hole
A small mass has an extremely small Schwarzschild radius. A black hole of mass similar to that of Mount Everest would have a Schwarzschild radius much smaller than a nanometre. Its average density at that size would be so high that no known mechanism could form such extremely compact objects. Such black holes might possibly be formed in an early stage of the evolution of the universe, just after the Big Bang, when densities of matter were extremely high. Therefore, these hypothetical miniature black holes are called primordial black holes.
When moving to the Planck scale, it is convenient to write the gravitational radius in terms of the Planck length and the Planck mass (see also virtual black hole).
Other uses
In gravitational time dilation
Gravitational time dilation near a large, slowly rotating, nearly spherical body, such as the Earth or Sun, can be reasonably approximated as follows:
$\frac{t_r}{t} = \sqrt{1 - \frac{r_s}{r}},$
where:
$t_r$ is the elapsed time for an observer at radial coordinate r within the gravitational field;
$t$ is the elapsed time for an observer distant from the massive object (and therefore outside of the gravitational field);
$r$ is the radial coordinate of the observer (which is analogous to the classical distance from the center of the object);
$r_s$ is the Schwarzschild radius.
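As a worked example, the dilation factor at Earth's surface is a minimal calculation (the constants are approximate):

```python
# Sketch: gravitational time dilation factor sqrt(1 - r_s/r) at Earth's surface.
import math

G, c = 6.674e-11, 2.998e8
M_earth, R_earth = 5.972e24, 6.371e6

r_s = 2*G*M_earth/c**2
factor = math.sqrt(1 - r_s/R_earth)
print(factor)                    # ~1 - 7e-10: surface clocks run slow
print((1 - factor)*86400*1e6)    # ~60 microseconds lost per day
```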
Compton wavelength intersection
The Schwarzschild radius ($r_s$) and the Compton wavelength ($\lambda_C = h/mc$) corresponding to a given mass are similar when the mass is around one Planck mass ($m_P = \sqrt{\hbar c/G} \approx 2.18 \times 10^{-8}$ kg), when both are of the same order as the Planck length ($\ell_P = \sqrt{\hbar G/c^3} \approx 1.6 \times 10^{-35}$ m).
Gravitational radius and the Heisenberg Uncertainty Principle
Thus $r_s\,\lambda_C \sim \ell_P^2$, which is another form of the Heisenberg uncertainty principle on the Planck scale. | Physical sciences | Basics_2 | Astronomy |
206119 | https://en.wikipedia.org/wiki/Night | Night | Night, or nighttime, is the period of darkness when the Sun is below the horizon. The opposite of nighttime is daytime. Sunlight illuminates one side of the Earth, leaving the other in darkness. Earth's rotation causes the appearance of sunrise and sunset. Moonlight, airglow, starlight, and light pollution dimly illuminate night. The duration of day, night, and twilight varies depending on the time of year and the latitude. Night on other celestial bodies is affected by their rotation and orbital periods. The planets Mercury and Venus have much longer nights than Earth. On Venus, night lasts 120 Earth days. The Moon's rotation is tidally locked, rotating so that one of the sides of the Moon always faces Earth. Nightfall across portions of the near side of the Moon results in lunar phases visible from Earth.
Organisms respond to the changes brought by nightfall, including darkness, increased humidity, and lower temperatures. Their responses include direct reactions and adjustments to circadian rhythms, governed by an internal biological clock. These circadian rhythms, regulated by exposure to light and darkness, affect an organism's behavior and physiology. Animals more active at night are called nocturnal and have adaptations for low light, including different forms of night vision and the heightening of other senses. Diurnal animals are active during the day and sleep at night; mammals, birds, and some others dream while asleep. Fungi respond directly to nightfall and increase their biomass. With some exceptions, fungi do not rely on a biological clock. Plants store energy produced through photosynthesis as starch granules to consume at night. Algae engage in a similar process, and cyanobacteria transition from photosynthesis to nitrogen fixation after sunset. In arid environments like deserts, plants evolved to be more active at night, with many gathering carbon dioxide overnight for daytime photosynthesis. Night-blooming cacti rely on nocturnal pollinators such as bats and moths for reproduction. Light pollution disrupts the patterns in ecosystems and is especially harmful to night-flying insects.
Historically, night has been a time of increased danger and insecurity. Many daytime social controls dissipated after sunset. Theft, fights, murders, taboo sexual activities, and accidental deaths all became more frequent due in part to reduced visibility. Cultures have personified night through deities associated with some or all of these aspects of nighttime. The folklore of many cultures contains "creatures of the night", including werewolves, witches, ghosts, and goblins, reflecting societal fears and anxieties. The introduction of artificial lighting extended daytime activities. Major European cities hung lanterns housing candles and oil lamps in the 1600s. Nineteenth-century gas and electric lights created unprecedented illumination. The range of socially acceptable leisure activities expanded, and various industries introduced a night shift. Nightlife, encompassing bars, nightclubs, and cultural venues, has become a significant part of urban culture, contributing to social and political movements.
Astronomy
A planet's rotation causes nighttime and daytime. When a place on Earth is pointed away from the Sun, that location experiences night. The Sun appears to set in the West and rise in the East due to Earth's rotation. Many celestial bodies, including the other planets in the solar system, have a form of night.
Earth
The length of night on Earth varies depending on the time of year. Longer nights occur in winter, with the winter solstice being the longest. Nights are shorter in the summer, with the summer solstice being the shortest. Earth's rotational axis is tilted 23.44 degrees relative to the plane of its orbit around the Sun. Nights are longer when a hemisphere is tilted away from the Sun and shorter when a hemisphere is tilted toward the Sun. As a result, the longest night of the year for the Northern Hemisphere will be the shortest night of the year for the Southern Hemisphere.
Night's duration varies least near the equator. The difference between the shortest and longest night increases approaching the poles. At the equator, night lasts roughly 12 hours throughout the year. The tropics have little difference in the length of day and night. At the 45th parallel, the longest winter night is roughly twice as long as the shortest summer night. Within the polar circles, night will last the full 24 hours of the winter solstice. The length of this polar night increases closer to the poles. Utqiagvik, Alaska, the northernmost point in the United States, experiences 65 days of polar night. At the pole itself, polar night lasts 179 days from September to March.
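These latitude effects can be approximated with the standard sunrise equation, $\cos\omega_0 = -\tan\phi\,\tan\delta$, where $\phi$ is latitude and $\delta$ the solar declination. The sketch below ignores refraction and the Sun's angular size (discussed next), so its nights run slightly long.

```python
# Sketch: approximate night length from the sunrise equation.
import math

def night_hours(lat_deg, decl_deg):
    x = -math.tan(math.radians(lat_deg))*math.tan(math.radians(decl_deg))
    x = max(-1.0, min(1.0, x))          # clamp: polar day / polar night
    day = 24*math.degrees(math.acos(x))/180
    return 24 - day

print(night_hours(0, -23.44))    # equator at the December solstice: ~12 h
print(night_hours(45, -23.44))   # 45th parallel, longest night: ~15.4 h
print(night_hours(70, -23.44))   # inside the Arctic Circle: 24 h polar night
```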
Over a year, there is more daytime than nighttime because of the Sun's size and atmospheric refraction. The Sun is not a single point. Viewed from Earth, the Sun ranges in angular diameter from 31 to 33 arcminutes. When the center of the Sun falls to the western horizon, half of the Sun will still be visible during sunset. Likewise, by the time the center of the Sun rises to the eastern horizon, half of the Sun will already be visible during sunrise. This shortens night by about three minutes in temperate zones. Atmospheric refraction is a larger factor. Refraction bends sunlight over the horizon. On Earth, the Sun remains briefly visible after it has geometrically fallen below the horizon. This shortens night by about six minutes. Scattered, diffuse sunlight remains in the sky after sunset and into twilight.
There are multiple ways to define twilight, the gradual transition to and from darkness when the Sun is below the horizon. "Civil" twilight occurs when the Sun is between 0° and 6° below the horizon. Nearby planets like Venus and bright stars like Sirius are visible during this period. "Nautical" twilight continues until the Sun is 12° below the horizon. During nautical twilight, the horizon is visible enough for navigation. "Astronomical" twilight continues until the Sun has sunk 18° below the horizon. Beyond 18°, refracted sunlight is no longer visible. The period when the Sun is 18° or more below the horizon is called astronomical night.
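Because these definitions are fixed depression angles, classifying the phase of night from the Sun's altitude is a simple threshold test. A minimal sketch with an assumed function name:

```python
def twilight_phase(sun_altitude_deg):
    """Classify day, twilight, or night from the Sun's altitude in degrees
    (negative values mean the Sun is below the horizon)."""
    if sun_altitude_deg >= 0:
        return "day"
    if sun_altitude_deg > -6:
        return "civil twilight"
    if sun_altitude_deg > -12:
        return "nautical twilight"
    if sun_altitude_deg > -18:
        return "astronomical twilight"
    return "astronomical night"

print(twilight_phase(-4))   # civil twilight: Venus and Sirius visible
print(twilight_phase(-20))  # astronomical night
```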
Similar to the duration of night itself, the duration of twilight varies according to latitude. At the equator, day quickly transitions to night, while the transition can take weeks near the poles. The duration of twilight is longest at the summer solstice and shortest near the equinoxes. Moonlight, starlight, airglow, and light pollution can dimly illuminate the nighttime, with their diffuse aspects being termed skyglow. The amount of skyglow increases each year due to artificial lighting.
Other celestial bodies
Night exists on the other planets and moons in the solar system. The length of night is affected by the rotation period and orbital period of the celestial object. The lunar phases visible from Earth result from nightfall on the Moon. The Moon has longer nights than Earth, lasting about two weeks. This is half of the synodic lunar month, the time it takes the Moon to cycle through its phases. The Moon is tidally locked to Earth; it rotates so that one side of the Moon always faces the Earth. The side of the Moon facing away from Earth is called the far side of the Moon, and the side facing Earth is called the near side of the Moon. During lunar night on the near side, Earth appears about 50 times brighter than the full Moon does from Earth. Because the Moon has no atmosphere, there is an abrupt transition from day to night without twilight.
Night varies from planet to planet within the Solar System. Mars's dusty atmosphere causes a lengthy twilight period. The refracted light ranges from purple to blue, often resulting in glowing noctilucent clouds. Venus and Mercury have long nights because of their slow rotational periods. The planet Venus rotates once every 243 Earth days. Because of its unusual retrograde rotation, nights last 116.75 Earth days. The dense greenhouse atmosphere on Venus keeps its surface hot enough to melt lead throughout the night. Its planetary wind system, driven by solar heat, reverses direction from day to night. Venus's winds flow from the equator to the poles on the day side and from the poles to the equator on the night side. On Mercury, the planet closest to the Sun, the temperature drops by hundreds of degrees Celsius after nightfall.
The day-night cycle is one consideration for planetary habitability or the possibility of extraterrestrial life on distant exoplanets. In general, shorter nights result in a higher equilibrium temperature for the planet. On an Earth-like planet, longer day-night cycles may increase habitability up to a point. Computer models show that longer nights would affect Hadley circulation, resulting in a cooler, less cloudy planet. Once the rotation speed of a planet slows to less than 1/16 that of Earth, the day-to-night temperature difference increases dramatically. Some exoplanets, like those of TRAPPIST-1, are tidally locked. Tidally locked planets have equal rotation and orbital periods, so one side experiences constant day, and the other side constant night. In these situations, astrophysicists believe that life would most likely develop in the twilight zone between the day and night hemispheres.
Biology
Living organisms react directly to the darkness of night. Light and darkness also affect circadian rhythms, the physical and mental changes that occur in a 24-hour cycle. This daily cycle is regulated by an internal "biological clock" that is adjusted by exposure to light. The length and timing of nighttime depend on location and time of year. Organisms that are more active at night possess adaptations to the night's dimmer light, increased humidity, and lower temperatures.
Animals
Animals that are active primarily at night are called nocturnal and usually possess adaptations for night vision. In vertebrates' eyes, two types of photoreceptor cells sense light. Cone cells sense color but are ineffective in low light; rod cells sense only brightness but remain effective in very dim light. The eyes of nocturnal animals have a greater percentage of rod cells. In most mammals, rod cells contain densely packed DNA near the edge of the nucleus. For nocturnal mammals, this is reversed with the densely packed DNA in the center of the nucleus, which reduces the scattering of light. Some nocturnal animals have a mirror, the tapetum lucidum, behind the retina. This doubles the amount of light their eyes can process.
The compound eyes of insects can see at even lower levels of light. For example, the elephant hawk moth can see in color, including ultraviolet, by starlight alone. Nocturnal insects navigate using moonlight, lunar phases, infrared vision, the position of the stars, and the Earth's magnetic field. Artificial lighting disrupts the biorhythms of many animals. Night-flying insects that use the moon for navigation are especially vulnerable to disorientation from increasing levels of artificial lighting. Artificial lights attract many night-flying insects, which die from exhaustion or fall to nocturnal predators. Decreases in insect populations disrupt the overall ecosystem, because insect larvae are a key food source for smaller fish. Dark-sky advocate Paul Bogard described the unnatural migration of night-flying insects from the unlit Nevada desert into Las Vegas as "like sparkling confetti floating in the beam's white column".
Some nocturnal animals have developed other senses to compensate for limited light. Many snakes have a pit organ that senses infrared light and enables them to detect heat. Nocturnal mice possess a vomeronasal organ that enhances their sense of smell. Bats depend heavily on echolocation, which allows an animal to navigate by hearing: it emits sounds and listens for the time they take to bounce back. Bats emit a steady stream of clicks while hunting insects and can home in on prey as thin as a human hair.
People and other diurnal animals sleep primarily at night. Humans, other mammals, and birds experience multiple stages of sleep visible via electroencephalography. Sleep recordings distinguish wakefulness, three stages of non-rapid eye movement (NREM) sleep, including deep sleep, and rapid eye movement (REM) sleep. During REM sleep, dreams are more frequent and complex. Studies show that some reptiles may also experience REM sleep. During deep sleep, memories are consolidated into long-term memory. Invertebrates most likely experience a form of sleep as well. Studies on bees, which have complex but unrelated brain structures, have shown improvements in memory after sleep, similar to mammals.
Compared to waking life, dreams are sparse with limited sensory detail. Dreams are hallucinatory or bizarre, and they often have a narrative structure. Many hypotheses exist to explain the function of dreams without a definitive answer. Nightmares are dreams that cause distress. The word "night-mare" originally referred to nocturnal demons that were believed to assail sleeping dreamers, like the incubus (male) or succubus (female). It was believed that the demons could sit upon a dreamer's chest to suffocate a victim, as depicted in Henry Fuseli's The Nightmare.
Fungi
Fungi can sense the presence and absence of light, and the nightly changes in the growth and biological processes of most fungi are direct responses to darkness or falling temperatures. By night, fungi are more engaged in synthesizing cellular components and increasing their biomass. For example, fungi that prey on insects will infect the central nervous system of their prey, allowing the fungi to control the actions of the dying insect. During the late afternoon, the fungi pilot their prey to higher elevations where wind currents can carry their spores further. The fungi kill and digest the insect as night falls, extending fruiting bodies from the host's exoskeleton. Few species of fungi have true circadian rhythms. A notable exception is Neurospora crassa, a bread mold widely used to study biorhythms.
Plants
During the day, plants engage in photosynthesis and release oxygen. By night, plants engage in respiration, consuming oxygen and releasing carbon dioxide. Plants can draw up more water after sunset, which facilitates new leaf growth. As plants cannot create energy through photosynthesis after sunset, they use energy stored in the plant, typically as starch granules. Plants use this stored energy at a steady rate, depleting their reserves almost exactly at dawn. Plants adjust their rate of consumption to match the expected time until sunrise. This avoids prematurely running out of starch reserves, and it allows the plant to adjust for longer nights in the winter. If a plant is subjected to artificially early darkness, it will ration its energy consumption to last until dawn.
Succulent plants, including cacti, have adapted to the limited water availability in arid environments like deserts. The stomata of cacti do not open until night. When the temperature drops, the pores open to allow the cacti to store carbon dioxide for photosynthesis the next day, a process known as crassulacean acid metabolism (CAM). Cacti and most night-blooming plants use CAM to store up to 99% of the carbon dioxide they use in daily photosynthesis. Ceroid cacti often have flowers that bloom at night and fade before sunrise. As few bees are nocturnal, night-flowering plants rely on other pollinators, including moths, beetles, and bats. These flowers rely more on the pollinators' sense of smell, with strong perfumes to attract moths and foul-smelling odors to attract bats.
Eukaryotic and prokaryotic organisms that engage in photosynthesis are also affected by nightfall. Like plants, algae switch to taking in oxygen and processing energy stored as starch. Cyanobacteria, also known as blue-green algae, switch from photosynthesis to nitrogen fixation after sunset, and they absorb DNA at a much higher rate.
Culture
History and technology
Before the industrial era, night was a time of heightened insecurity. Fear of the night was common but varied in intensity across cultures. Dangers increased due to lower visibility. Injuries and deaths were caused by drowning and falling into pits, ditches, and shafts. People were less able to evaluate others after dark. Due to nocturnal alcohol consumption and the anonymity of darkness, quarrels were more likely to escalate to violence. In medieval Stockholm, the majority of murders were committed while intoxicated. Crime and fear of crime increased at night. In pre-industrial Europe, criminals disguised themselves with hats, face paint, or cloaks. Thieves would trip pedestrians with ropes laid across streets and dismount horse riders using long poles extended from the roadside shadows. They used "dark lanterns" where light could be shined through a single side. Gangs were uncommon except for housebreaking. The increased humidity of night was deemed the result of vapors and fumes. The annual movements of stars and constellations across the night sky were used to track the passage of time, but other changes in the night sky were interpreted as significant omens.
Many daytime religious, governmental, and local social controls dissipated after nightfall. Fortified Christian communities announced the coming darkness with horns, church bells, or drums. This alerted residents—like peasants working the fields—to return home before the city gates shut. The English engaged in a daily process of "shutting in", where valuables were brought into homes before they were bolted, barred, locked, and shuttered. Many English and European towns attempted to impose curfews during the medieval period and gradually loosened the restrictions via exceptions. Prayer and folk magic were more common by night. Amulets were hung to ward off nightmares, spells were cast against thievery, and pig hearts were hung in chimneys to block demons from traveling down them. The common phrase "good night" has been shortened from "God give you a good night." In Ottoman Istanbul, the royal palaces shifted to projecting nocturnal power through large parties lit by lanterns, candles, and fireworks. Though alcohol was forbidden for Muslims, after dark, Turkish Muslims went to the bars and taverns beyond the Muslim areas.
The night has long been a time of increased sexual activity, especially in taboo forms such as premarital, extramarital, gay, and lesbian sex. In colonial New England courtship, young unmarried couples practiced bundling before marriage. The couples would lie down in the woman's bed, her family would wrap them tightly with blankets, and they would spend the night together this way. Some families took precautions to prevent unintended pregnancies, like sleeping in the same room, laying a large wooden board between the couple, or pulling a single stocking over both of their daughter's legs. Historian Roger Ekirch described pre-industrial night as a "sanctuary from ordinary existence."
Artificial lighting expanded the scope of acceptable work and leisure after dark. In the 1600s, the major European cities introduced streetlights. These were lit by lamplighters each evening outside of the summer months. Early streetlights were metal and glass enclosures housing candles or oil lamps. They were suspended above streets or mounted on posts. The use of artificial lighting led to an increase in acceptable nightlife. In more rural areas, night remained a period of rest and nocturnal labor. Young adults, the urban poor, prostitutes, and thieves benefited from the anonymity of darkness and frequently smashed the new lanterns. Gas lighting was invented in the 1800s. A gas mantle was over ten times brighter than an oil lamp. Gas lighting was associated with the creation of regular police forces. In England, police departments were tasked with maintaining the gas lights, which became known as "police lamps". Daytime routines were further pushed back into the night by the electric light bulb—invented in the late 19th century—and the widespread usage of newer timekeeping devices like watches. Electric lights created night shifts for traditionally daytime fields, like India's cotton industry, and created opportunities for working adults to attend night school.
Before the widespread usage of artificial lighting, sleep was typically split into two major segments separated by about an hour of wakefulness. During this midnight period, people engaged in prayer, crimes, urination, sex, and, most commonly, reflection. Without exposure to artificial light, studies show that people revert to sleeping in two separate intervals.
Folklore and religion
Diverse cultures have made connections between the night sky and the afterlife. Many Native American peoples have described the Milky Way as a path where the deceased travel as stars. The Lakota term for the Milky Way is Wanáǧi Thacháŋku, or "Spirit's Road". In Mayan mythology, the Milky Way's dark band is the Road of Xibalba, the path to the underworld. Unrelated cultures share a myth of a star-covered sky goddess who arches over the planet after sunset, like Citlālicue, the Aztec personification of the Milky Way. The elongated Egyptian goddess Nut and N!adima from Botswana are said to consume the Sun at dusk. In the Ancient Egyptian religion, the Sun then travels through the netherworld inside Nut's body, where it is reborn at dawn.
Many cultures have personified the night. Ratri is the star-covered Hindu goddess of the night. In the Icelandic Prose Edda, night is embodied by Nótt. Ratri and Nótt are goddesses of sleep and rest, but it is common for personifications of night to be associated with misfortune. In Aztec mythology, Black Tezcatlipoca, the "Night Wind", was associated with obsidian and the nocturnal jaguar. In his "Precious Owl" manifestation, the Aztecs regarded Tezcatlipoca as the bringer of death and destruction. The Aztecs anticipated an unending night when the Tzitzimīmeh, skeletal female star deities, would descend to consume all humans. In classical mythology, the night goddess Nyx is the mother of Sleep, Death, Disease, Strife, and Doom. In Jewish culture and mysticism, the demon Lilith embodies the emotional reactions to darkness, including terror, lust, and liberation.
Nighttime in the pre-industrial period, often called the "night season", was associated with darkness and uncertainty. Various cultures have regarded the night as a time when ghosts and other spirits are active on Earth. When Protestant theologians abandoned the concept of purgatory, many came to view reported ghost sightings as the result of demonic activity. In the sixteenth century, Swiss theologian Ludwig Lavater began attempting to explain reported spirits as mistakes, deceit, or the work of demons. The idea of night as a dangerous, dark, or haunted time persists in modern urban legends like the vanishing hitchhiker.
In folklore, nocturnal preternatural beings like goblins, fairies, werewolves, pucks, brownies, banshees, and boggarts have overlapping but non-synonymous definitions. The werewolf, along with its francophone variations the loup-garou and rougarou, was believed to be a person who transformed into a beast at night. In West Africa and among the African diaspora, there is a widespread tradition of a type of vampire who removes their human skin at night and travels as a blood-sucking ball of light. Variations include the feu-follet, the Surinamese asema, the Caribbean sukuyan, the Ashanti obayifo, and the Ghanaian asanbosam. The medieval fear of night-flying European witches was influenced by the Roman strix. The Romans described the strix as capable of changing between a beautiful woman and an owl-shaped monster. Common themes among these mythical nocturnal entities include hypersexuality, predation, shapeshifting, deception, mischief, and malice.
Nightlife
Nightlife, sometimes referred to as "the night-time economy", is the range of entertainment available, and generally more popular, from the late evening into the early morning. It has traditionally included venues such as pubs, bars, nightclubs, live music, concerts, cabarets, theaters, hookah lounges, cinemas, and shows. Nightlife entertainment is often more adult-oriented than daytime entertainment. It also includes informal gatherings like parties, botellón, gymkhanas, bingo, and amateur sports. In many cities, there has been an increasing focus on nightlife catering to tourists. Nightlife has become a major part of the economy and urban planning in modern cities. People who prefer to be active at night are called night owls.
Social movements in the 20th century, including feminism, black activism, the gay rights movement, and community action, blurred the lines between political action and broader cultural activities, making political movements a part of the nightlife. Sociologists have argued that vibrant city nightlife scenes contribute to the development of culture and political movements. As examples, David Grazian cites the development of beat poetry; of musical styles including bebop, urban blues, and early rock; and of the gay rights movement in the United States, kicked off by the riots at the Stonewall Inn nightclub in Greenwich Village, Lower Manhattan, New York City. Modern cities treat nightlife as necessary to the city's marketability but also as something to be managed in order to reduce activities viewed as disorderly, risky, or otherwise problematic. Urban renewal policies have increased the possibilities available to nighttime consumers and decreased non-commercial nocturnal activities outside of sanctioned festivals and concerts.
Art
Literature
In literature, night is often associated with mysterious, hidden, dangerous, and clandestine activities. Rhesus is the only extant Greek tragedy where night is explicitly invoked and made an element of the story. In the play, night is a time of disorder and confusion that allows Odysseus to sneak into the Trojan camp and kill King Rhesus of Thrace. The handful of surviving Classical Greek texts that describe the nocturnal activities of women portray female freedom, especially to speak openly, male anxieties about that freedom, and magic that functions as a metaphor for nocturnal danger. Roman poets like Marcus Manilius and Aratus worked late into the night and incorporated darkness and the night sky into their writing.
Since the Age of Enlightenment, nocturnal settings have been a frequent place for passionate chaos as a counterbalance to the rationality present during the day. In Gothic fiction, this absence of rationality offered a space for lust and terror. Ottoman literature portrayed night as a time for forbidden or unrequited love. Night and day were long depicted as opposite conditions. The electric light, the industrial revolution, and shift work brought many aspects of daily life into the night. The author Charles Dickens lived in London during the time of gas lighting and compared the unstable separation between the waking and sleeping city to the unstable separation he perceived between dream and delusion. Night in contemporary literature offers liminal settings, such as hospitals and gas stations, that contain some aspects of daily life.
Film and photography
Directly filming at night is rarely done. Film stocks and video cameras are much less sensitive to low-light environments than the human eye. During the silent film era, night scenes were filmed during the day in black and white. The sections of the monochrome film reel with exterior night scenes were soaked in an acidic dye that tinted the whole scene blue. "Day for night" is a set of cinematic techniques that simulate a night scene while filming in daylight. They include underexposing to soften the scene, using a graduated neutral-density filter to mute lighting, and setting up the artificial lighting to amplify shadows in the background. Lower-budget films are more likely to use day for night shooting; larger-budget films are more likely to film at night with artificial lighting. Cinematographers have used tinting, filters, color balance settings, and physical lights to color night scenes blue. In low light, people experience the Purkinje effect, which causes reds to dim so that more blue is perceived. As light decreases towards total darkness, the human eye has more scotopic vision, relying more on rod cells and being less able to perceive color.
Night photography can capture the natural colors of night by increasing the exposure time, the length of time over which light is collected for the photograph. Longer exposures open the possibility for photographers to use light painting to selectively illuminate a scene. Digital photography can also make use of high-ISO settings, which increase the sensitivity to light, to take shorter exposure shots. This makes it possible to capture moving subjects without turning their movements into a blur.
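The ISO and exposure-time trade-off follows simple reciprocity: doubling the sensitivity halves the required exposure. A minimal sketch, with a hypothetical function name and example values:

```python
def equivalent_shutter_time(base_time_s, base_iso, new_iso):
    """Shutter time giving the same exposure at a new ISO,
    with aperture held constant (reciprocity assumed)."""
    return base_time_s * (base_iso / new_iso)

# A 30-second night exposure at ISO 100 gathers about as much light as
# a ~1.9-second exposure at ISO 1600, short enough to freeze motion:
print(equivalent_shutter_time(30.0, 100, 1600))
```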
Painting
Dating back to prehistoric cave paintings, artists have used a range of symbols to denote and depict the night sky. Researchers at the Universities of Edinburgh and Kent have proposed that some of the animals painted at prehistoric sites across Europe and Asia Minor, like Çatalhöyük, Lascaux, and the Cave of Altamira, represent not actual animals but prehistoric zodiac signs. The first widely accepted portrayal of the night sky is the Nebra sky disc, created around 1600 BC. In medieval art, astrological signs gave meaning to paintings of night scenes. Adam Elsheimer's paintings on copper plates were some of the earliest realistic depictions of the night sky.
Baroque paintings typically used a darker color scheme than previous painting styles in Europe. From the 17th century, darkness took up larger areas of paintings on average. Changes in the chemical composition of the paint itself and the development of new techniques for representing light led to the tenebrism style of painting. Tenebrism used stark, realistic depictions of light contrasted with darkness to create realistic depictions of night and darkness illuminated by moonlight, candles, and lamps. The work of Baroque painters, like Caravaggio, who painted an entire studio black, was influenced by the alchemical concept of "nigredo", or blackness as connected to death and decomposition. Dutch Golden Age painter Rembrandt recreated the dim light cast by early street lighting by layering translucent brown glazes.
Impressionists represented darkness with shades of brown and blue based on the ideas that true black was not present in nature and that black had a deadening effect on the art. Claude Monet notably avoided black paints. Vincent van Gogh used heavy outlines between panes of color in his paintings, inspired by woodblock printing in Japan. This style, called cloisonné after the metalworking technique that embedded glass between dark lines of wire, was adopted by other painters like Paul Gauguin. As night in Europe became more artificially lit, former railway worker John Atkinson Grimshaw became known for his vibrantly lit urban paintings. In the modern era, painters have variously returned to archetypal symbols to capture the awe of night or painted scenes that emphasize how the modern city separates the viewer from the night sky.
Near Eastern artists initially rejected these techniques, regarding the depiction of shadow as hiding aspects of creation. Mughal painters quickly incorporated techniques to depict night, twilight, and mists. Under Emperor Akbar I, European materials and techniques were imported. Rajasthani paintings combined these with traditional styles and symbolism. Nayikas, depictions of women seeking romantic love, were a common subject and often included night as the setting for romance and peril. Jesuit painter Giuseppe Castiglione brought Renaissance techniques for painting light and shadow to 17th-century China. In pieces like One Hundred Famous Views of Edo, Hiroshige developed techniques to represent shadow and nocturnal light that became widespread in Japanese Meiji-era art. Known for his crowd scenes lit by fireworks, Hiroshige had a strong influence on European painters.
| Physical sciences | Celestial mechanics | null |
206122 | https://en.wikipedia.org/wiki/Big%20Crunch | Big Crunch | The Big Crunch is a hypothetical scenario for the ultimate fate of the universe, in which the expansion of the universe eventually reverses and the universe recollapses, ultimately causing the cosmic scale factor to reach absolute zero, an event potentially followed by a reformation of the universe starting with another Big Bang. The vast majority of evidence, however, indicates that this hypothesis is not correct. Instead, astronomical observations show that the expansion of the universe is accelerating rather than being slowed by gravity, suggesting that a Big Freeze is much more likely to occur. Nonetheless, some physicists have proposed that a "Big Crunch-style" event could result from a dark energy fluctuation.
The hypothesis dates back to 1922, when Russian physicist Alexander Friedmann derived a set of equations showing that the fate of the universe depends on its density: it could either expand or contract rather than stay static. With enough matter, gravity could stop the universe's expansion and eventually reverse it. This reversal would result in the universe collapsing in on itself, not unlike the formation of a black hole.
In the closing stages of a Big Crunch, the universe would be filled with radiation from stars and high-energy particles; compressed and blueshifted to higher energies, this radiation would be intense enough to ignite the surfaces of stars before they collide. In the final moments, the universe would be one large fireball with a near-infinite temperature, and at the absolute end, neither time nor space would remain.
Overview
The Big Crunch scenario hypothesizes that the density of matter throughout the universe is sufficiently high that gravitational attraction will overcome the expansion that began with the Big Bang. The FLRW cosmology can predict whether the expansion will eventually stop based on the average energy density, Hubble parameter, and cosmological constant. If the expansion stopped, then contraction would inevitably follow, accelerating as time passes and finishing the universe in a kind of gravitational collapse, turning the universe into a black hole.
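The recollapse criterion can be made concrete with the Friedmann equation, quoted here as the standard textbook form rather than from this article's sources:

$$H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3}$$

With a negligible cosmological constant ($\Lambda = 0$), expansion eventually halts and reverses only if the average density $\rho$ exceeds the critical density $\rho_c = 3H^2/8\pi G$, that is, if the density parameter $\Omega = \rho/\rho_c > 1$, corresponding to a closed, positively curved universe ($k = +1$).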
Experimental evidence in the late 1990s and early 2000s (namely the observation of distant supernovas as standard candles, and the well-resolved mapping of the cosmic microwave background) led to the conclusion that the expansion of the universe is not being slowed by gravity but is instead accelerating. The 2011 Nobel Prize in Physics was awarded to researchers who contributed to this discovery.
The Big Crunch hypothesis also leads into another hypothesis known as the Big Bounce, in which, after the Big Crunch destroys the universe, it rebounds and produces another Big Bang. This could potentially repeat forever in a phenomenon known as a cyclic universe.
History
Richard Bentley, a churchman and scholar, sent a letter to Isaac Newton in preparation for a lecture on Newton's theories and the rejection of atheism:
If we're in a finite universe and all stars attract each other together, would they not all collapse to a singular point, and if we're in an infinite universe with infinite stars, would infinite forces in every direction not affect all of those stars?
This question is known as Bentley's paradox, an early predecessor of the Big Crunch. It is now known, however, that stars move around and are not static.
Einstein's cosmological constant
Albert Einstein favored an unchanging model of the universe. He collaborated in 1917 with Dutch astronomer Willem de Sitter to help demonstrate that the theory of general relativity would work with a static model; de Sitter demonstrated that the equations could describe a very simple universe. Finding no problems initially, scientists adapted the model to describe the universe. They then ran into a different form of Bentley's paradox.
The theory of general relativity, however, described a restless universe. Einstein realized that a static universe, which was what observations at the time suggested, would require an anti-gravity to counter the gravity pulling the universe together, an extra force that threatened to spoil the equations of the theory of relativity. In the end, he added the cosmological constant, the name given to this anti-gravity force, to the theory of relativity.
Discovery of Hubble's law
Edwin Hubble, working at the Mount Wilson Observatory, took measurements of the distances of galaxies and paired them with Vesto Slipher and Milton Humason's measurements of the redshifts associated with those galaxies. He discovered a rough proportionality between the redshift of an object and its distance. Plotting a trend line from 46 galaxies, Hubble obtained a value for the Hubble constant of 500 km/s/Mpc, nearly seven times the value accepted today, but the result still demonstrated that the universe was expanding and not a static object.
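The proportionality Hubble found is now written as Hubble's law, a standard relation added here for clarity:

$$v = H_0 D$$

where $v$ is a galaxy's recession velocity, $D$ its distance, and $H_0$ the Hubble constant. With Hubble's original value of 500 km/s/Mpc, a galaxy 10 Mpc away would recede at 5,000 km/s; with the roughly 70 km/s/Mpc accepted today, the same galaxy recedes at about 700 km/s.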
Abandonment of the cosmological constant
After Hubble's discovery was published, Einstein abandoned the cosmological constant. In their simplest form, his equations generated a model of the universe that expanded or contracted, which had contradicted what was observed at the time and had motivated the creation of the cosmological constant in the first place. After the confirmation that the universe was expanding, Einstein called his assumption that the universe was static his "biggest mistake". In 1931, Einstein visited Hubble to thank him for "providing the basis of modern cosmology". After this discovery, the static universe models of Einstein and Newton were dropped in favor of the expanding universe model.
Cyclic universes
A hypothesis called "Big Bounce" proposes that the universe could collapse to the state where it began and then initiate another Big Bang, so in this way, the universe would last forever but would pass through phases of expansion (Big Bang) and contraction (Big Crunch). This means that there may be a universe in a state of constant Big Bangs and Big Crunches.
Cyclic universes were briefly considered by Albert Einstein in 1931. He hypothesized that there was a universe before the Big Bang, which ended in a Big Crunch, which could create a Big Bang as a reaction. Our universe could be in a cycle of expansion and contraction, a cycle possibly going on infinitely.
Ekpyrotic model
There are more modern models of cyclic universes as well. The ekpyrotic model, developed by Paul Steinhardt, states that the Big Bang could have been caused by two parallel orbifold planes, referred to as branes, colliding in a higher-dimensional space. The four-dimensional universe lies on one of the branes. The collision corresponds to a Big Crunch, then a Big Bang. The matter and radiation around us today are quantum fluctuations from before the collision of the branes. After several billion years, the universe has reached its modern state, and it will start contracting in another several billion years. Dark energy corresponds to the force between the branes, allowing problems with the previous models, like the flatness and monopole problems, to be fixed. The cycles can also extend infinitely into the past and the future, and an attractor allows for a complete history of the universe.
This fixes a problem with the earlier cyclic models, in which the universe drifted into heat death from entropy buildup; the new model avoids this with a net expansion after every cycle, preventing entropy from accumulating. There are still some flaws in this model, however. The basis of the model, branes, is still not completely understood by string theorists, and the scale-invariant spectrum could possibly be destroyed by the Big Crunch. And while the general character of the forces required to produce the vacuum fluctuations, whether cosmic inflation or the collision of branes in the ekpyrotic model, is known, a candidate from particle physics is missing.
Conformal Cyclic Cosmology (CCC) model
Physicist Roger Penrose advanced a general relativity-based theory called the conformal cyclic cosmology in which the universe expands until all the matter decays and is turned to light. Since nothing in the universe would have any time or distance scale associated with it, it becomes identical with the Big Bang (resulting in a type of Big Crunch that becomes the next Big Bang, thus starting the next cycle). Penrose and Gurzadyan suggested that signatures of conformal cyclic cosmology could potentially be found in the cosmic microwave background; as of 2020, these have not been detected.
There are some flaws with this model as well: skeptics have pointed out that matching an infinitely large universe to an infinitely small one requires all particles to lose their mass as the universe ages. Penrose presented evidence of CCC in the form of rings of uniform temperature in the CMB, the idea being that these rings, appearing as a signature in our aeon (the current cycle of the universe), were caused by spherical gravitational waves from colliding black holes in the previous aeon.
Loop quantum cosmology (LQC)
Loop quantum cosmology is a model of the universe that proposes a "quantum bridge" between expanding and contracting universes. In this model, quantum geometry creates a new force that is negligible at low spacetime curvature but rises very rapidly in the Planck regime, overwhelming classical gravity and resolving the singularities of general relativity. Once the singularities are resolved, the conceptual paradigm of cosmology changes, forcing one to revisit the standard issues, such as the horizon problem, from a new perspective.
Under this model, due to quantum geometry, the Big Bang is replaced by a Big Bounce with no assumptions or fine tuning. The approach of effective dynamics has been used extensively in loop quantum cosmology to describe physics at the Planck scale and the beginning of the universe, and numerical simulations have confirmed its validity as a good approximation of the full loop quantum dynamics. It has been shown that, for states with very large quantum fluctuations at late times, meaning states that do not lead to macroscopic universes as described by general relativity, the effective dynamics departs from the quantum dynamics near the bounce and in the later universe. In this case, the effective dynamics overestimates the density at the bounce, but it still captures qualitative aspects extremely well.
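The bounce is commonly summarized by the effective Friedmann equation of loop quantum cosmology, reproduced here as a standard result rather than from this article's sources:

$$H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right)$$

where $\rho_c$ is a critical density of the order of the Planck density. When $\rho \ll \rho_c$ the correction term is negligible and classical general relativity is recovered; at $\rho = \rho_c$ the Hubble rate $H$ vanishes and contraction turns into expansion, replacing the Big Bang singularity with a bounce.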
Empirical scenarios from physical theories
If dark energy is (mainly) explained by a form of quintessence driven by a scalar field evolving down a monotonically decreasing potential that passes sufficiently below zero, and if current observational constraints on dark energy hold, then the accelerating expansion of the Universe would reverse into contraction within the cosmic near-future of the next 100 million years. According to a study by Andrei, Ijjas, and Steinhardt, the scenario fits "naturally with cyclic cosmologies and recent conjectures about quantum gravity". The study suggests that the slow contraction phase would "endure for a period of order 1 billion y before the universe transitions to a new phase of expansion".
Effects
Paul Davies considered a scenario in which the Big Crunch happens about 100 billion years from the present. In his model, the contracting universe would evolve roughly like the expanding phase in reverse. First, galaxy clusters, and then galaxies, would merge, and the temperature of the cosmic microwave background (CMB) would begin to rise as CMB photons get blueshifted. Stars would eventually become so close together that they begin to collide with each other. Once the CMB becomes hotter than M-type stars (about 500,000 years before the Big Crunch in Davies' model), they would no longer be able to radiate away their heat and would cook themselves until they evaporate; this continues for successively hotter stars until O-type stars boil away about 100,000 years before the Big Crunch. In the last minutes, the temperature of the universe would be so great that atoms and atomic nuclei would break up and get sucked up into already coalescing black holes. At the time of the Big Crunch, all the matter in the universe would be crushed into an infinitely hot, infinitely dense singularity similar to the Big Bang. The Big Crunch may be followed by another Big Bang, creating a new universe.
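The rising CMB temperature in such a scenario follows directly from how radiation scales with the cosmic scale factor $a(t)$, a standard relation added for clarity:

$$T(t) = \frac{T_0}{a(t)}$$

where $T_0 \approx 2.7\ \mathrm{K}$ is the present CMB temperature and $a = 1$ today. In an expanding universe $a$ grows and the radiation cools; in a contracting one $a$ shrinks and the photons blueshift, so a universe recollapsed to half its present scale would have a CMB near 5.5 K, with the temperature climbing toward stellar temperatures as the contraction proceeds.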
In culture
In The Restaurant at the End of the Universe, a novel by Douglas Adams, the concept is that a restaurant, Milliways, is set up to allow patrons to observe the end of the Universe, or "Gnab Gib", as it is referred to, as they dine. The term is sometimes used in the mainstream, for example (as "gnaB giB") in Physics I For Dummies and in a posting discussing the Big Crunch.
| Physical sciences | Physical cosmology | Astronomy |
206230 | https://en.wikipedia.org/wiki/Coonhound | Coonhound | A coonhound, colloquially a coon dog, is a type of scenthound, a member of the hound group. They are an American type of hunting dog developed for the hunting of raccoons and also for feral pigs, bobcats, cougars, and bears. There are six distinct breeds of coonhound.
History
In the colonial period, hounds were imported into North America for the popular sport of fox hunting. Various breeds of foxhounds and other hunting hounds were imported from England, Ireland, and France.
Foxhounds were found to be inadequate for hunting American animals that did not hide near the ground, but instead climbed trees, such as raccoons, opossums, bobcats, and even larger prey like cougars and bears. The dogs were often confused or unable to hold the scent when this occurred, and would mill about.
This led to the development of treeing hounds by hunters and dog breeders. Foundation dogs were chosen for a keen sense of smell, the ability to track an animal independent of human commands and, most importantly, to follow an animal both on the ground and when it took to a tree. Bloodhounds specifically were added to many coonhound lines to enhance the ability to track.
Coonhounds can hunt individually or as a pack. Often, hunters do not chase their quarry along with the hounds, unlike organized foxhunting, but wait and listen to the distinctive baying to determine if the prey has been treed. Coonhounds are excellent at hunting all manner of prey if trained properly.
Memorial
Established in 1937, the Key Underwood Coon Dog Memorial Graveyard is located in Colbert County, Alabama. It is used specifically for the burial of certified coonhounds.
Breeds
There are six breeds of coonhound, all of which were first recognized by the United Kennel Club:
The first to be officially registered was the Black and Tan Coonhound in 1900.
It was followed by the solid red Redbone Coonhound in 1902.
The third is the English Coonhound, recognized by the UKC in 1905. The English has the widest color variation of the coonhound breeds, coming in redtick, bluetick, and tricolor patterns.
The Bluetick Coonhound and tricolored Treeing Walker Coonhound were originally considered varieties of the English, but were split off and recognized as different breeds by 1946 and 1945, respectively.
The Plott Hound, a dark brindle in color, was the last to be recognized, in 1946. It is the only coonhound that does not descend from foxhounds; instead, its ancestry traces back to German boar-hunting dogs.
The Black and Tan Coonhound was the first to be recognized by the American Kennel Club, in 1946. The other coonhound breeds were not able to be AKC-registered until the 2000s; the Redbone and Bluetick Coonhounds were both recognized in 2009, the English in 2011 (as the American English), and the Treeing Walker in 2012.
In 2008, the UKC recognized the American Leopard Hound as a scenthound breed. It is used for hunting raccoons, as well as other game animals.
Health
As a breed that is often used to hunt raccoons, coonhounds are susceptible to "coonhound paralysis", or more accurately, acute canine idiopathic polyradiculoneuritis (ACIP). This condition is often the result of a dog coming into contact with a raccoon's saliva, typically through a scratch or bite, though some cases do not involve raccoons at all. Despite the name, any breed of dog can contract the disease, but it is more commonly associated with coonhounds due to their use as raccoon hunting dogs. The disease is compared to Guillain-Barré syndrome in humans, resulting in progressive atrophy of the leg muscles, starting with the rear legs and moving forward, and in some cases impacting the respiratory muscles.
A study of the patient records of 90,000 dogs found coonhounds to be predisposed to atopy/allergic dermatitis, with 8.33% of coonhounds having the condition compared to 1.08% of mixed-breeds.
| Biology and health sciences | Dogs | Animals |
206242 | https://en.wikipedia.org/wiki/Differential%20%28mechanical%20device%29 | Differential (mechanical device) | A differential is a gear train with three drive shafts that has the property that the rotational speed of one shaft is the average of the speeds of the others. A common use of differentials is in motor vehicles, to allow the wheels at each end of a drive axle to rotate at different speeds while cornering. Other uses include clocks and analogue computers.
Differentials can also provide a gear ratio between the input and output shafts (called the "axle ratio" or "diff ratio"). For example, many differentials in motor vehicles provide a gearing reduction by having fewer teeth on the pinion than the ring gear.
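Both properties can be written compactly; these are standard gear-train kinematics, and the tooth counts in the example are hypothetical:

$$\omega_{\mathrm{carrier}} = \frac{\omega_{\mathrm{left}} + \omega_{\mathrm{right}}}{2}, \qquad \text{axle ratio} = \frac{N_{\mathrm{ring}}}{N_{\mathrm{pinion}}}$$

For example, a 41-tooth ring gear driven by a 13-tooth pinion gives an axle ratio of about 3.15:1, and however the two wheel speeds differ in a corner, their average always equals the speed of the ring-gear carrier.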
History
Milestones in the design or use of differentials include:
100 BCE–70 BCE: The Antikythera mechanism has been dated to this period. It was discovered in 1902 on a shipwreck by sponge divers, and modern research suggests that it used a differential gear to determine the angle between the ecliptic positions of the Sun and Moon, and thus the phase of the Moon.
3rd century CE: Chinese engineer Ma Jun creates the first well-documented south-pointing chariot, a precursor to the compass. Its mechanism of action is unclear, though some 20th-century engineers have argued that it used a differential gear.
1810: Rudolph Ackermann of Germany invents a four-wheel steering system for carriages, which some later writers mistakenly report as a differential.
1823: Aza Arnold develops a differential drive train for use in cotton-spinning. The design quickly spreads across the United States and into the United Kingdom.
1827: Modern automotive differential patented by watchmaker Onésiphore Pecqueur (1792–1852) of the Conservatoire National des Arts et Métiers in France for use on a steam wagon.
1874: Aveling and Porter of Rochester, Kent list a crane locomotive in their catalogue fitted with their patent differential gear on the rear axle.
1876: James Starley of Coventry invents chain-drive differential for use on bicycles; invention later used on automobiles by Karl Benz.
1897: While building his Australian steam car, David Shearer made the first use of a differential in a motor vehicle.
1958: Vernon Gleasman patents the Torsen limited-slip differential.
Use in wheeled vehicles
Purpose
During cornering, the outer wheels of a vehicle must travel further than the inner wheels (since they are on a larger radius). This is easily accommodated when the wheels are not connected; however, it becomes more difficult for the drive wheels, since both are connected to the engine (usually via a transmission). Some vehicles (for example go-karts and trams) use axles without a differential, thus relying on wheel slip when cornering. However, for improved cornering abilities, many vehicles use a differential, which allows the two wheels to rotate at different speeds.
The purpose of a differential is to transfer the engine's power to the wheels while still allowing the wheels to rotate at different speeds when required. An illustration of the operating principle for a ring-and-pinion differential is shown below.
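As a numerical sketch of that principle, the code below computes inner and outer wheel speeds for a steady corner and checks that their average equals the carrier speed; the function name and vehicle dimensions are illustrative assumptions, not values from this article:

```python
import math

def wheel_speeds_rpm(speed_mps, turn_radius_m, track_width_m, wheel_radius_m):
    """Inner and outer drive-wheel speeds (rpm) in a steady corner.

    The wheels follow arcs of radius R -/+ track/2, so they must turn at
    different rates; an open differential keeps their average equal to
    the carrier (ring gear) speed.
    """
    yaw_rate = speed_mps / turn_radius_m  # rad/s about the corner center
    v_inner = yaw_rate * (turn_radius_m - track_width_m / 2)
    v_outer = yaw_rate * (turn_radius_m + track_width_m / 2)
    to_rpm = 60 / (2 * math.pi * wheel_radius_m)
    return v_inner * to_rpm, v_outer * to_rpm

inner, outer = wheel_speeds_rpm(10.0, 20.0, 1.6, 0.3)  # 10 m/s, 20 m corner
print(round(inner), round(outer), round((inner + outer) / 2))  # 306 331 318
```

The carrier turns at the mean (318 rpm here), exactly the speed both wheels would share on a straight road at the same vehicle speed.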
Ring-and-pinion design
A relatively simple design of differential is used in rear-wheel drive vehicles, whereby a ring gear is driven by a pinion gear connected to the transmission. The functions of this design are to change the axis of rotation by 90 degrees (from the propshaft to the half-shafts) and provide a reduction in the gear ratio.
The numbered components of the ring-and-pinion differential shown in the schematic diagram are: (1) output shafts (axles); (2) drive gear; (3) output gears; (4) planetary gears; (5) carrier; (6) input gear; (7) input shaft (driveshaft).
Epicyclic design
An epicyclic differential uses epicyclic gearing to send certain proportions of torque to the front axle and the rear axle in an all-wheel drive vehicle. An advantage of the epicyclic design is its relatively compact width (when viewed along the axis of its input shaft).
Spur-gear design
A spur-gear differential has equal-sized spur gears at each end, each of which is connected to an output shaft. The input torque (i.e. from the engine or transmission) is applied to the differential via the rotating carrier. Pinion pairs are located within the carrier and rotate freely on pins supported by the carrier. The pinion pairs only mesh for the part of their length between the two spur gears, and rotate in opposite directions. The remaining length of a given pinion meshes with the nearer spur gear on its axle. Each pinion connects the associated spur gear to the other spur gear (via the other pinion). As the carrier is rotated (by the input torque), the relationship between the speeds of the input (i.e. the carrier) and that of the output shafts is the same as other types of open differentials.
Uses of spur-gear differentials include the Oldsmobile Toronado, an American front-wheel drive car.
Locking differentials
Locking differentials have the ability to overcome the chief limitation of a standard open differential by essentially "locking" both wheels on an axle together as if on a common shaft. This forces both wheels to turn in unison, regardless of the traction (or lack thereof) available to either wheel individually. When this function is not required, the differential can be "unlocked" to function as a regular open differential.
Locking differentials are mostly used on off-road vehicles, to overcome low-grip and variable grip surfaces.
Limited-slip differentials
An undesirable side-effect of a regular ("open") differential is that it can send most of the power to the wheel with the lesser traction (grip). In situations where one wheel has reduced grip (e.g. due to cornering forces or a low-grip surface under one wheel), an open differential can cause wheelspin in the tyre with less grip, while the tyre with more grip receives very little power to propel the vehicle forward.
In order to avoid this situation, various designs of limited-slip differentials are used to limit the difference in power sent to each of the wheels.
Torque vectoring
Torque vectoring is a technology employed in automobile differentials that has the ability to vary the torque to each half-shaft with an electronic system; or in rail vehicles which achieve the same using individually motored wheels. In the case of automobiles, it is used to augment the stability or cornering ability of the vehicle.
Other uses
Non-automotive uses of differentials include performing analogue arithmetic. Two of the differential's three shafts are made to rotate through angles that represent (are proportional to) two numbers, and the angle of the third shaft's rotation represents the sum or difference of the two input numbers. The earliest known use of a differential gear is in the Antikythera mechanism, c. 80 BCE, which used a differential gear to control a small sphere representing the Moon from the difference between the Sun and Moon position pointers. The ball was painted black and white in hemispheres, and graphically showed the phase of the Moon at a particular point in time. An equation clock that used a differential for addition was made in 1720. In the 20th century, large assemblies of many differentials were used as analogue computers, calculating, for example, the direction in which a gun should be aimed.
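Mechanically, the carrier of a symmetric differential turns through the half-sum of the two side-shaft angles, a standard relation stated here for clarity:

$$\theta_{\mathrm{carrier}} = \frac{\theta_1 + \theta_2}{2}$$

A 2:1 gear stage on the carrier output therefore yields the sum $\theta_1 + \theta_2$ directly, and reversing the sense of one input turns the same mechanism into a subtractor giving the difference, the arrangement the Antikythera mechanism appears to have used for the Sun-Moon angle.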
Compass-like devices
Chinese south-pointing chariots may also have been very early applications of differentials. The chariot had a pointer which constantly pointed to the south, no matter how the chariot turned as it travelled. It could therefore be used as a type of compass. It is widely thought that a differential mechanism responded to any difference between the speeds of rotation of the two wheels of the chariot, and turned the pointer appropriately. However, the mechanism was not precise enough, and, after a few miles of travel, the dial could be pointing in the wrong direction.
Clocks
The earliest verified use of a differential was in a clock made by Joseph Williamson in 1720. It employed a differential to add the equation of time to local mean time, as determined by the clock mechanism, to produce solar time, which would have been the same as the reading of a sundial. During the 18th century, sundials were considered to show the "correct" time, so an ordinary clock would frequently have to be readjusted, even if it worked perfectly, because of seasonal variations in the equation of time. Williamson's and other equation clocks showed sundial time without needing readjustment. Nowadays, we consider clocks to be "correct" and sundials usually incorrect, so many sundials carry instructions about how to use their readings to obtain clock time.
Analogue computers
Differential analysers, a type of mechanical analogue computer, were used from approximately 1900 to 1950. These devices used differential gear trains to perform addition and subtraction.
Vehicle suspension
The Mars rovers Spirit and Opportunity (both launched in 2004) used differential gears in their rocker-bogie suspensions to keep the rover body balanced as the wheels on the left and right move up and down over uneven terrain. The Curiosity and Perseverance rovers used a differential bar instead of gears to perform the same function.
| Technology | Mechanisms | null |
206410 | https://en.wikipedia.org/wiki/Rail%20%28bird%29 | Rail (bird) | Rails (avian family Rallidae) are a large, cosmopolitan family of small- to medium-sized terrestrial and/or semi-amphibious birds. The family exhibits considerable diversity in its forms, and includes such ubiquitous species as the crakes, coots, and gallinules; other rail species are extremely rare or endangered. Many are associated with wetland habitats, some being semi-aquatic like waterfowl (such as the coot), but many more are wading birds or shorebirds. The ideal rail habitats are marsh areas, including rice paddies, and flooded fields or open forest. They are especially fond of dense vegetation for nesting. The rail family is found in every terrestrial habitat with the exception of dry desert, polar or freezing regions, and alpine areas (above the snow line). Members of Rallidae occur on every continent except Antarctica. Numerous unique island species are known.
Name
"Rail" is the anglicized respelling of the French râle, from Old French rasle. It is named from its harsh cry, in Vulgar Latin *rascula, from Latin rādere ("to scrape").
Morphology
The rails are a family of small to medium-sized, ground-living birds, spanning a wide range of lengths and weights. Some species have long necks and in many cases are laterally compressed.
The bill is the most variable feature within the family. In some species, it is longer than the head (like the clapper rail of the Americas); in others, it may be short and wide (as in the coots), or massive (as in the purple gallinules). A few coots and gallinules have a frontal shield, which is a fleshy, rearward extension of the upper bill. The most complex frontal shield is found in the horned coot.
Rails exhibit very little sexual dimorphism in either plumage or size. Two exceptions are the watercock (Gallicrex cinerea) and the little crake (Zapornia parva).
Flight and flightlessness
The wings of all rails are short and rounded. The flight of those Rallidae able to fly, while not powerful, can be sustained for long periods of time, and many species migrate annually. The weakness of their flight, however, means they are easily blown off course, thus making them common vagrants, a characteristic that has led them to colonize many isolated oceanic islands. Furthermore, these birds often prefer to run rather than fly, especially in dense habitat. Some are also flightless at some time during their moult periods.
Flightlessness in rails is one of the best examples of parallel evolution in the animal kingdom. Of the roughly 150 historically known rail species, 31 extant or recently extinct species evolved flightlessness from volant (flying) ancestors. This process created the endemic populations of flightless rails seen on Pacific islands today.
Many island rails are flightless because small island habitats without mammalian predators eliminate the need to fly or move long distances. Flight makes intense demands, with the keel and flight muscles taking up to 40% of a bird's weight. Reducing the flight muscles, with a corresponding lowering of metabolic demands, reduces the flightless rail's energy expenditures. For this reason, flightlessness makes it easier to survive and colonize an island where resources may be limited. This also allows for the evolution of multiple sizes of flightless rails on the same island as the birds diversify to fill niches.
In addition to energy conservation, certain morphological traits also affect rail evolution. Rails have relatively small flight muscles and wings to begin with. In rails, the flight muscles make up only 12–17% of their overall body mass. This, in combination with their terrestrial habits and behavioral flightlessness, is a significant contributor to the rail's remarkably fast loss of flight; as few as 125,000 years were needed for the Laysan rail to lose the power of flight and evolve the reduced, stubby wings only useful to keep balance when running quickly. Indeed, some argue that measuring the evolution of flightlessness in rails in generations rather than millennia might be possible.
Another factor that contributes to the occurrence of the flightless state is a climate that does not necessitate seasonal long-distance migration; this is evidenced by the tendency to evolve flightlessness at a much greater occurrence in tropical islands than in temperate or polar islands.
It is paradoxical, since rails appear loath to fly, that the evolution of flightless rails would necessitate high dispersal to isolated islands. Nonetheless, three species of small-massed rails, Gallirallus philippensis, Porphyrio porphyrio, and Porzana tabuensis, exhibit a persistently high ability to disperse long distances among tropical Pacific islands, though only the latter two gave rise to flightless endemic species throughout the Pacific Basin. The phylogeny of G. philippensis shows that, although the species is clearly polyphyletic (it has more than one ancestral species), it is not the ancestor of most of its flightless descendants, revealing that the flightless condition evolved in rails before speciation was complete.
A consequence of lowered energy expenditure in flightless island rails has also been associated with evolution of their "tolerance" and "approachability". For example, the (non-Rallidae) Corsican blue tits exhibit lower aggression and reduced territorial defense behaviors than do their mainland European counterparts, but this tolerance may be limited to close relatives. The resulting kin-selecting altruistic phenomena reallocate resources to produce fewer young that are more competitive and would benefit the population as an entirety, rather than many young that would exhibit less fitness. Unfortunately, with the human occupation of most islands in the past 5,000 to 35,000 years, selection has undoubtedly reversed the tolerance into a wariness of humans and predators, causing species unequipped for the change to become susceptible to extinction.
Behaviour and ecology
In general, members of the Rallidae are omnivorous generalists. Many species eat invertebrates, as well as fruit or seedlings. A few species are primarily herbivorous. The calls of Rallidae species vary and are often quite loud. Some are whistle-like or squeak-like, while others seem unbirdlike. Loud calls are useful in dense vegetation, or at night where seeing another member of the species is difficult. Some calls are territorial.
The most typical family members occupy dense vegetation in damp environments near lakes, swamps, or rivers. Reed beds are a particularly favoured habitat. Those that migrate do so at night.
Most nest in dense vegetation. In general, they are shy, secretive, and difficult to observe. Most species walk and run vigorously on strong legs, and have long toes that are well adapted to soft, uneven surfaces. They tend to have short, rounded wings, and although they are generally weak fliers, they are, nevertheless, capable of covering long distances. Island species often become flightless, and many of them are now extinct following the introduction of terrestrial predators such as cats, foxes, weasels, mongooses, rats, and pigs.
Many reedbed species are secretive (apart from loud calls), crepuscular, and have laterally flattened bodies. In the Old World, long-billed species tend to be called rails and short-billed species crakes. North American species are normally called rails irrespective of bill length. The smallest of these is Swinhoe's rail, at and 25 g. The larger species are also sometimes given other names. The black coots are more adapted to open water than their relatives, and some other large species are called gallinules and swamphens. The largest of this group is the takahē, at and .
The rails have suffered disproportionately from human changes to the environment, and an estimated several hundred species of island rails have become extinct because of this. Several island species of rails remain endangered, and conservation organisations and governments continue to work to prevent their extinction.
Reproduction
The breeding behaviors of many Rallidae species are poorly understood or unknown. Most are thought to be monogamous, although polygyny and polyandry have been reported. Most often, they lay five to ten eggs. Clutches as small as one or as large as 15 eggs are known. Egg clutches may not always hatch at the same time. Chicks become mobile after a few days. They often depend on their parents until fledging, which happens around one month old.
Rallidae and humans
Some larger, more abundant rails are hunted and their eggs collected for food. The Wake Island rail was hunted to extinction by the starving Japanese garrison after the island was cut off from supply during World War II. At least two species, the common moorhen and the American purple gallinule, have been considered pests.
Threats and conservation
Due to their tendencies towards flightlessness, many island species have been unable to cope with introduced species. The most dramatic human-caused extinctions occurred in the Pacific Ocean as people colonised the islands of Melanesia, Polynesia, and Micronesia, during which an estimated 750–1800 species of birds became extinct, half of which were rails. Some species that came close to extinction, such as the Lord Howe woodhen, and the takahē, have made modest recoveries due to the efforts of conservation organisations. The Guam rail came perilously close to extinction when brown tree snakes were introduced to Guam, but some of the last remaining individuals were taken into captivity and are breeding well, though attempts at reintroduction have met with mixed results.
Systematics and evolution
The family Rallidae was introduced (as Rallia) by the French polymath Constantine Samuel Rafinesque in 1815.
The family has traditionally been grouped with two families of larger birds, the cranes and bustards, as well as several smaller families of usually "primitive" midsized amphibious birds, to make up the order Gruiformes. The alternative Sibley-Ahlquist taxonomy, which has been widely accepted in America, raises the family to ordinal level as the Ralliformes. Given uncertainty about gruiform monophyly, this may or may not be correct; it certainly seems more justified than most of the Sibley-Ahlquist proposals. However, such a group would probably also include the Heliornithidae (finfoots and sungrebes), an exclusively tropical group that is somewhat convergent with grebes, and usually united with the rails in the Ralli.
The cladogram below showing the phylogeny of the living and recently extinct Rallidae is based on a study by Juan Garcia-R and collaborators published in 2020. The genera and number of species are taken from the list maintained by Frank Gill, Pamela Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC). The names of the subfamilies and tribes are those proposed by Jeremy Kirchman and collaborators in 2021.
Extant genera
The list maintained on behalf of the International Ornithological Committee (IOC) contains 152 species divided into 43 genera. For more detail, see List of rail species.
Canirallus – grey-throated rail
Mustelirallus – (4 species)
Pardirallus (3 species)
Amaurolimnas – uniform crake
Aramides – wood rails (8 species)
Rallus – typical rails (14 species)
Crecopsis – African crake
Rougetius – Rouget's rail
Dryolimnas – (1 living species, 1 recently extinct)
Crex – corn crake
Aramidopsis – snoring rail
Lewinia – (4 species)
Aptenorallus – Calayan rail
Habroptila – invisible rail
Gallirallus – weka
Eulabeornis – chestnut rail
Cabalus – (1 possibly extinct species, 1 recently extinct)
Hypotaenidia – Austropacific rails (8 living species, 4 recently extinct)
Porphyriops – spot-flanked gallinule
Porzana – (3 species)
Tribonyx – nativehens (2 species)
Paragallinula – lesser moorhen
Gallinula – moorhens (5 living species, 2 recently extinct)
Fulica – coots (10 living species, one recently extinct)
Porphyrio – swamphens and purple gallinules (10 living species, 2 recently extinct)
Micropygia – ocellated crake
Rufirallus – (2 species)
Coturnicops – (3 species)
Laterallus – (13 species)
Zapornia – (10 living species, 5 recently extinct)
Rallina – (4 species)
Gymnocrex – (3 species)
Himantornis – Nkulengu rail
Megacrex – New Guinea flightless rail
Poliolimnas – white-browed crake
Aenigmatolimnas – striped crake
Gallicrex – watercock
Amaurornis – bush-hens (5 species)
Additionally, many prehistoric rails of extant genera are known only from fossil or subfossil remains, such as the Ibiza rail (Rallus eivissensis). These have not been listed here; see the genus accounts and the articles on fossil and Late Quaternary prehistoric birds for these species.
Recently extinct genera
Mundia – Ascension crake (recently extinct; flightless, single island, lost by early 1800s to introduced cats and rats)
Aphanocrex – Saint Helena rail (recently extinct; flightless, single island, lost by 1500s to introduced cats and rats)
Diaphorapteryx – Hawkins's rail (recently extinct; flightless, two islands, lost between 1500 and 1700 to overhunting)
Aphanapteryx – Red rail (recently extinct; flightless, single island, lost by 1700 to overhunting and introduced pigs, cats and rats)
Erythromachus – Rodrigues rail (recently extinct; flightless, single island, lost by 1760 to overhunting, destruction of habitat by tortoise hunters, and introduced cats)
Cabalus – Chatham rail and New Caledonian rail (sometimes included in Gallirallus; extinct around 1900)
Capellirallus – Snipe-rail (recently extinct; flightless, single island, lost by no later than 1400s to introduced rats)
Vitirallus – Viti Levu rail (recently extinct; flightless, single island, lost by no later than early Holocene)
Hovacrex – Hova gallinule (recently extinct; flight ability uncertain, single island, lost by no later than Late Pleistocene)
The undescribed Fernando de Noronha rail, genus and species undetermined, survived to historic times. The extinct genus Nesotrochis from the Greater Antilles was formerly considered to be a rail, but based on DNA evidence is now known to be an independent gruiform lineage more closely related to the Sarothruridae and the adzebills.
Fossil record
Fossil species of long-extinct prehistoric rails are richly documented from the well-researched formations of Europe and North America, as well as from the less comprehensively studied strata elsewhere:
Genus Eocrex (Wasatch Early Eocene of Steamboat Springs, USA; Late Eocene – ?Oligocene of Isfara, Tadzhikistan)
Genus Palaeorallus (Wasatch Early Eocene of Wyoming, USA)
Genus Parvirallus (Early – Middle Eocene of England)
Genus Aletornis (Bridger Middle Eocene of Uinta County, USA) – includes Protogrus
Genus Fulicaletornis (Bridger Middle Eocene of Henry's Fork, USA)
Genus Latipons (Middle Eocene of Lee-on-Solent, England)
Genus Ibidopsis (Hordwell Late Eocene of Hordwell, UK)
Genus Quercyrallus (Late Eocene -? Late Oligocene of France)
Genus Belgirallus (Early Oligocene of WC Europe)
Genus Rallicrex (Corbula Middle/Late Oligocene of Kolozsvár, Romania)
Rallidae gen. et sp. indet. (Late Oligocene of Billy-Créchy, France)
Genus Palaeoaramides (Late Oligocene/Early Miocene – Late Miocene of France)
Genus Rhenanorallus (Late Oligocene/Early Miocene of Mainz Basin, Germany)
Genus Paraortygometra (Late Oligocene/?Early Miocene -? Middle Miocene of France) – includes Microrallus
Genus Australlus (Late Oligocene – Middle Miocene of NW Queensland, Australia)
Genus Pararallus (Late Oligocene? – Late Miocene of C Europe) – possibly belongs in Palaeoaramides
Genus Litorallus (Early Miocene of New Zealand)
Rallidae gen. et sp. indet. (Bathans Early/Middle Miocene of Otago, New Zealand)
Rallidae gen. et sp. indet. (Bathans Early/Middle Miocene of Otago, New Zealand)
Genus Miofulica (Anversian Black Sand Middle Miocene of Antwerp, Belgium)
Genus Miorallus (Middle Miocene of Sansan, France -? Late Miocene of Rudabánya, Hungary)
Genus Youngornis (Shanwang Middle Miocene of Linqu, China)
Rallidae gen. et sp. indet. (Sajóvölgyi Middle Miocene of Mátraszőlős, Hungary)
Rallidae gen. et sp. indet. (Middle Miocene of Grive-Saint-Alban, France)
Rallidae gen. et sp. indet. (Late Miocene of Lemoyne Quarry, USA)
Rallidae gen. et sp. indet. UMMP V55013-55014; UMMP V55012/V45750/V45746 (Rexroad Late Pliocene of Saw Rock Canyon, USA)
Rallidae gen. et sp. indet. UMMP V29080 (Rexroad Late Pliocene of Fox Canyon, USA)
Genus Creccoides (Blanco Late Pliocene/Early Pleistocene of Crosby County, USA)
Rallidae gen. et sp. indet. (Bermuda, West Atlantic)
Rallidae gen. et sp. indet. (formerly Fulica podagrica) (Late Pleistocene of Barbados)
Genus Pleistorallus (mid-Pleistocene New Zealand). The holotype of Pleistorallus flemingi is in the collection of the Museum of New Zealand Te Papa Tongarewa.
Doubtfully placed here
These taxa may or may not have been rails:
Genus Ludiortyx (Late Eocene) – includes "Tringa" hoffmanni, "Palaeortyx" blanchardi, "P." hoffmanni
Genus Telecrex (Irdin Manha Late Eocene of Chimney Butte, China)
Genus Amitabha (Bridger middle Eocene of Forbidden City, USA) – phasianid?
Genus Palaeocrex (Early Oligocene of Trigonias Quarry, USA)
Genus Rupelrallus (Early Oligocene of Germany)
Neornithes incerta sedis (Late Oligocene of Riversleigh, Australia)
Genus Euryonotus (Pleistocene of Argentina)
The presumed scolopacid wader Limosa gypsorum (Montmartre Late Eocene of France) is sometimes considered a rail and then placed in the genus Montirallus.
| Biology and health sciences | Gruiformes | null |
206491 | https://en.wikipedia.org/wiki/Bustard | Bustard | Bustards, including floricans and korhaans, are large, terrestrial birds living mainly in dry grassland areas and in steppe regions. They range in length from . They make up the family Otididae (formerly known as Otidae).
Bustards are omnivorous and opportunistic, eating leaves, buds, seeds, fruit, small vertebrates, and invertebrates. There are 26 species currently recognised.
Etymology
The word bustard comes from the Old French bistarda, with cognate forms in other languages, such as Portuguese and Galician abetarda and Spanish avutarda, used for the great bustard. The naturalist William Turner listed the English spellings "bustard" and "bistard" in 1544.
All of the common names above are derived from the Latin avis tarda or aves tardas given by Pliny the Elder; these names were mentioned by Pierre Belon in 1555 and Ulisse Aldrovandi in 1600. The word tarda comes from the Latin tardus, meaning "slow" or "deliberate", which aptly describes the typical walking style of these birds.
Floricans
Some Indian bustards are also called floricans. The origin of the name is unclear. Thomas C. Jerdon writes in The Birds of India (1862)
The Hobson-Jobson dictionary, however, casts doubt on this theory stating that
Taxonomy
The family Otididae was introduced (as Otidia) by the French polymath Constantine Samuel Rafinesque in 1815. Otididae, and before that Otidae, derive from the genus Otis, given to the great bustard by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae in 1758; the name comes from the Greek word ōtis.
Family Otididae
Extinct genera
Genus †Gryzaja Zubareva 1939
†Gryzaja odessana Zubareva 1939
Genus †Ioriotis Burchak-Abramovich & Vekua 1981
†Ioriotis gabunii Burchak-Abramovich & Vekua 1981
Genus †Miootis Umanskaya 1979
†Miootis compactus Umanskaya 1979
Genus †Pleotis Hou 1982
†Pleotis liui Hou 1982
Description
Bustards are all fairly large, with the two largest species, the kori bustard (Ardeotis kori) and the great bustard (Otis tarda), frequently cited as the world's heaviest flying birds. In both of the largest species, large males exceed a weight of , weigh around on average and can attain a total length of . The smallest species is the little brown bustard (Eupodotis humilis), which is around long and weighs around on average. In most bustards, males are substantially larger than females, often about 30% longer and sometimes more than twice the weight. They are among the most sexually dimorphic groups of birds. Only in the floricans is the sexual dimorphism reversed, with the adult female being slightly larger and heavier than the male.
The wings have 10 primaries and 16–24 secondary feathers. There are 18–20 feathers in the tail. The plumage is predominantly cryptic.
Behaviour and ecology
Bustards are omnivorous, feeding principally on seeds and invertebrates. They make their nests on the ground, which often leaves their eggs and offspring very vulnerable to predation. They walk steadily on strong legs and big toes, pecking for food as they go. Most prefer running or walking to flying. They have long broad wings with "fingered" wingtips, and striking patterns in flight. Many have interesting mating displays, such as inflating throat sacs or elevating elaborate feathered crests. The female lays three to five dark, speckled eggs in a scrape in the ground, and incubates them alone.
Evolution
Genetic dating indicates that bustards evolved 30 million years ago in either southern or eastern Africa from where they dispersed into Eurasia and Australia.
Status and conservation
Bustards are gregarious outside the breeding season, but are very wary and difficult to approach in the open habitats they prefer. Most species are declining or endangered through habitat loss and hunting, even where they are nominally protected.
United Kingdom
The birds were once common and abounded on the Salisbury Plain. They had become rare by 1819, when a large male, surprised by a dog on Newmarket Heath, was sold in Leadenhall Market for five guineas. The last bustard in Britain died in approximately 1832, but the bird is being reintroduced through batches of chicks imported from Russia. In 2009, two great bustard chicks were hatched in Britain for the first time in more than 170 years. Reintroduced bustards also hatched chicks in 2010.
| Biology and health sciences | Gruiformes | null |
206520 | https://en.wikipedia.org/wiki/Pyroclastic%20flow | Pyroclastic flow | A pyroclastic flow (also known as a pyroclastic density current or a pyroclastic cloud) is a fast-moving current of hot gas and volcanic matter (collectively known as tephra) that flows along the ground away from a volcano at average speeds of but is capable of reaching speeds up to . The gases and tephra can reach temperatures of about .
Pyroclastic flows are the deadliest of all volcanic hazards and are produced as a result of certain explosive eruptions; they normally touch the ground and hurtle downhill or spread laterally under gravity. Their speed depends upon the density of the current, the volcanic output rate, and the gradient of the slope.
Origin of term
The word pyroclast is derived from the Greek pýr, meaning "fire", and klastós, meaning "broken in pieces". A name for pyroclastic flows that glow red in the dark is nuée ardente (French, "burning cloud"); this was notably used to describe the disastrous 1902 eruption of Mount Pelée on Martinique, a French island in the Caribbean.
Pyroclastic flows that contain a much higher proportion of gas to rock are known as "fully dilute pyroclastic density currents" or pyroclastic surges. The lower density sometimes allows them to flow over higher topographic features or water such as ridges, hills, rivers, and seas. They may also contain steam, water, and rock at less than ; these are called "cold" compared with other flows, although the temperature is still lethally high. Cold pyroclastic surges can occur when the eruption is from a vent under a shallow lake or the sea. Fronts of some pyroclastic density currents are fully dilute; for example, during the eruption of Mount Pelée in 1902, a fully dilute current overwhelmed the city of Saint-Pierre and killed nearly 30,000 people.
A pyroclastic flow is a type of gravity current; in scientific literature, it is sometimes abbreviated to PDC (pyroclastic density current).
Causes
Several mechanisms can produce a pyroclastic flow:
Fountain collapse of an eruption column from a Plinian eruption (e.g. Mount Vesuvius' destruction of Herculaneum and Pompeii in 79 AD). In such an eruption, the material forcefully ejected from the vent heats the surrounding air and the turbulent mixture rises, through convection, for many kilometers. If the erupted jet is unable to heat the surrounding air sufficiently, convection currents will not be strong enough to carry the plume upwards and it falls, flowing down the flanks of the volcano.
Fountain collapse of an eruption column associated with a Vulcanian eruption (e.g., Montserrat's Soufrière Hills volcano has generated many of these deadly pyroclastic flows and surges). The gas and projectiles create a cloud that is denser than the surrounding air and becomes a pyroclastic flow.
Frothing at the mouth of the vent during degassing of the erupted lava. This can lead to the production of a rock called ignimbrite. This occurred during the eruption of Novarupta in 1912.
Gravitational collapse of a lava dome or spine, with subsequent avalanches and flows down a steep slope (e.g., Montserrat's Soufrière Hills volcano, which caused nineteen deaths in 1997).
The directional blast (or jet) when part of a volcano collapses or explodes (e.g., the eruption of Mount St. Helens on May 18, 1980). As distance from the volcano increases, this rapidly transforms into a gravity-driven current.
Size and effects
Flow volumes range from a few hundred cubic meters to more than . Larger flows can travel for hundreds of kilometres, although none on that scale has occurred for several hundred thousand years. Most pyroclastic flows are around and travel for several kilometres. Flows usually consist of two parts: the basal flow hugs the ground and contains larger, coarse boulders and rock fragments, while an extremely hot ash plume lofts above it because of the turbulence between the flow and the overlying air, which admixes and heats cold atmospheric air, causing expansion and convection. The loose rock fragments deposited by a flow can range in depth from less than 1 meter to 200 meters.
The kinetic energy of the moving cloud will flatten trees and buildings in its path. The hot gases and high speed make them particularly lethal, as they will incinerate living organisms instantaneously or turn them into carbonized fossils:
The cities of Pompeii and Herculaneum, Italy, for example, were engulfed by pyroclastic surges in 79 AD with many lives lost.
The 1902 eruption of Mount Pelée destroyed the Martinique town of St. Pierre. Despite signs of impending eruption, the government deemed St. Pierre safe due to hills and valleys between it and the volcano, but the pyroclastic flow charred almost the entirety of the city, killing all but three of its 30,000 residents.
A pyroclastic surge killed volcanologists Harry Glicken and Katia and Maurice Krafft and 40 other people on Mount Unzen, in Japan, on June 3, 1991. The surge started as a pyroclastic flow and the more energised surge climbed a spur on which the Kraffts and the others were standing; it engulfed them, and the corpses were covered with about of ash.
On June 25, 1997, a pyroclastic flow travelled down Mosquito Ghaut on the Caribbean island of Montserrat. A large, highly energized pyroclastic surge developed. This flow could not be restrained by the Ghaut and spilled out of it, killing 19 people who were in the Streatham village area (which was officially evacuated). Several others in the area suffered severe burns.
Interaction with water
Testimonial evidence from the 1883 eruption of Krakatoa, supported by experimental evidence, shows that pyroclastic flows can cross significant bodies of water. However, these may have been pyroclastic surges rather than flows, since the density of a gravity current prevents it from moving across the surface of water. One flow reached the Sumatran coast as far as away.
A 2006 BBC documentary film, Ten Things You Didn't Know About Volcanoes, demonstrated tests by a research team at Kiel University, Germany, of pyroclastic flows moving over water. When the reconstructed pyroclastic flow (a stream of mostly hot ash with varying densities) hit the water, two things happened: the heavier material fell into the water, precipitating out from the pyroclastic flow and into the liquid, and the temperature of the ash caused the water to evaporate, propelling the pyroclastic flow (now consisting only of the lighter material) along on a bed of steam at an even faster pace than before.
During some phases of the Soufriere Hills volcano on Montserrat, pyroclastic flows were filmed about offshore. These show the water boiling as the flow passes over it. The flows eventually built a delta, which covered about . Another example was observed in 2019 at Stromboli when a pyroclastic flow traveled for several hundreds of meters above the sea.
A pyroclastic flow can interact with a body of water to form a large amount of mud, which can then continue to flow downhill as a lahar. This is one of several mechanisms that can create a lahar.
On other celestial bodies
In 1963, NASA astronomer Winifred Cameron proposed that the lunar equivalent of terrestrial pyroclastic flows may have formed sinuous rilles on the Moon. In a lunar volcanic eruption, a pyroclastic cloud would follow local relief, resulting in an often sinuous track. The Moon's Schröter's Valley offers one example.
Some volcanoes on Mars, such as Tyrrhenus Mons and Hadriacus Mons, have produced layered deposits that appear to be more easily eroded than lava flows, suggesting that they were emplaced by pyroclastic flows.
| Physical sciences | Volcanology | Earth science |
206542 | https://en.wikipedia.org/wiki/Astronomical%20object | Astronomical object | An astronomical object, celestial object, stellar object or heavenly body is a naturally occurring physical entity, association, or structure that exists within the observable universe. In astronomy, the terms object and body are often used interchangeably. However, an astronomical body or celestial body is a single, tightly bound, contiguous entity, while an astronomical or celestial object is a complex, less cohesively bound structure, which may consist of multiple bodies or even other objects with substructures.
Examples of astronomical objects include planetary systems, star clusters, nebulae, and galaxies, while asteroids, moons, planets, and stars are astronomical bodies. A comet may be identified as both a body and an object: It is a body when referring to the frozen nucleus of ice and dust, and an object when describing the entire comet with its diffuse coma and tail.
History
Astronomical objects such as stars, planets, nebulae, asteroids and comets have been observed for thousands of years, although early cultures thought of these bodies as gods or deities. These early cultures found the movements of the bodies very important, using these objects to navigate over long distances, to distinguish the seasons, and to determine when to plant crops. During the Middle Ages, cultures began to study the movements of these bodies more closely. Several astronomers of the Middle East made detailed descriptions of stars and nebulae and produced more accurate calendars based on the movements of these stars and planets. In Europe, astronomers focused more on devices to help study celestial objects and on creating textbooks, guides, and universities to teach people more about astronomy.
During the Scientific Revolution, in 1543, Nicolaus Copernicus's heliocentric model was published. This model described the Earth, along with all of the other planets, as astronomical bodies orbiting the Sun at the center of the Solar System. Johannes Kepler discovered Kepler's laws of planetary motion, properties of the orbits that the astronomical bodies shared; these were used to improve the heliocentric model. In 1584, Giordano Bruno proposed that all distant stars are their own suns, being the first in centuries to suggest this idea. Galileo Galilei was one of the first astronomers to use telescopes to observe the sky; in 1610, he observed the four largest moons of Jupiter, now named the Galilean moons. Galileo also made observations of the phases of Venus, craters on the Moon, and sunspots on the Sun. The astronomer Edmond Halley successfully predicted the return in 1758 of the comet that now bears his name, Halley's Comet. In 1781, Sir William Herschel discovered the new planet Uranus, the first discovered planet not visible to the naked eye.
In the 19th and 20th centuries, new technologies and scientific innovations allowed scientists to greatly expand their understanding of astronomy and astronomical objects. Larger telescopes and observatories began to be built, and scientists began to print images of the Moon and other celestial bodies on photographic plates. New wavelengths of light unseen by the human eye were discovered, and new telescopes were made that made it possible to see astronomical objects in those wavelengths. Joseph von Fraunhofer and Angelo Secchi pioneered the field of spectroscopy, which allowed them to observe the composition of stars and nebulae, and many astronomers were able to determine the masses of binary stars based on their orbital elements. Computers began to be used to study massive amounts of astronomical data on stars, and new technologies such as the photoelectric photometer allowed astronomers to accurately measure the color and luminosity of stars, which in turn allowed them to predict stellar temperature and mass. In 1913, the Hertzsprung-Russell diagram was developed independently by astronomers Ejnar Hertzsprung and Henry Norris Russell; it plotted stars based on their luminosity and color and allowed astronomers to examine stars easily. It was found that stars commonly fell along a band on the diagram now called the main sequence. A refined scheme for stellar classification based on the Hertzsprung-Russell diagram was published in 1943 by William Wilson Morgan and Philip Childs Keenan. Astronomers also began debating whether other galaxies existed beyond the Milky Way; these debates ended when Edwin Hubble identified the Andromeda nebula as a different galaxy, along with many others far from the Milky Way.
Galaxy and larger
The universe can be viewed as having a hierarchical structure. At the largest scales, the fundamental component of assembly is the galaxy. Galaxies are organized into groups and clusters, often within larger superclusters, that are strung along great filaments between nearly empty voids, forming a web that spans the observable universe.
Galaxies have a variety of morphologies, with irregular, elliptical and disk-like shapes, depending on their formation and evolutionary histories, including interaction with other galaxies, which may lead to a merger. Disc galaxies encompass lenticular and spiral galaxies with features, such as spiral arms and a distinct halo. At the core, most galaxies have a supermassive black hole, which may result in an active galactic nucleus. Galaxies can also have satellites in the form of dwarf galaxies and globular clusters.
Within a galaxy
The constituents of a galaxy are formed out of gaseous matter that assembles through gravitational self-attraction in a hierarchical manner. At this level, the resulting fundamental components are the stars, which are typically assembled in clusters from the various condensing nebulae. The great variety of stellar forms are determined almost entirely by the mass, composition and evolutionary state of these stars. Stars may be found in multi-star systems that orbit about each other in a hierarchical organization. A planetary system and various minor objects such as asteroids, comets and debris, can form in a hierarchical process of accretion from the protoplanetary disks that surround newly formed stars.
The various distinctive types of stars are shown by the Hertzsprung–Russell diagram (H–R diagram)—a plot of absolute stellar luminosity versus surface temperature. Each star follows an evolutionary track across this diagram. If this track takes the star through a region containing an intrinsic variable type, then its physical properties can cause it to become a variable star. An example of this is the instability strip, a region of the H-R diagram that includes Delta Scuti, RR Lyrae and Cepheid variables. The evolving star may eject some portion of its atmosphere to form a nebula, either steadily to form a planetary nebula or in a supernova explosion that leaves a remnant. Depending on the initial mass of the star and the presence or absence of a companion, a star may spend the last part of its life as a compact object; either a white dwarf, neutron star, or black hole.
Shape
The IAU definitions of planet and dwarf planet require that a Sun-orbiting astronomical body has undergone the rounding process to reach a roughly spherical shape, an achievement known as hydrostatic equilibrium. The same spheroidal shape can be seen from smaller rocky planets like Mars to gas giants like Jupiter.
Any natural Sun-orbiting body that has not reached hydrostatic equilibrium is classified by the IAU as a small Solar System body (SSSB). These come in many non-spherical shapes, being lumpy masses accreted haphazardly from in-falling dust and rock; not enough mass falls in to generate the heat needed to complete the rounding. Some SSSBs are just collections of relatively small rocks that are weakly held next to each other by gravity but are not actually fused into a single big bedrock. Some larger SSSBs are nearly round but have not reached hydrostatic equilibrium. The small Solar System body 4 Vesta is large enough to have undergone at least partial planetary differentiation.
Stars like the Sun are also spheroidal due to gravity's effects on their plasma, which is a free-flowing fluid. Ongoing stellar fusion is a much greater source of heat for stars compared to the initial heat released during their formation.
Categories by location
The table below lists the general categories of bodies and objects by their location or structure.
| Physical sciences | Basics_2 | null |
206555 | https://en.wikipedia.org/wiki/Solar%20cycle | Solar cycle | The solar cycle, also known as the solar magnetic activity cycle, sunspot cycle, or Schwabe cycle, is a periodic 11-year change in the Sun's activity measured in terms of variations in the number of observed sunspots on the Sun's surface. Over the period of a solar cycle, levels of solar radiation and ejection of solar material, the number and size of sunspots, solar flares, and coronal loops all exhibit a synchronized fluctuation from a period of minimum activity to a period of maximum activity and back to a period of minimum activity.
The magnetic field of the Sun flips during each solar cycle, with the flip occurring when the solar cycle is near its maximum. After two solar cycles, the Sun's magnetic field returns to its original state, completing what is known as a Hale cycle.
This cycle has been observed for centuries by changes in the Sun's appearance and by terrestrial phenomena such as aurora but was not clearly identified until 1843. Solar activity, driven by both the solar cycle and transient aperiodic processes, governs the environment of interplanetary space by creating space weather and impacting space- and ground-based technologies as well as the Earth's atmosphere and also possibly climate fluctuations on scales of centuries and longer.
Understanding and predicting the solar cycle remains one of the grand challenges in astrophysics with major ramifications for space science and the understanding of magnetohydrodynamic phenomena elsewhere in the universe.
The current scientific consensus on climate change is that solar variations only play a marginal role in driving global climate change, since the measured magnitude of recent solar variation is much smaller than the forcing due to greenhouse gases.
Definition
Solar cycles have an average duration of about 11 years. Solar maximum and solar minimum refer to periods of maximum and minimum sunspot counts. Cycles span from one minimum to the next.
Observational history
The idea of a cyclical solar cycle was first hypothesized by Christian Horrebow based on his regular observations of sunspots made between 1761 and 1776 from the Rundetaarn observatory in Copenhagen, Denmark. In 1775, Horrebow noted how "it appears that after the course of a certain number of years, the appearance of the Sun repeats itself with respect to the number and size of the spots". The solar cycle however would not be clearly identified until 1843 when Samuel Heinrich Schwabe noticed a periodic variation in the average number of sunspots after 17 years of solar observations. Schwabe continued to observe the sunspot cycle for another 23 years, until 1867. In 1852, Rudolf Wolf designated the first numbered solar cycle to have started in February 1755 based on Schwabe's and other observations. Wolf also created a standard sunspot number index, the Wolf number, which continues to be used today.
Between 1645 and 1715, very few sunspots were observed and recorded. This was first noted by Gustav Spörer and was later named the Maunder minimum after the wife-and-husband team Annie S. D. Maunder and Edward Walter Maunder who extensively researched this peculiar interval.
In the second half of the nineteenth century Richard Carrington and Spörer independently noted the phenomena of sunspots appearing at different heliographic latitudes at different parts of the cycle. (See Spörer's law.) Alfred Harrison Joy would later describe how the magnitude at which the sunspots are "tilted"—with the leading spot(s) closer to the equator than the trailing spot(s)―grows with the latitude of these regions. (See Joy's law.)
The cycle's physical basis was elucidated by George Ellery Hale and collaborators, who in 1908 showed that sunspots were strongly magnetized (the first detection of magnetic fields beyond the Earth). In 1919 they identified a number of patterns that would collectively become known as Hale's law:
In the same heliographic hemisphere, bipolar active regions tend to have the same leading polarity.
In the opposite hemisphere (that is, on the other side of the solar equator) these regions tend to have the opposite leading polarity.
Leading polarities in both hemispheres flip from one sunspot cycle to the next.
Hale's observations revealed that the complete magnetic cycle—which would later be referred to as a Hale cycle—spans two solar cycles, or 22 years, before returning to its original state (including polarity). Because nearly all manifestations of the solar cycle are insensitive to polarity, the 11-year solar cycle remains the focus of research; however, the two halves of the Hale cycle are typically not identical: the 11-year cycles usually alternate between higher and lower sums of Wolf's sunspot numbers (the Gnevyshev-Ohl rule).
In 1961 the father-and-son team of Harold and Horace Babcock established that the solar cycle is a spatiotemporal magnetic process unfolding over the Sun as a whole. They observed that the solar surface is magnetized outside of sunspots, that this (weaker) magnetic field is to first order a dipole, and that this dipole undergoes polarity reversals with the same period as the sunspot cycle. Horace's Babcock Model described the Sun's oscillatory magnetic field as having a quasi-steady periodicity of 22 years. It covered the oscillatory exchange of energy between toroidal and poloidal solar magnetic field components.
Cycle history
Sunspot numbers over the past 11,400 years have been reconstructed using carbon-14 and beryllium-10 isotope ratios. The level of solar activity beginning in the 1940s is exceptional – the last period of similar magnitude occurred around 9,000 years ago (during the warm Boreal period). The Sun was at a similarly high level of magnetic activity for only ~10% of the past 11,400 years. Almost all earlier high-activity periods were shorter than the present episode. Fossil records suggest that the solar cycle has been stable for at least the last 700 million years. For example, the cycle length during the Early Permian is estimated to be 10.62 years and similarly in the Neoproterozoic.
Until 2009, it was thought that 28 cycles had spanned the 309 years between 1699 and 2008, giving an average length of 11.04 years, but research then showed that the longest of these (1784–1799) may actually have been two cycles. If so then the average length would be only around 10.7 years. Since observations began cycles as short as 9 years and as long as 14 years have been observed, and if the cycle of 1784–1799 is double then one of the two component cycles had to be less than 8 years in length. Significant amplitude variations also occur.
Several lists of proposed historical "grand minima" of solar activity exist.
Recent cycles
Cycle 25
Solar cycle 25 began in December 2019. Several predictions have been made for solar cycle 25 based on different methods, ranging from very weak to strong magnitude. A physics-based prediction relying on the data-driven solar dynamo and solar surface flux transport models seems to have predicted the strength of the solar polar field at the current minimum correctly and forecasts a weak but not insignificant solar cycle 25 similar to or slightly stronger than cycle 24. Notably, they rule out the possibility of the Sun falling into a Maunder-minimum-like (inactive) state over the next decade. A preliminary consensus by a solar cycle 25 Prediction Panel was made in early 2019. The Panel, which was organized by NOAA's Space Weather Prediction Center (SWPC) and NASA, based on the published solar cycle 25 predictions, concluded that solar cycle 25 will be very similar to solar cycle 24. They anticipate that the solar cycle minimum before cycle 25 will be long and deep, just as the minimum that preceded cycle 24. They expect solar maximum to occur between 2023 and 2026 with a sunspot range of 95 to 130, given in terms of the revised sunspot number.
Cycle 24
Solar cycle 24 began on 4 January 2008, with minimal activity until early 2010. The cycle featured a "double-peaked" solar maximum. The first peak reached 99 in 2011 and the second in early 2014 at 101. Cycle 24 ended in December 2019 after 11.0 years.
Cycle 23
Solar cycle 23 lasted 11.6 years, beginning in May 1996 and ending in January 2008. The maximum smoothed sunspot number (monthly number of sunspots averaged over a twelve-month period) observed during the solar cycle was 120.8 (March 2000), and the minimum was 1.7. A total of 805 days had no sunspots during this cycle.
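As a rough illustration of the smoothing just described, the following Python sketch computes a twelve-month running mean of a monthly sunspot series. This is a minimal sketch: the monthly counts below are hypothetical, and operational indices such as the SIDC series actually use a slightly different tapered 13-month window.

# Minimal sketch, assuming a plain twelve-month running mean as in the
# definition above; the monthly counts below are hypothetical values.
def smooth_sunspots(monthly, window=12):
    """Return centred running means of the monthly sunspot counts."""
    half = window // 2
    smoothed = []
    for i in range(half, len(monthly) - half + 1):
        segment = monthly[i - half:i + half]   # twelve consecutive months
        smoothed.append(sum(segment) / window)
    return smoothed

monthly_counts = [45, 50, 62, 71, 80, 95, 110, 120, 118, 105, 96, 88, 75, 60]
print(smooth_sunspots(monthly_counts))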
Phenomena
Because the solar cycle reflects magnetic activity, various magnetically driven solar phenomena follow the solar cycle, including sunspots, faculae/plage, network, and coronal mass ejections.
Sunspots
The Sun's apparent surface, the photosphere, radiates more actively when there are more sunspots. Satellite monitoring of solar luminosity revealed a direct relationship between the solar cycle and luminosity with a peak-to-peak amplitude of about 0.1%. Luminosity decreases by as much as 0.3% on a 10-day timescale when large groups of sunspots rotate across the Earth's view and increases by as much as 0.05% for up to 6 months due to faculae associated with large sunspot groups.
The best information today comes from SOHO (a cooperative project of the European Space Agency and NASA), such as the MDI magnetogram, where the solar "surface" magnetic field can be seen.
As each cycle begins, sunspots appear at mid-latitudes, and then move closer and closer to the equator until a solar minimum is reached. This pattern is best visualized in the form of the so-called butterfly diagram. Images of the Sun are divided into latitudinal strips, and the monthly-averaged fractional surface of sunspots is calculated. This is plotted vertically as a color-coded bar, and the process is repeated month after month to produce this time-series diagram.
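A minimal Python sketch of this construction, using numpy and matplotlib with synthetic spot latitudes standing in for real solar images (the equatorward drift rate, spot counts, and scatter below are illustrative assumptions, not measured values):

# Hedged sketch of assembling a butterfly diagram from synthetic data.
import numpy as np
import matplotlib.pyplot as plt

months = 132                                  # one mock 11-year cycle
lat_edges = np.linspace(-50, 50, 51)          # latitudinal strips, degrees
diagram = np.zeros((len(lat_edges) - 1, months))

rng = np.random.default_rng(0)
for m in range(months):
    mean_lat = 35.0 * (1.0 - m / months)      # spots drift toward the equator
    lats = np.concatenate([
        rng.normal(+mean_lat, 5.0, 40),       # northern-hemisphere spots
        rng.normal(-mean_lat, 5.0, 40),       # southern-hemisphere spots
    ])
    counts, _ = np.histogram(lats, bins=lat_edges)
    diagram[:, m] = counts                    # one color-coded column per month

plt.imshow(diagram, aspect="auto", origin="lower",
           extent=[0, months, lat_edges[0], lat_edges[-1]])
plt.xlabel("Month"); plt.ylabel("Latitude (degrees)")
plt.title("Synthetic butterfly diagram")
plt.show()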
[Figure: sunspot butterfly diagram, constructed by the solar group at NASA Marshall Space Flight Center; the newest version can be found at solarcyclescience.com]
While magnetic field changes are concentrated at sunspots, the entire sun undergoes analogous changes, albeit of smaller magnitude.
[Figure: time vs. solar latitude diagram of the radial component of the solar magnetic field, averaged over successive solar rotations; the "butterfly" signature of sunspots is clearly visible at low latitudes. Constructed by the solar group at NASA Marshall Space Flight Center; the newest version can be found at solarcyclescience.com]
Faculae and plage
Faculae are bright magnetic features on the photosphere. They extend into the chromosphere, where they are referred to as plage. The evolution of plage areas is typically tracked from solar observations in the Ca II K line (393.37 nm). The amount of facula and plage area varies in phase with the solar cycle, and they are more abundant than sunspots by approximately an order of magnitude. They exhibit a nonlinear relation to sunspots. Plage regions are also associated with strong magnetic fields at the solar surface.
Solar flares and coronal mass ejections
The solar magnetic field structures the corona, giving it its characteristic shape visible at times of solar eclipses. Complex coronal magnetic field structures evolve in response to fluid motions at the solar surface, and emergence of magnetic flux produced by dynamo action in the solar interior. For reasons not yet understood in detail, sometimes these structures lose stability, leading to solar flares and coronal mass ejections (CMEs). Flares consist of an abrupt emission of energy (primarily at ultraviolet and X-ray wavelengths), which may or may not be accompanied by a coronal mass ejection, which consists of injection of energetic particles (primarily ionized hydrogen) into interplanetary space. Flares and CMEs are caused by sudden localized release of magnetic energy, which drives emission of ultraviolet and X-ray radiation as well as energetic particles. These eruptive phenomena can have a significant impact on Earth's upper atmosphere and space environment, and are the primary drivers of what is now called space weather. Consequently, the occurrence of both geomagnetic storms and solar energetic particle events shows a strong solar cycle variation, peaking close to sunspot maximum.
The occurrence frequency of coronal mass ejections and flares is strongly modulated by the cycle. Flares of any given size are some 50 times more frequent at solar maximum than at minimum. Large coronal mass ejections occur on average a few times a day at solar maximum, down to one every few days at solar minimum. The size of these events themselves does not depend sensitively on the phase of the solar cycle. A case in point is the three large X-class flares that occurred in December 2006, very near solar minimum; an X9.0 flare on December 5 stands as one of the brightest on record.
Patterns
Along with the approximately 11-year sunspot cycle, a number of additional patterns and cycles have been hypothesized.
Waldmeier effect
The Waldmeier effect describes the observation that the maximum amplitudes of solar cycles are inversely proportional to the time between their solar minima and maxima. Therefore, cycles with larger maximum amplitudes tend to take less time to reach their maxima than cycles with smaller amplitudes. This effect was named after Max Waldmeier who first described it.
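Schematically, and only as an illustration of the stated tendency rather than a fitted law, the effect can be written as $A_{\max} \propto 1/T_{\mathrm{rise}}$, where $A_{\max}$ is a cycle's maximum sunspot amplitude and $T_{\mathrm{rise}}$ is the time from the preceding solar minimum to that maximum.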
Gnevyshev–Ohl rule
The Gnevyshev–Ohl rule describes the tendency for the sum of the Wolf number over an odd solar cycle to exceed that of the preceding even cycle.
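A minimal Python sketch of how the rule could be checked, assuming hypothetical per-cycle sums of monthly Wolf numbers keyed by cycle number (the figures below are invented for illustration):

# Hedged sketch: compare the summed Wolf numbers of each even-numbered
# cycle with those of the following odd cycle; the sums are hypothetical.
cycle_sums = {18: 9100, 19: 13200, 20: 7600, 21: 11000}
for even in (18, 20):
    odd = even + 1
    holds = cycle_sums[odd] > cycle_sums[even]
    print(f"sum of cycle {odd} exceeds sum of cycle {even}: {holds}")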
Gleissberg cycle
The Gleissberg cycle describes an amplitude modulation of solar cycles with a period of about 70–100 years, or seven or eight solar cycles. It was named after Wolfgang Gleißberg.
As pioneered by Ilya G. Usoskin and Sami Solanki, associated centennial variations in magnetic fields in the corona and heliosphere have been detected using carbon-14 and beryllium-10 cosmogenic isotopes stored in terrestrial reservoirs such as ice sheets and tree rings and by using historic observations of geomagnetic storm activity, which bridge the time gap between the end of the usable cosmogenic isotope data and the start of modern satellite data.
These variations have been successfully reproduced using models that employ magnetic flux continuity equations and observed sunspot numbers to quantify the emergence of magnetic flux from the top of the solar atmosphere and into the heliosphere, showing that sunspot observations, geomagnetic activity and cosmogenic isotopes offer a convergent understanding of solar activity variations.
Suess cycle
The Suess cycle, or de Vries cycle, is a cycle present in radiocarbon proxies of solar activity with a period of about 210 years.
It was named after Hans Eduard Suess and Hessel de Vries. Despite calculated radioisotope production rates being well correlated with the 400-year sunspot record, there is little evidence of the Suess cycle in the 400-year sunspot record by itself.
Other hypothesized cycles
Periodicity of solar activity with periods longer than the solar cycle of about 11 (22) years has been proposed, including:
The Hallstatt cycle (named after a cool and wet period in Europe when glaciers advanced) is hypothesized to extend for approximately 2,400 years.
In studies of carbon-14 ratios, cycles of 105, 131, 232, 385, 504, 805 and 2,241 years have been proposed, possibly matching cycles derived from other sources. Damon and Sonett proposed carbon-14-based medium- and short-term variations with periods of 208 and 88 years, as well as suggesting a 2,300-year radiocarbon period that modulates the 208-year period.
Brückner-Egeson-Lockyer cycle (30 to 40 year cycles).
A 2021 study investigated changes in the Pleistocene climate over the last 800 kyr using European Project for Ice Coring in Antarctica (EPICA) temperature (δD) and CO2-CH4 records, applying full-resolution singular spectrum analysis for time-series decomposition, with a special focus on millennial-scale Sun-related signals. The three Sun-related cycles (an unnamed ~9.7-kyr cycle, a proposed 'Heinrich-Bond' ~6.0-kyr cycle, and the Hallstatt ~2.5-kyr cycle) cumulatively explain ~4.0% (δD), 2.9% (CO2), and 6.6% (CH4) of the variance. A cycle of ~3.6 kyr, little known in the literature, accounts for a mean variance of only 0.6% and does not seem to be Sun-related, although a gravitational origin cannot be ruled out. These 800-kyr-long EPICA suborbital records, which include millennial-scale Sun-related signals, fill an important gap in the field of solar cycles, demonstrating for the first time the minor role of solar activity in the regional budget of Earth's climate system during the Mid-Late Pleistocene.
Effects
Sun
Surface magnetism
Sunspots eventually decay, releasing magnetic flux in the photosphere. This flux is dispersed and churned by turbulent convection and solar large-scale flows. These transport mechanisms lead to the accumulation of magnetized decay products at high solar latitudes, eventually reversing the polarity of the polar fields (notice how the blue and yellow fields reverse in the Hathaway/NASA/MSFC graph above).
The dipolar component of the solar magnetic field reverses polarity around the time of solar maximum and reaches peak strength at the solar minimum.
Space
Spacecraft
CMEs (coronal mass ejections) produce a radiation flux of high-energy protons, sometimes known as solar cosmic rays. These can cause radiation damage to electronics and solar cells in satellites. Solar proton events also can cause single-event upset (SEU) events on electronics; at the same time, the reduced flux of galactic cosmic radiation during solar maximum decreases the high-energy component of particle flux.
CME radiation is dangerous to astronauts on a space mission who are outside the shielding produced by the Earth's magnetic field. Future mission designs (e.g., for a Mars Mission) therefore incorporate a radiation-shielded "storm shelter" for astronauts to retreat to during such an event.
Gleißberg developed a CME forecasting method that relies on consecutive cycles.
The increased irradiance during solar maximum expands the envelope of the Earth's atmosphere, causing low-orbiting space debris to re-enter more quickly.
Galactic cosmic ray flux
The outward expansion of solar ejecta into interplanetary space provides overdensities of plasma that are efficient at scattering high-energy cosmic rays entering the solar system from elsewhere in the galaxy. The frequency of solar eruptive events is modulated by the cycle, changing the degree of cosmic ray scattering in the outer solar system accordingly. As a consequence, the cosmic ray flux in the inner Solar System is anticorrelated with the overall level of solar activity. This anticorrelation is clearly detected in cosmic ray flux measurements at the Earth's surface.
Some high-energy cosmic rays entering Earth's atmosphere collide hard enough with molecular atmospheric constituents that they occasionally cause nuclear spallation reactions. Fission products include radionuclides such as 14C and 10Be that settle on the Earth's surface. Their concentration can be measured in tree trunks or ice cores, allowing a reconstruction of solar activity levels into the distant past. Such reconstructions indicate that the overall level of solar activity since the middle of the twentieth century stands amongst the highest of the past 10,000 years, and that epochs of suppressed activity, of varying durations have occurred repeatedly over that time span.
Atmospheric
Solar irradiance
The total solar irradiance (TSI) is the amount of solar radiative energy incident on the Earth's upper atmosphere. TSI variations were undetectable until satellite observations began in late 1978. A series of radiometers has been launched on satellites since the 1970s. TSI measurements varied from 1355 to 1375 W/m2 across more than ten satellites. One of the satellites, ACRIMSAT, was launched by the ACRIM group. The controversial 1989–1991 "ACRIM gap" between non-overlapping ACRIM satellites was interpolated by the ACRIM group into a composite showing a +0.037%/decade rise. Another series based on the ACRIM data is produced by the PMOD group and shows a −0.008%/decade downward trend. This 0.045%/decade difference can impact climate models. However, model reconstructions of total solar irradiance favor the PMOD series, thus reconciling the ACRIM-gap issue.
Solar irradiance varies systematically over the cycle, both in total irradiance and in its relative components (UV vs visible and other frequencies). The solar luminosity is an estimated 0.07 percent brighter during the mid-cycle solar maximum than the terminal solar minimum. Photospheric magnetism appears to be the primary cause (96%) of 1996–2013 TSI variation. The ratio of ultraviolet to visible light varies.
TSI varies in phase with the solar magnetic activity cycle with an amplitude of about 0.1% around an average value of about 1361.5 W/m2 (the "solar constant"). Variations about the average of up to −0.3% are caused by large sunspot groups, and of +0.05% by large faculae and the bright network, on a 7–10-day timescale. Satellite-era TSI variations show small but detectable trends.
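For scale, a 0.1% amplitude about the quoted mean works out to $0.001 \times 1361.5\ \mathrm{W\,m^{-2}} \approx 1.4\ \mathrm{W\,m^{-2}}$, so the cycle shifts the "solar constant" by only about 1.4 W/m2.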
TSI is higher at solar maximum, even though sunspots are darker (cooler) than the average photosphere. This is caused by magnetized structures other than sunspots during solar maxima, such as faculae and active elements of the "bright" network, that are brighter (hotter) than the average photosphere. They collectively overcompensate for the irradiance deficit associated with the cooler, but less numerous sunspots. The primary driver of TSI changes on solar rotational and solar cycle timescales is the varying photospheric coverage of these radiatively active solar magnetic structures.
Energy changes in UV irradiance involved in production and loss of ozone have atmospheric effects. The 30 hPa atmospheric pressure level changed height in phase with solar activity during solar cycles 20–23. UV irradiance increase caused higher ozone production, leading to stratospheric heating and to poleward displacements in the stratospheric and tropospheric wind systems.
Short-wavelength radiation
With a temperature of 5870 K, the photosphere emits only a very small proportion of its radiation in the extreme ultraviolet (EUV) and at shorter wavelengths. However, hotter upper layers of the Sun's atmosphere (chromosphere and corona) emit more short-wavelength radiation. Since the upper atmosphere is not homogeneous and contains significant magnetic structure, the solar ultraviolet (UV), EUV and X-ray flux varies markedly over the cycle.
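A rough numerical check of the first claim, treating the photosphere as a pure 5870 K blackbody (an idealization; the real solar spectrum differs in detail):

```python
# Fraction of a 5870 K blackbody's output emitted shortward of ~120 nm (EUV).
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI constants
sigma = 5.670e-8                          # Stefan-Boltzmann constant
T = 5870.0

def planck(lam):
    """Spectral radiance B_lambda (W sr^-1 m^-3), overflow-safe."""
    x = np.minimum(h * c / (lam * k * T), 700.0)  # cap exponent to avoid overflow
    return (2 * h * c**2 / lam**5) / np.expm1(x)

lam = np.linspace(10e-9, 120e-9, 200_000)   # 10-120 nm band
euv = np.trapz(planck(lam), lam)            # EUV part of the radiance
total = sigma * T**4 / np.pi                # total radiance over all wavelengths
print(f"EUV fraction: {euv / total:.1e}")   # of order 1e-6: tiny, as stated
```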
The photo montage to the left illustrates this variation for soft X-rays, as observed by the Japanese satellite Yohkoh from August 30, 1991, at the peak of cycle 22, to September 6, 2001, at the peak of cycle 23. Similar cycle-related variations are observed in the flux of solar UV or EUV radiation, as observed, for example, by the SOHO or TRACE satellites.
Even though it only accounts for a minuscule fraction of total solar radiation, the impact of solar UV, EUV and X-ray radiation on the Earth's upper atmosphere is profound. Solar UV flux is a major driver of stratospheric chemistry, and increases in ionizing radiation significantly affect the temperature and electrical conductivity of the ionosphere.
Solar radio flux
Emission from the Sun at centimetric (radio) wavelength is due primarily to coronal plasma trapped in the magnetic fields overlying active regions. The F10.7 index is a measure of the solar radio flux per unit frequency at a wavelength of 10.7 cm, near the peak of the observed solar radio emission. F10.7 is often expressed in SFU or solar flux units (1 SFU = 10−22 W m−2 Hz−1). It represents a measure of diffuse, nonradiative coronal plasma heating. It is an excellent indicator of overall solar activity levels and correlates well with solar UV emissions.
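A small unit-conversion helper based on the SFU definition above; the quiet-Sun and active-Sun readings used below are typical illustrative values, not specific measurements:

```python
# F10.7 is reported in solar flux units: 1 SFU = 1e-22 W m^-2 Hz^-1.
SFU = 1e-22  # W m^-2 Hz^-1

def f107_to_si(f107_sfu: float) -> float:
    """Convert an F10.7 reading from SFU to W m^-2 Hz^-1."""
    return f107_sfu * SFU

print(f107_to_si(70.0))   # ~7e-21, typical near solar minimum
print(f107_to_si(200.0))  # ~2e-20, typical near solar maximum
```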
Sunspot activity has a major effect on long distance radio communications, particularly on the shortwave bands although medium wave and low VHF frequencies are also affected. High levels of sunspot activity lead to improved signal propagation on higher frequency bands, although they also increase the levels of solar noise and ionospheric disturbances. These effects are caused by impact of the increased level of solar radiation on the ionosphere.
The 10.7 cm solar radio flux itself can interfere with point-to-point terrestrial communications.
Clouds
Speculations about the effects of cosmic-ray changes over the cycle potentially include:
Changes in ionization affect the aerosol abundance that serves as the condensation nucleus for cloud formation. During solar minima more cosmic rays reach Earth, potentially creating ultra-small aerosol particles as precursors to cloud condensation nuclei. Clouds formed from greater amounts of condensation nuclei are brighter, longer lived and likely to produce less precipitation.
A change in cosmic rays could affect certain types of clouds.
It was proposed that, particularly at high latitudes, cosmic ray variation may impact terrestrial low-altitude cloud cover (in contrast to a lack of correlation with high-altitude clouds), partially influenced by the solar-driven interplanetary magnetic field (as well as by passage through the galactic arms over longer timeframes). However, this hypothesis was not confirmed.
Later papers showed that cloud production via cosmic rays could not be explained by nucleation particles. Accelerator results failed to produce sufficient, and sufficiently large, particles to result in cloud formation; this includes observations after a major solar storm. Observations after Chernobyl do not show any induced clouds.
Terrestrial
Organisms
The impact of the solar cycle on living organisms has been investigated (see chronobiology). Some researchers claim to have found connections with human health.
The amount of UVB ultraviolet light at 300 nm reaching the Earth's surface varies by a few percent over the solar cycle due to variations in the protective ozone layer. In the stratosphere, ozone is continuously regenerated by the splitting of O2 molecules by ultraviolet light. During a solar minimum, the decrease in ultraviolet light received from the Sun leads to a decrease in the concentration of ozone, allowing increased UVB to reach the Earth's surface.
Radio communication
Skywave modes of radio communication operate by bending (refracting) radio waves (electromagnetic radiation) through the ionosphere. During the "peaks" of the solar cycle, the ionosphere becomes increasingly ionized by solar photons and cosmic rays. This affects the propagation of the radio wave in complex ways that can either facilitate or hinder communications. Forecasting of skywave modes is of considerable interest to commercial marine and aircraft communications, amateur radio operators and shortwave broadcasters. These users occupy frequencies within the High Frequency or 'HF' radio spectrum that are most affected by these solar and ionospheric variances. Changes in solar output affect the maximum usable frequency, a limit on the highest frequency usable for communications.
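A hedged sketch of the classic "secant law" estimate for the maximum usable frequency; the critical-frequency values below are illustrative assumptions rather than measurements:

```python
# Secant-law MUF estimate: MUF ~ foF2 / cos(theta), where foF2 is the
# ionospheric critical frequency (which rises with solar activity) and
# theta is the angle of incidence at the reflecting layer.
import math

def muf(fof2_mhz: float, incidence_deg: float) -> float:
    """Maximum usable frequency for an oblique skywave path (MHz)."""
    return fof2_mhz / math.cos(math.radians(incidence_deg))

# Illustrative foF2: ~5 MHz near solar minimum, ~12 MHz near solar maximum.
print(muf(5.0, 75.0))   # ~19 MHz usable on an oblique path at minimum
print(muf(12.0, 75.0))  # ~46 MHz usable on the same path at maximum
```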
Climate
Both long-term and short-term variations in solar activity have been proposed as potential influences on global climate, but it has proven challenging to demonstrate any link between solar variation and climate.
Early research attempted to correlate weather with solar activity, with limited success, followed by attempts to correlate solar activity with global temperature. The cycle also impacts regional climate. Measurements from SORCE's Spectral Irradiance Monitor show that solar UV variability produces, for example, colder winters in the U.S. and northern Europe and warmer winters in Canada and southern Europe during solar minima.
Three proposed mechanisms mediate solar variations' climate impacts:
Total solar irradiance ("Radiative forcing").
Ultraviolet irradiance. The UV component varies by more than the total, so if UV were, for some as-yet-unknown reason, having a disproportionate effect, this might affect climate.
Solar wind-mediated galactic cosmic ray changes, which may affect cloud cover.
The solar cycle variation of 0.1% has small but detectable effects on the Earth's climate. Camp and Tung suggest that solar irradiance correlates with a variation of 0.18 K ±0.08 K (0.32 °F ±0.14 °F) in measured average global temperature between solar maximum and minimum.
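A back-of-envelope estimate (our illustration, not from the cited study) of the radiative forcing implied by the 0.1% TSI swing, assuming an Earth Bond albedo of about 0.3:

```python
# Solar-cycle radiative forcing, spread over the sphere and corrected
# for reflected sunlight. The albedo value is an assumption.
TSI = 1361.5     # W/m^2
ALBEDO = 0.30    # approximate Bond albedo (assumed)

delta_tsi = 0.001 * TSI                   # ~1.36 W/m^2 swing at top of atmosphere
forcing = delta_tsi * (1 - ALBEDO) / 4.0  # divide by 4 to average over the sphere
print(f"{forcing:.2f} W/m^2")             # ~0.24 W/m^2, small vs greenhouse-gas forcing
```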
Other effects include one study which found a relationship with wheat prices, and another which found a weak correlation with the flow of water in the Paraná River. Eleven-year cycles have been found in tree-ring thicknesses and in lake-bottom sediment layers deposited hundreds of millions of years ago.
The current scientific consensus on climate change is that solar variations only play a marginal role in driving global climate change, since the measured magnitude of recent solar variation is much smaller than the forcing due to greenhouse gases. Also, average solar activity in the 2010s was no higher than in the 1950s (see above), whereas average global temperatures had risen markedly over that period. Beyond this, the level of understanding of solar impacts on weather is low.
Solar variations also affect the orbital decay of objects in low Earth orbit (LEO) by altering the density of the upper thermosphere.
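A minimal sketch of this mechanism; the density values are rough assumptions for roughly 400 km altitude, included only to show the scaling:

```python
# Atmospheric drag deceleration scales linearly with thermospheric density,
# which rises strongly at solar maximum, hastening orbital decay in LEO.
def drag_acceleration(rho, v=7800.0, cd=2.2, area=1.0, mass=100.0):
    """Standard drag law a = 0.5 * rho * v^2 * Cd * A / m (SI units)."""
    return 0.5 * rho * v**2 * cd * area / mass

rho_quiet = 1e-12    # kg/m^3 near 400 km, quiet Sun (rough assumption)
rho_active = 5e-12   # kg/m^3 near 400 km, active Sun (rough assumption)
print(drag_acceleration(rho_quiet))   # ~6.7e-7 m/s^2
print(drag_acceleration(rho_active))  # ~3.3e-6 m/s^2: several times faster decay
```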
Solar dynamo
The 11-year solar cycle is thought to be one-half of a 22-year Babcock–Leighton solar dynamo cycle, which corresponds to an oscillatory exchange of energy between toroidal and poloidal solar magnetic fields, mediated by solar plasma flows that also provide energy to the dynamo system at every step. At solar-cycle maximum, the external poloidal dipolar magnetic field is near its dynamo-cycle minimum strength, but an internal toroidal quadrupolar field, generated through differential rotation within the tachocline, is near its maximum strength. At this point in the dynamo cycle, buoyant upwelling within the convection zone forces emergence of the toroidal magnetic field through the photosphere, giving rise to pairs of sunspots, roughly aligned east–west with opposite magnetic polarities. The magnetic polarity of sunspot pairs alternates every solar cycle, a phenomenon described by Hale's law.
During the solar cycle's declining phase, energy shifts from the internal toroidal magnetic field to the external poloidal field, and sunspots diminish in number. At solar minimum, the toroidal field is, correspondingly, at minimum strength, sunspots are relatively rare and the poloidal field is at maximum strength. During the next cycle, differential rotation converts magnetic energy back from the poloidal to the toroidal field, with a polarity that is opposite to the previous cycle. The process carries on continuously, and in an idealized, simplified scenario, each 11-year sunspot cycle corresponds to a change in the polarity of the Sun's large-scale magnetic field.
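An idealized toy picture of this exchange (a deliberately simplified illustration, not a real dynamo simulation): treating the poloidal and toroidal fields as two phase-shifted oscillators reproduces an 11-year sunspot period riding on a 22-year magnetic cycle.

```python
# Toy picture: poloidal and toroidal fields as phase-shifted oscillators
# completing one full magnetic cycle every ~22 years.
import numpy as np

years = np.linspace(0.0, 44.0, 1000)
omega = 2 * np.pi / 22.0              # one magnetic cycle per 22 years

poloidal = np.cos(omega * years)      # strongest at solar minimum
toroidal = np.sin(omega * years)      # strongest at solar maximum

# Sunspot counts track the *magnitude* of the toroidal field, so they
# peak every ~11 years even though the polarity flips only every 22.
sunspot_proxy = np.abs(toroidal)
```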
Solar dynamo models indicate that plasma flux transport processes in the solar interior such as differential rotation, meridional circulation and turbulent pumping play an important role in the recycling of the toroidal and poloidal components of the solar magnetic field. The relative strengths of these flux transport processes also determine the "memory" of the solar cycle, which plays an important role in physics-based predictions of the solar cycle. In particular, stochastically forced non-linear solar dynamo simulations establish that the solar cycle memory is short, lasting about one cycle, implying that accurate predictions are possible only for the next solar cycle and not beyond. This postulate of a short, one-cycle memory in the solar dynamo mechanism was later observationally verified.
Although the tachocline has long been thought to be the key to generating the Sun's large-scale magnetic field, recent research has questioned this assumption. Radio observations of brown dwarfs have indicated that they also maintain large-scale magnetic fields and may display cycles of magnetic activity. The Sun has a radiative core surrounded by a convective envelope, and at the boundary of these two is the tachocline. However, brown dwarfs lack radiative cores and tachoclines. Their structure consists of a solar-like convective envelope that exists from core to surface. Since they lack a tachocline yet still display solar-like magnetic activity, it has been suggested that solar magnetic activity is only generated in the convective envelope.
Speculated influence of the planets
A 2012 paper proposed that the torque exerted by the planets on a non-spherical tachocline layer deep in the Sun may synchronize the solar dynamo. These results were later shown to be an artifact of an incorrectly applied smoothing method that led to aliasing. Additional models incorporating the influence of planetary forces on the Sun have since been proposed. However, solar variability is known to be essentially stochastic and unpredictable beyond one solar cycle, which contradicts the idea of a deterministic planetary influence on the solar dynamo. Modern dynamo models are able to reproduce the solar cycle without any planetary influence.
In 1974 the book The Jupiter Effect suggested that the alignment of the planets would alter the Sun's solar wind and, in turn, Earth's weather, culminating in multiple catastrophes on March 10, 1982. None of the catastrophes occurred. In 2023, a paper by Cionco et al. demonstrated that the suspected tidal effects of Venus and Jupiter on the Sun are insignificant compared to the Sun's whole tidal-generating potential.
| Physical sciences | Solar System | Astronomy |
206586 | https://en.wikipedia.org/wiki/Technological%20convergence | Technological convergence | Technological convergence is the tendency for technologies that were originally unrelated to become more closely integrated and even unified as they develop and advance. For example, watches, telephones, television, computers, and social media platforms began as separate and mostly unrelated technologies, but have converged in many ways into an interrelated telecommunication, media, and technology industry.
Definitions
"Convergence is a deep integration of knowledge, tools, and all relevant activities of human activity for a common goal, to allow society to answer new questions to change the respective physical or social ecosystem. Such changes in the respective ecosystem open new trends, pathways, and opportunities in the following divergent phase of the process".
Siddhartha Menon defines convergence as integration and digitalization. Integration, here, is defined as "a process of transformation measured by the degree to which diverse media such as phone, data broadcast and information technology infrastructures are combined into a single seamless all purpose network architecture platform". Digitalization is not so much defined by its physical infrastructure as by the content or the medium. Jan van Dijk suggests that "digitalization means breaking down signals into bytes consisting of ones and zeros".
Convergence is defined by Blackman (1998) as a trend in the evolution of technology services and industry structures. Convergence is later defined more specifically as the coming together of telecommunications, computing and broadcasting into a single digital bit-stream.
Mueller argues against the claim that convergence is really a takeover of all forms of media by one technology: digital computers.
Acronyms
Some acronyms for converging scientific or technological fields include:
NBIC (Nanotechnology, Biotechnology, Information technology and Cognitive science)
GNR (Genetics, Nanotechnology and Robotics)
GRIN (Genetics, Robotics, Information, and Nano processes)
GRAIN (Genetics, Robotics, Artificial Intelligence, and Nanotechnology)
BANG (Bits, Atoms, Neurons, Genes)
Biotechnology
A 2010 citation analysis of patent data shows that biomedical devices are strongly connected to computing and mobile telecommunications, and that molecular bioengineering is strongly connected to several IT fields.
Bioconvergence is the integration of biology with engineering. Possible areas of bioconvergence include:
Materials inspired by biology (such as in electronics)
DNA data storage
Medical technologies:
Omics-based profiling
Miniaturized drug delivery
Tissue reconstruction
Traceable pharmaceutical packaging
More efficient bioreactors
Digital convergence
Digital convergence is the tendency for various digital innovations and media to become more similar with time. It enables the convergence of access devices and content, as well as of industry participants' operations and strategy. In this way, technological convergence creates opportunities, particularly in product development and growth strategies for digital product companies. The same can be said of individual content creators, such as vloggers on YouTube. The convergence in this example is demonstrated in the involvement of the Internet, home devices such as a smart television and camera, the YouTube application, and digital content. In this setup, there are so-called "spokes": the devices that connect to a central hub (such as a PC or smart TV). Here, the Internet serves as the intermediary, particularly through its interactivity tools and social networking, in order to create unique mixes of products and services via horizontal integration.
The above example highlights how digital convergence encompasses three phenomena:
previously stand-alone devices are being connected by networks and software, significantly enhancing functionalities;
previously stand-alone products are being converged onto the same platform, creating hybrid products in the process; and,
companies are crossing traditional boundaries such as hardware and software to provide new products and new sources of competition.
Another example is the convergence of different types of digital contents. According to Harry Strasser, former CTO of Siemens "[digital convergence will substantially impact people's lifestyle and work style]".
Cellphones
The functions of the cellphone change as technology converges. Because of technological advancement, a cellphone functions as more than just a phone: it can also contain an Internet connection, video players, MP3 players, gaming, and a camera. Their areas of use have increased over time, partly substituting for other devices.
A mobile convergence device is one that, if connected to a keyboard, monitor, and mouse, can run applications as a desktop computer would. Convergent operating systems include the Linux operating systems Ubuntu Touch, Plasma Mobile and PureOS.
Convergence can also refer to being able to run the same app across different devices, and to being able to develop apps for different devices (such as smartphones, TVs and desktop computers) at once, with the same code base. This can be done via Linux applications that adapt to the device they are being used on (including native apps designed for such adaptation via frameworks like Kirigami), or by the use of multi-platform frameworks like the Quasar framework that use tools such as Apache Cordova, Electron and Capacitor. These approaches can increase the userbase, the pace and ease of development, and the number of reached platforms, while decreasing development costs.
The Internet
The role of the Internet has changed from its original use as a communication tool to providing easier and faster access to information and services, mainly through a broadband connection. Television, radio and newspapers were the world's media for accessing news and entertainment; now, all three media have converged into one, and people all over the world can read and hear news and other information on the Internet. The convergence of the Internet and conventional TV became popular in the 2010s, through Smart TV, also sometimes referred to as "Connected TV" or "Hybrid TV" (not to be confused with IPTV, Internet TV, or Web TV). "Smart TV" describes the current trend of integrating the Internet and Web 2.0 features into modern television sets and set-top boxes, as well as the technological convergence between computers and these television sets or set-top boxes. These new devices most often also have a much higher focus on online interactive media, Internet TV, over-the-top content, and on-demand streaming media, and less focus on traditional broadcast media than previous generations of television sets and set-top boxes have had.
Social movements
The integration of social movements in cyberspace is one of the potential strategies that social movements can use in the age of media convergence. Because of the neutrality of the Internet and its end-to-end design, the power structure of the Internet was designed to avoid discrimination between applications. Mexico's Zapatistas campaign for land rights was one of the most influential cases in the information age; Manuel Castells defines the Zapatistas as "the first informational guerrilla movement". The Zapatista uprising had been marginalized by the popular press. The Zapatistas were nevertheless able to construct a grassroots, decentralized social movement by using the Internet. The Zapatista Effect, observed by Cleaver, continues to organize social movements on a global scale. A sophisticated webmetric analysis, which maps the links between different websites and seeks to identify important nodal points in a network, demonstrates that the Zapatista cause binds together hundreds of global NGOs. The majority of the social movements organized by the Zapatistas target their campaigns especially against global neoliberalism. A successful social movement needs not only online support but also protest on the street. Papic wrote "Social Media Alone Do Not Instigate Revolutions", which discusses how the use of social media in social movements needs good organization both online and offline.
Media
Media technological convergence is the tendency that as technology changes, different technological systems sometimes evolve toward performing similar tasks. It is the interlinking of computing and other information technologies, media content, media companies and communication networks that have arisen as the result of the evolution and popularization of the Internet as well as the activities, products and services that have emerged in the digital media space.
Generally, media convergence refers to the merging of both old and new media and can be seen as a product, a system or a process. Jenkins states that convergence is "the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behaviour of media audiences who would go almost anywhere in search of the kinds of entertainment experiences they wanted". According to Jenkins, there are five areas of convergence: technological, economic, social or organic, cultural, and global. Media convergence is not just a technological shift or a technological process; it also includes shifts within the industrial, cultural, and social paradigms that encourage the consumer to seek out new information. Convergence, simply put, is how individual consumers interact with others on a social level and use various media platforms to create new experiences and new forms of media and content that connect us socially, not just to other consumers but to the corporate producers of media, in ways that have not been as readily accessible in the past. However, Lugmayr and Dal Zotto argued that media convergence takes place on the technology, content, consumer, business model, and management levels. They argue that media convergence is a matter of evolution and can be described through the triadic phenomena of convergence, divergence, and coexistence. Today's digital media ecosystems coexist: for example, mobile app stores provide vendor lock-in to particular ecosystems; some technology platforms converge under one technology, due, for example, to the use of common communication protocols as in digital TV; and other media diverge, as media content offerings become more and more specialized, providing space for niche media.
Closely linked to the multilevel process of media convergence are also several developments in different areas of the media and communication sector which are also summarized under the term of media deconvergence. Many experts view this as simply being the tip of the iceberg, as all facets of institutional activity and social life such as business, government, art, journalism, health, and education, are increasingly being carried out in these digital media spaces across a growing network of information and communication technology devices. Also included in this topic is the basis of computer networks, wherein many different operating systems are able to communicate via different protocols.
Convergent services, such as VoIP, IPTV, Smart TV, and others, tend to replace the older technologies and thus can disrupt markets. IP-based convergence is inevitable and will result in new services and new demand in the market. Whereas the old services were access-dependent, IP-based services become access-independent, or at least less access-dependent, as the old technologies converge into the publicly owned commons.
Advances in technology bring the ability for technological convergence that Rheingold believes can alter the "social-side effects," in that "the virtual, social and physical world are colliding, merging and coordinating." It was predicted in the late 1980s, around the time that CD-ROM was becoming commonplace, that a digital revolution would take place, and that old media would be pushed to one side by new media. Broadcasting is increasingly being replaced by the Internet, enabling consumers all over the world the freedom to access their preferred media content more easily and at a more available rate than ever before.
However, when the dot-com bubble of the 1990s suddenly popped, that poured cold water over the talk of such a digital revolution. In today's society, the idea of media convergence has once again emerged as a key point of reference as newer as well as established media companies attempt to visualize the future of the entertainment industry. If this revolutionary digital paradigm shift presumed that old media would be increasingly replaced by new media, the convergence paradigm that is currently emerging suggests that new and old media would interact in more complex ways than previously predicted. The paradigm shift that followed the digital revolution assumed that new media was going to change everything. When the dot com market crashed, there was a tendency to imagine that nothing had changed. The real truth lay somewhere in between as there were so many aspects of the current media environment to take into consideration. Many industry leaders are increasingly reverting to media convergence as a way of making sense in an era of disorientating change. In that respect, media convergence in theory is essentially an old concept taking on a new meaning. Media convergence, in reality, is more than just a shift in technology. It alters relationships between industries, technologies, audiences, genres and markets. Media convergence changes the rationality media industries operate in, and the way that media consumers process news and entertainment. Media convergence is essentially a process and not an outcome, so no single black box controls the flow of media. With proliferation of different media channels and increasing portability of new telecommunications and computing technologies, we have entered into an era where media constantly surrounds us.
Media convergence requires that media companies rethink existing assumptions about media from the consumer's point of view, as these affect marketing and programming decisions. Media producers must respond to newly empowered consumers. Conversely, it would seem that hardware is instead diverging whilst media content is converging. Media has developed into brands that can offer content in a number of forms. Two examples of this are Star Wars and The Matrix. Both are films, but are also books, video games, cartoons, and action figures. Branding encourages expansion of one concept, rather than the creation of new ideas. In contrast, hardware has diversified to accommodate media convergence; hardware must be specific to each function. While most scholars argue that the flow of cross-media is accelerating, O'Donnell suggests that, especially between films and video games, the semblance of media convergence is misunderstood by people outside of the media production industry. The media conglomerates continue to sell the same story line in different media. For example, Batman appears in comics, films, anime, and games. However, the data used to create the image of Batman in each medium is created individually by different teams of creators. The same characters and the same visual effects appear repeatedly in different media because the media industry works synergistically to make them as similar as possible. In addition, convergence does not happen when a game is produced for two different consoles; nothing flows between the two versions, because it is faster for the industry to create each game from scratch.
One of the more interesting new media journalism forms is virtual reality. Reuters, a major international news service, has created and staffed a news “island” in the popular online virtual reality environment Second Life. Open to anyone, Second Life has emerged as a compelling 3D virtual reality for millions of citizens around the world who have created avatars (virtual representations of themselves) to populate and live in an altered state where personal flight is a reality, altered egos can flourish, and real money ( were spent during the 24 hours concluding at 10:19 a.m. eastern time January 7, 2008) can be made without ever setting foot into the real world. The Reuters Island in Second Life is a virtual version of the Reuters real-world news service but covering the domain of Second Life for the citizens of Second Life (numbering 11,807,742 residents as of January 5, 2008).
Media convergence in the digital era means the changes that are taking place with older forms of media and media companies. Media convergence has two roles. The first is the technological merging of different media channels – for example, magazines, radio programs, TV shows, and movies are now available on the Internet through laptops, iPads, and smartphones. As discussed in Media Culture (by Campbell), convergence of technology is not new; it has been going on since the late 1920s. An example is RCA, the Radio Corporation of America, which purchased the Victor Talking Machine Company and introduced machines that could receive radio and play recorded music. Next came the TV, and radio lost some of its appeal as people started watching television, which has both talking and music as well as visuals. As technology advances, convergence of media changes to keep up. The second definition of media convergence Campbell discusses is cross-platform consolidation by media companies. This usually involves consolidating various media holdings, such as cable, phone, television (over the air, satellite, cable) and Internet access, under one corporate umbrella. This is not so that the consumer has more media choices; it is for the benefit of the company, to cut down on costs and maximize profits. As stated in the article "Convergence Culture and Media Work" by Mark Deuze, "the convergence of production and consumption of media across companies, channels, genres, and technologies is an expression of the convergence of all aspects of everyday life: work and play, the local and the global, self and social identity."
History
Communication networks were designed to carry different types of information independently. The older media, such as television and radio, are broadcasting networks with passive audiences. Convergence of telecommunication technology permits the manipulation of all forms of information, voice, data, and video. Telecommunication has changed from a world of scarcity to one of seemingly limitless capacity. Consequently, the possibility of audience interactivity morphs the passive audience into an engaged audience. The historical roots of convergence can be traced back to the emergence of mobile telephony and the Internet, although the term properly applies only from the point in marketing history when fixed and mobile telephony began to be offered by operators as joined products. Fixed and mobile operators were, for most of the 1990s, independent companies. Even when the same organization marketed both products, these were sold and serviced independently.
In the 1990s, an implicit and often explicit assumption was that new media was going to replace the old media and the Internet was going to replace broadcasting. In Nicholas Negroponte's Being Digital, Negroponte predicts the collapse of broadcast networks in favor of an era of narrow-casting. He also suggests that no government regulation can shatter the media conglomerates. "The monolithic empires of mass media are dissolving into an array of cottage industries... Media barons of today will be grasping to hold onto their centralized empires tomorrow.... The combined forces of technology and human nature will ultimately take a stronger hand in plurality than any laws Congress can invent." The new media companies claimed that the old media would be absorbed fully and completely into the orbit of the emerging technologies. George Gilder dismisses such claims saying, "The computer industry is converging with the television industry in the same sense that the automobile converged with the horse, the TV converged with the nickelodeon, the word-processing program converged with the typewriter, the CAD program converged with the drafting board, and digital desktop publishing converged with the Linotype machine and the letterpress." Gilder believes that computers had come not to transform mass culture but to destroy it.
Media companies put media convergence back on their agenda after the dot-com bubble burst. In 1994, Knight Ridder promulgated the concept of portable magazines, newspapers, and books: "Within news corporations it became increasingly obvious that an editorial model based on mere replication in the Internet of contents that had previously been written for print newspapers, radio, or television was no longer sufficient." The rise of digital communication in the late 20th century made it possible for media organizations (or individuals) to deliver text, audio, and video material over the same wired, wireless, or fiber-optic connections. At the same time, it inspired some media organizations to explore multimedia delivery of information. This digital convergence of news media, in particular, was called "Mediamorphosis" by researcher Roger Fidler in his 1997 book of that name. Today, we are surrounded by a multi-level convergent media world where all modes of communication and information are continually reforming to adapt to the enduring demands of technologies, "changing the way we create, consume, learn and interact with each other".
Convergence culture
Henry Jenkins determines convergence culture to be the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behavior of media audiences who will go almost anywhere in search of the kinds of entertainment experiences they want. The convergence culture is an important factor in transmedia storytelling. Convergence culture introduces new stories and arguments from one form of media into many. Transmedia storytelling is defined by Jenkins as a process "where integral elements of a fiction get dispersed systematically across multiple delivery channels for the purpose of creating a unified and coordinated entertainment experience. Ideally, each medium makes its own unique contribution to the unfolding of the story". For instance, The Matrix starts as a film, which is followed by two other instalments, but in a convergence culture it is not constrained to that form. It becomes a story not only told in the movies but in animated shorts, video games and comic books, three different media platforms. Online, a wiki is created to keep track of the story's expanding canon. Fan films, discussion forums, and social media pages also form, expanding The Matrix to different online platforms. Convergence culture took what started as a film and expanded it across almost every type of media. Another example is "Bert is Evil": images juxtaposing the Sesame Street character Bert with Bin Laden appeared in CNN coverage of anti-American protests following September 11. The association of Bert and Bin Laden links back to Ignacio's Photoshop project, created for fun.
Convergence culture is a part of participatory culture. Because average people can now access their interests on many types of media they can also have more of a say. Fans and consumers are able to participate in the creation and circulation of new content. Some companies take advantage of this and search for feedback from their customers through social media and sharing sites such as YouTube. Besides marketing and entertainment, convergence culture has also affected the way we interact with news and information. We can access news on multiple levels of media from the radio, TV, newspapers, and the Internet. The Internet allows more people to be able to report the news through independent broadcasts and therefore allows a multitude of perspectives to be put forward and accessed by people in many different areas. Convergence allows news to be gathered on a much larger scale. For instance, photographs were taken of torture at Abu Ghraib. These photos were shared and eventually posted on the Internet. This led to the breaking of a news story in newspapers, on TV, and the Internet.
Media scholar Henry Jenkins has described the media convergence with participatory culture as:
Appliances
Some media observers expect that we will eventually access all media content through one device, or "black box". As such, media business practice has been to identify the next "black box" to invest in and provide media for. This has caused a number of problems. Firstly, as "black boxes" are invented and abandoned, the individual is left with numerous devices that can perform the same task, rather than one dedicated to each task. For example, one may own both a computer and a video games console, and subsequently own two DVD players. This is contrary to the streamlined goal of the "black box" theory, and instead creates clutter. Secondly, technological convergence tends to be experimental in nature. This has led to consumers owning technologies with additional functions that are harder, if not impractical, to use rather than one specific device. A combined microwave and television, for instance, might be watched only for the duration of the meal's cooking time, or whilst in the kitchen, but the microwave would not be used as the household TV. These examples show that in many cases technological convergence is unnecessary or unneeded.
Furthermore, although consumers primarily use a specialized media device for their needs, other "black box" devices that perform the same task can be used to suit their current situation. As a 2002 Cheskin Research report explained: "...Your email needs and expectations are different whether you're at home, work, school, commuting, the airport, etc., and these different devices are designed to suit your needs for accessing content depending on where you are- your situated context." Despite the creation of "black boxes", intended to perform all tasks, the trend is to use devices that can suit the consumer's physical position. Due to the variable utility of portable technology, convergence occurs in high-end mobile devices. They incorporate multimedia services, GPS, Internet access, and mobile telephony into a single device, heralding the rise of what has been termed the "smartphone," a device designed to remove the need to carry multiple devices. Convergence of media occurs when multiple products come together to form one product with the advantages of all of them, also known as the black box. This idea of one technology, concocted by Henry Jenkins, has become known more as a fallacy because of the inability to actually put all technical pieces into one. For example, while people can have email and Internet on their phone, they still want full computers with Internet and email in addition. Mobile phones are a good example, in that they incorporate digital cameras, MP3 players, voice recorders, and other devices. For the consumer, it means more features in less space; for media conglomerates it means remaining competitive.
However, convergence has a downside. Particularly in initial forms, converged devices are frequently less functional and reliable than their component parts (e.g., a mobile phone's web browser may not render some web pages correctly, due to not supporting certain rendering methods, such as the iPhone browser not supporting Flash content). As the number of functions in a single device escalates, the ability of that device to serve its original function decreases. As Rheingold asserts, technological convergence holds immense potential for the "improvement of life and liberty in some ways and (could) degrade it in others". He believes the same technology has the potential to be "used as both a weapon of social control and a means of resistance". Since technology has evolved in the past ten years or so, companies are beginning to converge technologies to create demand for new products. This includes phone companies integrating 3G and 4G on their phones. In the mid 20th century, television converged the technologies of movies and radio, and television is now being converged with the mobile phone industry and the Internet. Phone calls are also being made with the use of personal computers. Converging technologies combine multiple technologies into one. Newer mobile phones feature cameras, and can hold images, videos, music, and other media. Manufacturers now integrate more advanced features, such as video recording, GPS receivers, data storage, and security mechanisms into the traditional cellphone.
Telecommunications
Telecommunications convergence or network convergence describes emerging telecommunications technologies and network architecture used to migrate multiple communications services into a single network. Specifically, this involves the converging of previously distinct media such as telephony and data communications into common interfaces on single devices; most smartphones, for example, can both make phone calls and search the web.
Messaging
Combination services include those that integrate SMS with voice, such as voice SMS. Providers include Bubble Motion, Jott, Kirusa, and SpinVox. Several operators have launched services that combine SMS with mobile instant messaging (MIM) and presence. Text-to-landline services also exist, where subscribers can send text messages to any landline phone and are charged at standard rates. The text messages are converted into spoken language. This service has been popular in America, where fixed and mobile numbers are similar. Inbound SMS has been converging to enable reception of different formats (SMS, voice, MMS, etc.). In April 2008, O2 UK launched voice-enabled shortcodes, adding voice functionality to the five-digit codes already used for SMS. This type of convergence is helpful for media companies, broadcasters, enterprises, call centres and help desks who need to develop a consistent contact strategy with the consumer. Because SMS is very popular today, it became relevant to include text messaging as a contact possibility for consumers. To avoid having multiple numbers (one for voice calls, another one for SMS), a simple way is to merge the reception of both formats under one number. This means that a consumer can text or call one number and be sure that the message will be received.
Mobile
"Mobile service provisions" refers not only to the ability to purchase mobile phone services, but the ability to wirelessly access everything: voice, Internet, audio, and video. Advancements in WiMAX and other leading edge technologies provide the ability to transfer information over a wireless link at a variety of speeds, distances, and non-line-of-sight conditions.
Multi-play
Multi-play is a marketing term describing the provision of different telecommunication services, such as Internet access, television, telephone, and mobile phone service, by organizations that traditionally only offered one or two of these services. Multi-play is a catch-all phrase; usually, the terms triple play (voice, video and data) or quadruple play (voice, video, data and wireless) are used to describe a more specific meaning. A dual play service is a marketing term for the provisioning of the two services: it can be high-speed Internet (digital subscriber line) and telephone service over a single broadband connection in the case of phone companies, or high-speed Internet (cable modem) and TV service over a single broadband connection in the case of cable TV companies. The convergence can also concern the underlying communication infrastructure. An example of this is a triple play service, where communication services are packaged allowing consumers to purchase TV, Internet, and telephony in one subscription. The broadband cable market is transforming as pay-TV providers move aggressively into what was once considered the telco space. Meanwhile, customer expectations have risen as consumer and business customers alike seek rich content, multi-use devices, networked products and converged services including on-demand video, digital TV, high speed Internet, VoIP, and wireless applications. It is uncharted territory for most broadband companies.
A quadruple play service combines the triple play service of broadband Internet access, television, and telephone with wireless service provisions. This service set is also sometimes humorously referred to as "The Fantastic Four" or "Grand Slam". A fundamental aspect of the quadruple play is not only the long-awaited broadband convergence but also the players involved. Many of them, from the largest global service providers to whom we connect today via wires and cables, down to the smallest of startup service providers, are interested. The opportunities are attractive: the big three telecom services – telephony, cable television, and wireless – could combine their industries. In the UK, the merger of NTL:Telewest and Virgin Mobile resulted in a company offering a quadruple play of cable television, broadband Internet, home telephone, and mobile telephone services.
Home network
Early in the 21st century, home LAN convergence so rapidly integrated home routers, wireless access points, and DSL modems that users were hard put to identify the resulting box they used to connect their computers to their Internet service. A general term for such a combined device is a residential gateway.
VoIP
The U.S. Federal Communications Commission (FCC) has not been able to decide how to regulate VoIP (Internet telephony) because the convergent technology is still growing and changing. In addition, the FCC is tentative about setting regulation on VoIP in order to promote competition in the telecommunication industry. There is no clear line between telecommunication services and information services because of the growth of the new convergent media. Historically, telecommunication is subject to state regulation. The state of California, for instance, is concerned that the increasing popularity of Internet telephony will eventually obliterate funding for the Universal Service Fund. Some states attempt to assert their traditional role of common carrier oversight over this new technology. Meisel and Needles (2005) suggest that decisions by the FCC, federal courts, and state regulatory bodies on access line charges will directly impact the speed at which the Internet telephony market grows. On one hand, the FCC is hesitant to regulate convergent technology because VoIP differs in its features from old telecommunications, and no fixed model for legislation yet exists. On the other hand, regulation is needed because services over the Internet might quickly replace telecommunication services, which would affect the entire economy.
Convergence has also raised several debates about classification of certain telecommunications services. As the lines between data transmission and voice and media transmission are eroded, regulators are faced with the task of how best to classify the converging segments of the telecommunication sector. Traditionally, telecommunication regulation has focused on the operation of physical infrastructure, networks, and access to networks. Content is not regulated in telecommunication because it is considered private. In contrast, film and television are regulated by content: the rating system regulates their distribution to audiences, and self-regulation is promoted by the industry. Bogle senior persuaded the entire industry to pay a 0.1 percent levy on all advertising, and the money was used to give authority to the Advertising Standards Authority, which keeps the government from setting legislation for the media industry.
The premise of regulating the new media, i.e., two-way communications, largely concerns the change from old media to new media. Each medium has different features and characteristics. First, the Internet, the new medium, manipulates all forms of information – voice, data and video. Second, the old regulation of old media, such as radio and television, was premised on the scarcity of channels, whereas the Internet has limitless capacity, due to its end-to-end design. Third, two-way communication encourages interactivity between content producers and audiences.
"...Fundamental basis for classification, therefore, is to consider the need for regulation in terms of either market failure or in the public interests"(Blackman). The Electronic Frontier Foundation, founded in 1990, is a non profit organization that defends free speech, privacy, innovation, and consumer rights. The Digital Millennium Copyright Act regulates and protect the digital content producers and consumers.
Trends
Network neutrality is an issue. Wu and Lessig set out two reasons for network neutrality: firstly, by removing the risk of future discrimination, it incentivizes people to invest more in the development of broadband applications; secondly, it enables fair competition between applications without network bias. The two reasons also coincide with the FCC's interest in stimulating investment and enhancing innovation in broadband technology and services. Despite regulatory efforts at deregulation, privatization, and liberalization, the infrastructure barrier has been a negative factor in achieving effective competition. Kim et al. argue that IP dissociates the telephony application from the infrastructure and that Internet telephony is at the forefront of such dissociation. The neutrality of the network is very important for fair competition. As the former FCC Chairman Michael Copps put it: "From its inception, the Internet was designed, as those present during the course of its creating will tell you, to prevent government or a corporation or anyone else from controlling it. It was designed to defeat discrimination against users, ideas and technologies". For these reasons, Shin concludes that regulators should make sure to regulate applications and infrastructure separately.
The layered model was first proposed by Solum and Chung, Sicker, and Nakahata. Sicker, Warbach and Witt have supported using a layered model to regulate the telecommunications industry with the emergence of convergence services. Researchers differ in the details of their layered approaches, but they all agree that the emergence of convergent technology will create challenges and ambiguities for regulation. The key point of the layered model is that it reflects the reality of network architecture and current business models. The layered model, illustrated in the sketch after this list, consists of:
Access layer – where the physical infrastructure resides: copper wires, cable, or fiber optic.
Transport layer – the provider of service.
Application layer – the interface between the data and the users.
Content layer – the layer which users see.
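A toy representation of the four layers described above (our illustration; the example technologies in each layer are hypothetical placeholders, not part of the original model):

```python
# The four-layer regulatory model as a simple mapping from each layer
# to example technologies; the names here are illustrative, not normative.
LAYERED_MODEL = {
    "access":      ["copper wire", "coaxial cable", "fiber optic"],  # physical infrastructure
    "transport":   ["ISP backbone", "mobile carrier network"],       # service provision
    "application": ["VoIP client", "web browser"],                   # data-user interface
    "content":     ["web pages", "TV programs", "voice calls"],      # what users see
}

for layer, examples in LAYERED_MODEL.items():
    print(f"{layer:>12}: {', '.join(examples)}")
```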
Shin combines the layered model and network neutrality as the principle to regulate the convergent media industry.
Robotics
Medical applications of robotics have become increasingly prominent in the robotics literature.
The use of robots in service sectors is much less than the use of robots in manufacturing.
| Technology | General | null |
206897 | https://en.wikipedia.org/wiki/Photographic%20plate | Photographic plate | Photographic plates preceded photographic film as a capture medium in photography. The light-sensitive emulsion of silver salts was coated on a glass plate, typically thinner than common window glass. They were heavily used in the late 19th century. With the spread of photographic film, the use of plates declined through the 20th century. They were still used in some communities, particularly in science and medicine, until the late 20th century.
History
Glass plates were far superior to film for research-quality imaging because they were stable and less likely to bend or distort, especially in large-format frames for wide-field imaging. Early plates used the wet collodion process. The wet plate process was replaced late in the 19th century by gelatin dry plates.
A view camera nicknamed "The Mammoth" weighing was built by George R. Lawrence in 1899, specifically to photograph "The Alton Limited" train owned by the Chicago & Alton Railway. It took photographs on glass plates measuring × .
Glass plate photographic material largely faded from the consumer market in the early years of the 20th century, as more convenient and less fragile films were increasingly adopted. However, photographic plates were reportedly still being used by one photography business in London until the 1970s, and by one in Bradford called the Belle Vue Studio that closed in 1975. They were in wide use by the professional astronomical community as late as the 1990s. Workshops on the use of glass plate photography as an alternative medium or for artistic use are still being conducted.
Scientific uses
Astronomy
Many famous astronomical surveys were taken using photographic plates, including the first Palomar Observatory Sky Survey (POSS) of the 1950s, the follow-up POSS-II survey of the 1990s, and the UK Schmidt Telescope survey of southern declinations. A number of observatories, including Harvard College and Sonneberg, maintain large archives of photographic plates, which are used primarily for historical research on variable stars.
Many solar system objects were discovered by using photographic plates, superseding earlier visual methods. Discovery of minor planets using photographic plates was pioneered by Max Wolf beginning with his discovery of 323 Brucia in 1891. The first natural satellite discovered using photographic plates was Phoebe in 1898. Pluto was discovered using photographic plates in a blink comparator; its moon Charon was discovered 48 years later in 1978 by U.S. Naval Observatory astronomer James W. Christy by carefully examining a bulge in Pluto's image on a photographic plate.
Glass-backed plates, rather than film, were generally used in astronomy because they do not shrink or deform noticeably in the development process or under environmental changes. Several important applications of astrophotography, including astronomical spectroscopy and astrometry, continued using plates until digital imaging improved to the point where it could outmatch photographic results. Kodak and other manufacturers discontinued production of most kinds of plates as the market for them dwindled between 1980 and 2000, terminating most remaining astronomical use, including for sky surveys.
Physics
Photographic plates were also an important tool in early high-energy physics, as they are blackened by ionizing radiation. Ernest Rutherford was one of the first to study the absorption, in various materials, of the rays produced in radioactive decay, by using photographic plates to measure the intensity of the rays. The development of particle-detection-optimised nuclear emulsions in the 1930s and 1940s, first in physics laboratories, then by commercial manufacturers, enabled the discovery and measurement of both the pi-meson and the K-meson, in 1947 and 1949 respectively, initiating a flood of new particle discoveries in the second half of the 20th century.
Electron microscopy
Photographic emulsions were originally coated on thin glass plates for imaging with electron microscopes, which provided a more rigid, stable and flatter plane compared to plastic films. Beginning in the 1970s, high-contrast, fine grain emulsions coated on thicker plastic films manufactured by Kodak, Ilford and DuPont replaced glass plates. These films have largely been replaced by digital imaging technologies.
Medical imaging
The sensitivity of certain types of photographic plates to ionizing radiation (usually X-rays) is also useful in medical imaging and material science applications, although they have been largely replaced with reusable and computer readable image plate detectors and other types of X-ray detectors.
Decline
The earliest flexible films of the late 1880s were sold for amateur use in medium-format cameras. The plastic was not of very high optical quality and tended to curl and otherwise not provide as desirably flat a support surface as a sheet of glass. Initially, a transparent plastic base was more expensive to produce than glass. Quality was eventually improved, manufacturing costs came down, and most amateurs gladly abandoned plates for films. After large-format high quality cut films for professional photographers were introduced in the late 1910s, the use of plates for ordinary photography of any kind became increasingly rare.
The persistent use of plates in astronomical and other scientific applications started to decline in the early 1980s as they were gradually replaced by charge-coupled devices (CCDs), which also provide outstanding dimensional stability. CCD cameras have several advantages over glass plates, including high efficiency, linear light response, and simplified image acquisition and processing. However, even the largest CCD formats (e.g., 8192 × 8192 pixels) still do not have the detecting area and resolution of most photographic plates, which has forced modern survey cameras to use large CCD arrays to obtain the same coverage.
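A rough worked comparison of information capacity; the plate size and grain scale below are assumptions chosen to represent a classic 14-inch survey plate, not measured values:

```python
# Resolution elements on a survey plate vs pixels on a large monolithic CCD.
plate_side_mm = 356    # ~14-inch Schmidt survey plate (assumed)
grain_um = 10          # effective emulsion resolution element (assumed)

plate_elements = (plate_side_mm * 1000 / grain_um) ** 2   # ~1.3e9
ccd_pixels = 8192 ** 2                                    # ~6.7e7, format cited above

print(f"plate ~ {plate_elements:.1e} elements")
print(f"CCD   ~ {ccd_pixels:.1e} pixels")
print(f"ratio ~ {plate_elements / ccd_pixels:.0f}x in favor of the plate")
```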
The manufacture of photographic plates has been discontinued by Kodak, Agfa and other widely known traditional makers. Eastern European sources have subsequently catered to the minimal remaining demand, practically all of it for use in holography, which requires a recording medium with a large surface area and a submicroscopic level of resolution that currently (2014) available electronic image sensors cannot provide. In the realm of traditional photography, a small number of historical process enthusiasts make their own wet or dry plates from raw materials and use them in vintage large-format cameras.
Preservation
Several institutions have established archives to preserve photographic plates and prevent their valuable historical information from being lost. The emulsion on the plate can deteriorate. In addition, the glass plate medium is fragile and prone to cracking if not stored correctly.
Historical archives
The United States Library of Congress has a large collection of both wet and dry plate photographic negatives, dating from 1855 through 1900, over 7,500 of which have been digitized from the period 1861 to 1865.
The George Eastman Museum holds an extensive collection of photographic plates. In 1955, wet plate negatives measuring × were reported to have been discovered in 1951 as part of the Holtermann Collection; these were purportedly the largest glass negatives discovered up to that time. The images were taken in 1875 by Charles Bayliss and formed the "Shore Tower" panorama of Sydney Harbour. Albumen contact prints made from these negatives are in the holdings of the Holtermann Collection, and the negatives are listed among the Collection's current holdings.
Scientific archives
Preservation of photographic plates is a particular need in astronomy, where changes often occur slowly and the plates represent irreplaceable records of the sky and astronomical objects extending back over 100 years. Digitization of astronomical plates gives free and easy access to these unique data and is one of the most common approaches to preserving them. This approach was applied at the Baldone Astrophysical Observatory, where about 22,000 glass and film plates from the Schmidt Telescope were scanned and catalogued. Another example of an astronomical plate archive is the Astronomical Photographic Data Archive (APDA) at the Pisgah Astronomical Research Institute (PARI). APDA was created in response to recommendations of a group of international scientists who gathered in 2007 to discuss how best to preserve astronomical plates (see the Osborn and Robbins reference listed under | Technology | Photography | null
206919 | https://en.wikipedia.org/wiki/Seyfert%20galaxy | Seyfert galaxy | Seyfert galaxies are one of the two largest groups of active galaxies, along with quasar host galaxies. They have quasar-like nuclei (very luminous sources of electromagnetic radiation that are outside of our own galaxy) with very high surface brightnesses whose spectra reveal strong, high-ionisation emission lines, but unlike quasars, their host galaxies are clearly detectable.
Seyfert galaxies account for about 10% of all galaxies and are some of the most intensely studied objects in astronomy, as they are thought to be powered by the same phenomena that occur in quasars, although they are closer and less luminous than quasars. These galaxies have supermassive black holes at their centers which are surrounded by accretion discs of in-falling material. The accretion discs are believed to be the source of the observed ultraviolet radiation. Ultraviolet emission and absorption lines provide the best diagnostics for the composition of the surrounding material.
Seen in visible light, most Seyfert galaxies look like normal spiral galaxies, but when studied under other wavelengths, it becomes clear that the luminosity of their cores is of comparable intensity to the luminosity of whole galaxies the size of the Milky Way.
Seyfert galaxies are named after Carl Seyfert, who first described this class in 1943.
Discovery
Seyfert galaxies were first detected in 1908 by Edward A. Fath and Vesto Slipher, who were using the Lick Observatory to look at the spectra of astronomical objects that were thought to be "spiral nebulae". They noticed that NGC 1068 showed six bright emission lines, which was considered unusual as most objects observed showed an absorption spectrum corresponding to stars.
In 1926, Edwin Hubble looked at the emission lines of NGC 1068 and two other such "nebulae" and classified them as extragalactic objects. In 1943, Carl Keenan Seyfert discovered more galaxies similar to NGC 1068 and reported that these galaxies have very bright stellar-like nuclei that produce broad emission lines. In 1944 Cygnus A was detected at 160 MHz, and detection was confirmed in 1948 when it was established that it was a discrete source. Its double radio structure became apparent with the use of interferometry. In the next few years, other radio sources such as supernova remnants were discovered. By the end of the 1950s, more important characteristics of Seyfert galaxies were discovered, including the fact that their nuclei are extremely compact (< 100 pc, i.e. "unresolved"), have high mass (≈10^(9±1) solar masses), and the duration of peak nuclear emissions is relatively short (> 10^8 years).
In the 1960s and 1970s, research to further understand the properties of Seyfert galaxies was carried out. A few direct measurements of the actual sizes of Seyfert nuclei were taken, and it was established that the emission lines in NGC 1068 were produced in a region over a thousand light years in diameter. Controversy existed over whether Seyfert redshifts were of cosmological origin. Efforts to confirm the distances to Seyfert galaxies, and hence their ages, were hampered because their nuclei vary in brightness over a time scale of a few years; therefore arguments involving distance to such galaxies and the constant speed of light cannot always be used to determine their age. In the same period, research was undertaken to survey, identify and catalogue galaxies, including Seyferts. Beginning in 1967, Benjamin Markarian published lists containing a few hundred galaxies distinguished by their very strong ultraviolet emission, and the positions of some of them were improved in 1973 by other researchers. At the time, it was believed that 1% of spiral galaxies are Seyferts. By 1977, it was found that very few Seyfert galaxies are ellipticals, most of them being spiral or barred spiral galaxies. During the same period, efforts were made to gather spectrophotometric data for Seyfert galaxies. It became obvious that not all spectra from Seyfert galaxies look the same, so they were subclassified according to the characteristics of their emission spectra. A simple division into types I and II was devised, with the classes depending on the relative width of their emission lines. It was later noticed that some Seyfert nuclei show intermediate properties, resulting in their being further subclassified into types 1.2, 1.5, 1.8 and 1.9 (see Classification below). Early surveys for Seyfert galaxies were biased toward counting only the brightest representatives of this group. More recent surveys that count galaxies with low-luminosity and obscured Seyfert nuclei suggest that the Seyfert phenomenon is actually quite common, occurring in 16% ± 5% of galaxies; indeed, several dozen galaxies exhibiting the Seyfert phenomenon exist in the close vicinity (≈27 Mpc) of our own galaxy. Seyfert galaxies form a substantial fraction of the galaxies appearing in the Markarian Catalogue, a list of galaxies displaying an ultraviolet excess in their nuclei.
Characteristics
An active galactic nucleus (AGN) is a compact region at the center of a galaxy that has a higher than normal luminosity over portions of the electromagnetic spectrum. A galaxy having an active nucleus is called an active galaxy. Active galactic nuclei are the most luminous sources of electromagnetic radiation in the Universe, and their evolution puts constraints on cosmological models. Depending on the type, their luminosity varies over a timescale from a few hours to a few years. The two largest subclasses of active galaxies are quasars and Seyfert galaxies, the main difference between the two being the amount of radiation they emit. In a typical Seyfert galaxy, the nuclear source emits at visible wavelengths an amount of radiation comparable to that of the whole galaxy's constituent stars, while in a quasar, the nuclear source is brighter than the constituent stars by at least a factor of 100. Seyfert galaxies have extremely bright nuclei, with luminosities ranging between 10^8 and 10^11 solar luminosities. Only about 5% of them are radio bright; their emissions are moderate in gamma rays and bright in X-rays. Their visible and infrared spectra show very bright emission lines of hydrogen, helium, nitrogen, and oxygen. These emission lines exhibit strong Doppler broadening, which implies velocities from , and are believed to originate near an accretion disc surrounding the central black hole.
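As a rough illustration of how line widths translate into gas velocities (the standard Doppler relation from textbooks, not a formula quoted by this article), the velocity spread follows from the fractional broadening of a line:

    \frac{\Delta\lambda}{\lambda_0} \approx \frac{v}{c}

For example, a broad H-alpha line ($\lambda_0 = 656.3$ nm) widened by an assumed $\Delta\lambda = 10$ nm would imply $v \approx c \times 10/656.3 \approx 4{,}600$ km/s.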
Eddington luminosity
A lower limit to the mass of the central black hole can be calculated using the Eddington luminosity. This limit arises because light exhibits radiation pressure. Assume that a black hole is surrounded by a disc of luminous gas. Both the attractive gravitational force acting on electron-ion pairs in the disc and the repulsive force exerted by radiation pressure follow an inverse-square law. If the gravitational force exerted by the black hole is less than the repulsive force due to radiation pressure, the disc will be blown away by radiation pressure.
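A minimal worked version of this argument, using standard textbook physics rather than anything quoted by the article: equating the gravitational force on an electron-proton pair with the radiation force on its electron (both falling off as $1/r^2$) gives the Eddington luminosity,

    \frac{G M m_p}{r^2} = \frac{\sigma_T L}{4\pi r^2 c}
    \quad\Rightarrow\quad
    L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T}
    \approx 3.2 \times 10^{4} \left(\frac{M}{M_\odot}\right) L_\odot ,

where $m_p$ is the proton mass and $\sigma_T$ the Thomson cross-section. Requiring $L \le L_{\mathrm{Edd}}$ for an assumed nuclear luminosity of $10^{11} L_\odot$ then gives $M \gtrsim 3 \times 10^{6} M_\odot$.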
Emissions
The emission lines seen on the spectrum of a Seyfert galaxy may come from the surface of the accretion disc itself, or may come from clouds of gas illuminated by the central engine in an ionization cone. The exact geometry of the emitting region is difficult to determine due to poor resolution of the galactic center. However, each part of the accretion disc has a different velocity relative to our line of sight, and the faster the gas is rotating around the black hole, the broader the emission line will be. Similarly, an illuminated disc wind also has a position-dependent velocity.
The narrow lines are believed to originate from the outer part of the active galactic nucleus, where velocities are lower, while the broad lines originate closer to the black hole. This is confirmed by the fact that the narrow lines do not vary detectably, which implies that the emitting region is large, contrary to the broad lines which can vary on relatively short timescales. Reverberation mapping is a technique which uses this variability to try to determine the location and morphology of the emitting region. This technique measures the structure and kinematics of the broad line emitting region by observing the changes in the emitted lines as a response to changes in the continuum. The use of reverberation mapping requires the assumption that the continuum originates in a single central source. For 35 AGN, reverberation mapping has been used to calculate the mass of the central black holes and the size of the broad line regions.
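A sketch of the arithmetic behind this technique (the standard reverberation-mapping relations; the numbers plugged in below are illustrative assumptions): the radius of the broad-line region follows from the measured time delay, and combining it with the line width gives a virial mass estimate,

    R_{\mathrm{BLR}} \approx c\,\tau ,
    \qquad
    M_{\mathrm{BH}} \approx f\,\frac{v^{2} R_{\mathrm{BLR}}}{G} ,

where $\tau$ is the delay between continuum and line variations, $v$ the velocity width of the broad lines, and $f$ a geometry-dependent factor of order unity. For an assumed $\tau = 20$ light-days and $v = 5{,}000$ km/s with $f \approx 1$, this yields $M_{\mathrm{BH}} \approx 10^{8} M_\odot$.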
In the few radio-loud Seyfert galaxies that have been observed, the radio emission is believed to represent synchrotron emission from the jet. The infrared emission is due to radiation in other bands being reprocessed by dust near the nucleus. The highest energy photons are believed to be created by inverse Compton scattering by a high-temperature corona near the black hole.
Classification
Seyferts were first classified as Type I or II, depending on the emission lines shown by their spectra. The spectra of Type I Seyfert galaxies show broad lines that include both allowed lines, like H I, He I or He II and narrower forbidden lines, like O III. They show some narrower allowed lines as well, but even these narrow lines are much broader than the lines shown by normal galaxies. However, the spectra of Type II Seyfert galaxies show only narrow lines, both permitted and forbidden. Forbidden lines are spectral lines that occur due to electron transitions not normally allowed by the selection rules of quantum mechanics, but that still have a small probability of spontaneously occurring. The term "forbidden" is slightly misleading, as the electron transitions causing them are not forbidden but highly improbable.
In some cases, the spectra show both broad and narrow permitted lines, which is why such galaxies are classified as an intermediate type between Type I and Type II, such as Type 1.5 Seyferts. The spectra of some of these galaxies have changed from Type 1.5 to Type II in a matter of a few years. However, the characteristic broad Hα emission line has rarely, if ever, disappeared. The origin of the differences between Type I and Type II Seyfert galaxies is not yet known. There are a few cases where galaxies have been identified as Type II only because the broad components of the spectral lines have been very hard to detect. It is believed by some that all Type II Seyferts are in fact Type I, in which the broad components of the lines are impossible to detect because of the angle at which we view the galaxy. Specifically, in Type I Seyfert galaxies, we observe the central compact source more or less directly, thereby sampling the high-velocity clouds in the broad-line emission region moving around the supermassive black hole thought to be at the center of the galaxy. By contrast, in Type II Seyfert galaxies, the active nuclei are obscured, and only the colder outer regions, located further away from the broad-line emission region, are seen. This theory is known as the "Unification scheme" of Seyfert galaxies. However, it is not yet clear whether this hypothesis can explain all the observed differences between the two types.
Type I Seyfert galaxies
Type I Seyferts are very bright sources of ultraviolet light and X-rays in addition to the visible light coming from their cores. They have two sets of emission lines on their spectra: narrow lines with widths (measured in velocity units) of several hundred km/s, and broad lines with widths up to 10^4 km/s. The broad lines originate above the accretion disc of the supermassive black hole thought to power the galaxy, while the narrow lines occur beyond the broad line region of the accretion disc. Both emissions are caused by heavily ionised gas. The broad line emission arises in a region 0.1–1 parsec across. The radius of the broad line emission region, R_BLR, can be estimated from the time delay corresponding to the time taken by light to travel from the continuum source to the line-emitting gas.
Type II Seyfert galaxies
Type II Seyfert galaxies have the characteristic bright core, as well as appearing bright when viewed at infrared wavelengths. Their spectra contain narrow lines associated with forbidden transitions, and broader lines associated with allowed strong dipole or intercombination transitions. NGC 3147 is considered the best candidate to be a true Type II Seyfert galaxy. In some Type II Seyfert galaxies, analysis with a technique called spectro-polarimetry (spectroscopy of polarised light component) revealed obscured Type I regions. In the case of NGC 1068, nuclear light reflected off a dust cloud was measured, which led scientists to believe in the presence of an obscuring dust torus around a bright continuum and broad emission line nucleus. When the galaxy is viewed from the side, the nucleus is indirectly observed through reflection by gas and dust above and below the torus. This reflection causes the polarisation.
Type 1.2, 1.5, 1.8 and 1.9 Seyfert galaxies
In 1981, Donald Osterbrock introduced the notations Type 1.5, 1.8 and 1.9, where the subclasses are based on the optical appearance of the spectrum, with the numerically larger subclasses having weaker broad-line components relative to the narrow lines. For example, Type 1.9 only shows a broad component in the Hα line, and not in higher order Balmer lines. In Type 1.8, very weak broad lines can be detected in the Hβ lines as well as Hα, even if they are very weak compared to the Hα. In Type 1.5, the strength of the Hα and Hβ lines are comparable.
Other Seyfert-like galaxies
In addition to the Seyfert progression from Type I to Type II (including Types 1.2 to 1.9), there are other types of galaxies that are very similar to Seyferts or that can be considered subclasses of them. Very similar to Seyferts are the low-ionisation narrow-line emission radio galaxies (LINERs), discovered in 1980. These galaxies have strong emission lines from weakly ionised or neutral atoms, while the emission lines from strongly ionised atoms are relatively weak by comparison. LINERs share many traits with low-luminosity Seyferts. In fact, when seen in visible light, the global characteristics of their host galaxies are indistinguishable. Both show a broad-line emission region, but the line-emitting region in LINERs has a lower density than in Seyferts. An example of such a galaxy is M104 in the constellation Virgo, also known as the Sombrero Galaxy. A galaxy that is both a LINER and a Type I Seyfert is NGC 7213, which is relatively close compared to other AGNs. Another very interesting subclass is the narrow-line Type I galaxies (NLSy1), which have been the subject of extensive research in recent years. They have much narrower lines than the broad lines of classic Type I galaxies, steep hard and soft X-ray spectra, and strong Fe II emission. Their properties suggest that NLSy1 galaxies are young AGNs with high accretion rates and a relatively small but growing central black hole mass. There are theories suggesting that NLSy1s are galaxies in an early stage of evolution, and links between them and ultraluminous infrared galaxies or Type II galaxies have been proposed.
Evolution
The majority of active galaxies are very distant and show large Doppler shifts. This suggests that active galaxies occurred in the early Universe and, due to cosmic expansion, are receding from the Milky Way at very high speeds. Quasars are the most distant active galaxies, some of them observed at distances of 12 billion light years. Seyfert galaxies are much closer than quasars. Because light has a finite speed, looking across large distances in the Universe is equivalent to looking back in time. Therefore, the observation of active galactic nuclei at large distances and their scarcity in the nearby Universe suggests that they were much more common in the early Universe, implying that active galactic nuclei could be early stages of galactic evolution. This leads to the question of what the local (modern-day) counterparts of AGNs found at large redshifts would be. It has been proposed that NLSy1s could be the small-redshift counterparts of quasars found at large redshifts. The two have many similar properties, for example high metallicities and a similar pattern of emission lines (strong Fe II, weak O III). Some observations suggest that AGN emission from the nucleus is not spherically symmetric and that the nucleus often shows axial symmetry, with radiation escaping in a conical region. Based on these observations, models have been devised to explain the different classes of AGNs as due to their different orientations with respect to the observational line of sight. Such models are called unified models. Unified models explain the difference between Type I and Type II galaxies as the result of Type II galaxies being surrounded by obscuring tori which prevent telescopes from seeing the broad-line region. Quasars and blazars fit quite easily into this model. The main problem for such a unification scheme is explaining why some AGN are radio loud while others are radio quiet. It has been suggested that these differences may be due to differences in the spin of the central black hole.
Examples
Here are some examples of Seyfert galaxies:
Circinus Galaxy, which has rings of gas ejected from its center
Centaurus A or NGC 5128, apparently the brightest Seyfert galaxy as seen from Earth; a giant elliptical galaxy and also classified as a radio galaxy notable for its relativistic jet spanning more than a million light-years in length
Cygnus A, the first-identified radio galaxy and the brightest radio source in the sky as seen in frequencies above 1 GHz
Messier 51a (NGC 5194), the Whirlpool Galaxy, one of the best-known galaxies in the sky
Messier 64 (NGC 4826), with two counter-rotating disks that are approximately equal in mass
Messier 66 (NGC 3627), a part of the Leo Triplet
Messier 77 (NGC 1068), one of the first Seyfert galaxies classified
Messier 81 (NGC 3031), the second-brightest Seyfert galaxy in the sky after Centaurus A
Messier 88 (NGC 4501), a member of the large Virgo Cluster and one of the brightest Seyfert galaxies in the sky
Messier 106 (NGC 4258), one of the best-known Seyfert galaxies, which has a water vapor megamaser in its nucleus, seen in the 22 GHz line of ortho-H2O
NGC 262, an example of a galaxy with an extended gaseous H I halo
NGC 1097, which has four narrow optical jets coming out from its nucleus
NGC 1275, whose central black hole produces the lowest B-flat note ever recorded
NGC 1365, notable for its central black hole spinning at almost the speed of light
NGC 1566, one of the first Seyfert galaxies classified
NGC 1672, which has a nucleus engulfed by intense starburst regions
NGC 1808, also a starburst galaxy
NGC 3079, which has a giant bubble of hot gas coming out from its center
NGC 3185, member of the Hickson 44 group
NGC 3259, also a strong source of X-rays
NGC 3783, also a strong source of X-rays
NGC 3982, also a starburst galaxy
NGC 4151, which has two supermassive black holes in its center
NGC 4395, an example of a low-surface-brightness galaxy with an intermediate-mass black hole in its center
NGC 4725, one of the closest and brightest Seyfert galaxies to Earth; it has a very long spiraling cloud of gas surrounding its center seen in infrared
NGC 4945, a galaxy relatively close to Centaurus A
NGC 5033, has a Seyfert nucleus displaced from its kinematic center
NGC 5548, an example of a lenticular Seyfert galaxy
NGC 6240, also classified as an ultraluminous infrared galaxy (ULIRG)
NGC 6251, the X-ray-brightest low-excitation radio galaxy in the 3CRR catalog
NGC 6264, a Seyfert II with an associated AGN
NGC 7479, a spiral galaxy with radio arms opening in a direction opposite to the optical arms
NGC 7742, an unbarred spiral galaxy; also known as the Fried Egg Galaxy
IC 2560, a spiral galaxy with a nucleus similar to NGC 1097
SDSS J1430+2303, a Seyfert I, predicted to host a supermassive black hole binary very close to the point of merger
| Physical sciences | Active galactic nucleus | null |
206940 | https://en.wikipedia.org/wiki/Ice%20storm | Ice storm | An ice storm, also known as a glaze event or a silver storm, is a type of winter storm characterized by freezing rain. The U.S. National Weather Service defines an ice storm as a storm which results in the accumulation of at least of ice on exposed surfaces. They are generally not violent storms but instead are commonly perceived as gentle rains occurring at temperatures just below freezing.
Formation
The formation of ice begins with a layer of above-freezing air above a layer of sub-freezing air closer to the surface. Frozen precipitation melts to rain while falling into the warm air layer, and then begins to refreeze in the cold layer below. If the precipitation refreezes while still in the air, it lands on the ground as sleet. Alternatively, the liquid droplets can continue to fall without freezing, passing through the cold air just above the surface. This thin layer of air then cools the rain to a temperature below freezing. However, the drops themselves do not freeze, a phenomenon called supercooling (forming "supercooled drops"). When the supercooled drops strike the ground or anything else below (e.g. power lines, tree branches, aircraft), a layer of ice accumulates as the cold water drips off, forming a slowly thickening film of ice, hence freezing rain.
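The melting-and-refreezing sequence above amounts to a simple decision rule. The toy Python sketch below only illustrates that logic; the function, its inputs, and the reduction of a full temperature sounding to three values are assumptions of this example, not an operational forecasting algorithm.

    # Toy classifier for the two-layer profile described above.
    def precipitation_type(warm_layer_max_c, surface_layer_max_c,
                           refreezes_aloft):
        """Classify wintry precipitation from a simplified profile."""
        if warm_layer_max_c <= 0:
            return "snow"           # never melts on the way down
        if surface_layer_max_c >= 0:
            return "rain"           # no sub-freezing layer near the ground
        if refreezes_aloft:
            return "sleet"          # drops refreeze before landing
        return "freezing rain"      # supercooled drops freeze on contact

    print(precipitation_type(3.0, -2.0, refreezes_aloft=False))
    # -> freezing rain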
While meteorologists can predict when and where an ice storm will occur, some storms still occur with little or no warning. In the United States, most ice storms occur in the northeastern region, but damaging storms have occurred farther south; an ice storm in February 1994 resulted in tremendous ice accumulation as far south as Mississippi, and caused reported damage in nine states.
Effect
The freezing rain from an ice storm covers everything with heavy, smooth glaze ice. In addition to hazardous driving and walking conditions, branches or even whole trees may break from the weight of ice. Falling branches can block roads, tear down power and telephone lines, and cause other damage. Even without falling trees and branches, the weight of the ice itself can easily snap power lines and break and bring down power/utility poles, and even steel-framed electricity pylons. This can leave people without power for anywhere from several days to a month. According to most meteorologists, just of ice accumulation can add about of weight per line span. Damage from ice storms is easily capable of shutting down entire metropolitan areas.
Additionally, the loss of power during ice storms has indirectly caused numerous illnesses and deaths due to unintentional carbon monoxide (CO) poisoning. At lower levels, CO poisoning causes symptoms such as nausea, dizziness, fatigue, and headache, but high levels can cause unconsciousness, heart failure, and death. The relatively high incidence of CO poisoning during ice storms is due to the use of alternative methods of heating and cooking during prolonged power outages, which are common after severe ice storms. Gas generators, charcoal and propane barbecues, and kerosene heaters contribute to CO poisoning when they are operated in confined spaces. CO is produced when appliances burn fuel without enough oxygen present, as can happen in basements and other enclosed indoor locations.
Loss of electricity during ice storms can indirectly lead to hypothermia and result in death. It can also lead to ruptured pipes due to water freezing inside the pipes.
| Physical sciences | Storms | Earth science |
206947 | https://en.wikipedia.org/wiki/H%20II%20region | H II region | An H II region or HII region is a region of interstellar atomic hydrogen that is ionized. It is typically in a molecular cloud of partially ionized gas in which star formation has recently taken place, with a size ranging from one to hundreds of light years, and density from a few to about a million particles per cubic centimetre. The Orion Nebula, now known to be an H II region, was observed in 1610 by Nicolas-Claude Fabri de Peiresc by telescope, the first such object discovered.
The regions may be of any shape because the distribution of the stars and gas inside them is irregular. The short-lived blue stars created in these regions emit copious amounts of ultraviolet light that ionizes the surrounding gas. H II regions, sometimes several hundred light-years across, are often associated with giant molecular clouds. They often appear clumpy and filamentary, sometimes showing intricate shapes such as the Horsehead Nebula. H II regions may give birth to thousands of stars over a period of several million years. In the end, supernova explosions and strong stellar winds from the most massive stars in the resulting star cluster disperse the gases of the H II region, leaving behind the cluster of stars that has formed.
H II regions can be observed at considerable distances in the universe, and the study of extragalactic H II regions is important in determining the distances and chemical composition of galaxies. Spiral and irregular galaxies contain many H II regions, while elliptical galaxies are almost devoid of them. In spiral galaxies, including our Milky Way, H II regions are concentrated in the spiral arms, while in irregular galaxies they are distributed chaotically. Some galaxies contain huge H II regions, which may contain tens of thousands of stars. Examples include the 30 Doradus region in the Large Magellanic Cloud and NGC 604 in the Triangulum Galaxy.
Terminology
The term H II is pronounced "H two" by astronomers. "H" is the chemical symbol for hydrogen, and "II" is the Roman numeral for 2. It is customary in astronomy to use the Roman numeral I for neutral atoms, II for singly ionised (H II is H+ in other sciences), III for doubly ionised (e.g. O III is O^2+), etc. H II, or H+, consists of free protons. An H I region consists of neutral atomic hydrogen, and a molecular cloud of molecular hydrogen, H2. In spoken discussion with non-astronomers there is sometimes confusion between the identical spoken forms of "H II" and "H2".
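The numeral convention is mechanical enough to capture in a few lines of code. The Python helper below is purely illustrative; the function name and its formatting choices are assumptions of this example rather than any established library.

    # Convert astronomers' spectral notation (e.g. "O III") into the
    # corresponding ionic species; the numeral I denotes a neutral atom.
    ROMAN = {"I": 1, "II": 2, "III": 3, "IV": 4, "V": 5}

    def ionic_species(notation):
        element, numeral = notation.split()
        charge = ROMAN[numeral] - 1   # ionisation stage = numeral - 1
        if charge == 0:
            return f"{element} (neutral)"
        return f"{element}{charge if charge > 1 else ''}+"

    print(ionic_species("H II"))   # -> H+
    print(ionic_species("O III"))  # -> O2+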
Observations
A few of the brightest H II regions are visible to the naked eye. However, none seem to have been noticed before the advent of the telescope in the early 17th century. Even Galileo did not notice the Orion Nebula when he first observed the star cluster within it (previously cataloged as a single star, θ Orionis, by Johann Bayer). The French observer Nicolas-Claude Fabri de Peiresc is credited with the discovery of the Orion Nebula in 1610. Since that early observation large numbers of H II regions have been discovered in the Milky Way and other galaxies.
William Herschel observed the Orion Nebula in 1774, and described it later as "an unformed fiery mist, the chaotic material of future suns". In the early days, astronomers distinguished between "diffuse nebulae" (now known to be H II regions), which retained their fuzzy appearance under magnification through a large telescope, and nebulae that could be resolved into stars, now known to be galaxies external to our own.
Confirmation of Herschel's hypothesis of star formation had to wait another hundred years, when William Huggins, together with his wife Mary Huggins, turned his spectroscope on various nebulae. Some, such as the Andromeda Nebula, had spectra quite similar to those of stars, but turned out to be galaxies consisting of hundreds of millions of individual stars. Others looked very different. Rather than a strong continuum with absorption lines superimposed, the Orion Nebula and other similar objects showed only a small number of emission lines. In planetary nebulae, the brightest of these spectral lines was at a wavelength of 500.7 nanometres, which did not correspond to a line of any known chemical element. At first it was hypothesized that the line might be due to an unknown element, which was named nebulium; a similar idea had led to the discovery of helium through analysis of the Sun's spectrum in 1868. However, while helium was isolated on Earth soon after its discovery in the spectrum of the Sun, nebulium was not. In the early 20th century, Henry Norris Russell proposed that rather than being a new element, the line at 500.7 nm was due to a familiar element in unfamiliar conditions.
Interstellar matter, considered dense in an astronomical context, is at high vacuum by laboratory standards. Physicists showed in the 1920s that in gas at extremely low density, electrons can populate excited metastable energy levels in atoms and ions, which at higher densities are rapidly de-excited by collisions. Electron transitions from these levels in doubly ionized oxygen give rise to the 500.7 nm line. These spectral lines, which can only be seen in very low density gases, are called forbidden lines. Spectroscopic observations thus showed that planetary nebulae consisted largely of extremely rarefied ionised oxygen gas (O III).
During the 20th century, observations showed that H II regions often contained hot, bright stars. These stars are many times more massive than the Sun, and are the shortest-lived stars, with total lifetimes of only a few million years (compared to stars like the Sun, which live for several billion years). Therefore, it was surmised that H II regions must be regions in which new stars were forming. Over a period of several million years, a cluster of stars will form in an H II region, before radiation pressure from the hot young stars causes the nebula to disperse.
Origin and lifetime
The precursor to an H II region is a giant molecular cloud (GMC). A GMC is a cold (10–20 K) and dense cloud consisting mostly of molecular hydrogen. GMCs can exist in a stable state for long periods of time, but shock waves due to supernovae, collisions between clouds, and magnetic interactions can trigger its collapse. When this happens, via a process of collapse and fragmentation of the cloud, stars are born (see stellar evolution for a lengthier description).
As stars are born within a GMC, the most massive will reach temperatures hot enough to ionise the surrounding gas. Soon after the formation of an ionising radiation field, energetic photons create an ionisation front, which sweeps through the surrounding gas at supersonic speeds. At greater and greater distances from the ionising star, the ionisation front slows, while the pressure of the newly ionised gas causes the ionised volume to expand. Eventually, the ionisation front slows to subsonic speeds, and is overtaken by the shock front caused by the expansion of the material ejected from the nebula. The H II region has been born.
The lifetime of an H II region is of the order of a few million years. Radiation pressure from the hot young stars will eventually drive most of the gas away. In fact, the whole process tends to be very inefficient, with less than 10 percent of the gas in the H II region forming into stars before the rest is blown off. Contributing to the loss of gas are the supernova explosions of the most massive stars, which will occur after only 1–2 million years.
Destruction of stellar nurseries
Stars form in clumps of cool molecular gas that hide the nascent stars. It is only when the radiation pressure from a star drives away its 'cocoon' that it becomes visible. The hot, blue stars that are powerful enough to ionize significant amounts of hydrogen and form H II regions will do this quickly, and light up the region in which they just formed. The dense regions which contain younger or less massive still-forming stars and which have not yet blown away the material from which they are forming are often seen in silhouette against the rest of the ionised nebula. Bart Bok and E. F. Reilly searched astronomical photographs in the 1940s for "relatively small dark nebulae", following suggestions that stars might be formed from condensations in the interstellar medium; they found several such "approximately circular or oval dark objects of small size", which they referred to as "globules", since referred to as Bok globules. Bok proposed at the December 1946 Harvard Observatory Centennial Symposia that these globules were likely sites of star formation. It was confirmed in 1990 that they were indeed stellar birthplaces. The hot young stars dissipate these globules, as the radiation from the stars powering the H II region drives the material away. In this sense, the stars which generate H II regions act to destroy stellar nurseries. In doing so, however, one last burst of star formation may be triggered, as radiation pressure and mechanical pressure from supernova may act to squeeze globules, thereby enhancing the density within them.
The young stars in H II regions show evidence for containing planetary systems. The Hubble Space Telescope has revealed hundreds of protoplanetary disks (proplyds) in the Orion Nebula. At least half the young stars in the Orion Nebula appear to be surrounded by disks of gas and dust, thought to contain many times as much matter as would be needed to create a planetary system like the Solar System.
Characteristics
Physical properties
H II regions vary greatly in their physical properties. They range in size from so-called ultra-compact (UCHII) regions perhaps only a light-year or less across, to giant H II regions several hundred light-years across. Their size is also known as the Strömgren radius and essentially depends on the intensity of the source of ionising photons and the density of the region. Their densities range from over a million particles per cm^3 in the ultra-compact H II regions to only a few particles per cm^3 in the largest and most extended regions. This implies total masses between perhaps 100 and 10^5 solar masses.
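The dependence on photon output and gas density can be made explicit with the classical Strömgren radius formula (standard textbook material rather than an equation quoted by this article; the numbers in the example are illustrative assumptions):

    R_S = \left(\frac{3\,Q_H}{4\pi\,n^{2}\,\alpha_B}\right)^{1/3} ,

where $Q_H$ is the rate of ionising photons from the star, $n$ the hydrogen number density, and $\alpha_B \approx 2.6 \times 10^{-13}\ \mathrm{cm^{3}\,s^{-1}}$ the recombination coefficient at about 10,000 K. For a single O star with an assumed $Q_H = 10^{49}\ \mathrm{s^{-1}}$ in gas with $n = 10\ \mathrm{cm^{-3}}$, this gives $R_S \approx 4.5 \times 10^{19}$ cm, roughly 15 pc or about 50 light-years.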
There are also "ultra-dense H II" regions (UDHII).
Depending on the size of an H II region, there may be several thousand stars within it. This makes H II regions more complicated than planetary nebulae, which have only one central ionising source. Typically H II regions reach temperatures of 10,000 K. They are mostly ionised gases with weak magnetic fields with strengths of several nanoteslas. Nevertheless, H II regions are almost always associated with cold molecular gas, which originated from the same parent GMC. Magnetic fields are produced by the moving electric charges in the ionised gas, suggesting that H II regions might contain electric fields as well.
A number of H II regions also show signs of being permeated by a plasma with temperatures exceeding 10,000,000 K, sufficiently hot to emit X-rays. X-ray observatories such as Einstein and Chandra have noted diffuse X-ray emissions in a number of star-forming regions, notably the Orion Nebula, Messier 17, and the Carina Nebula. The hot gas is likely supplied by the strong stellar winds from O-type stars, which may be heated by supersonic shock waves in the winds, through collisions between winds from different stars, or through colliding winds channeled by magnetic fields. This plasma will rapidly expand to fill available cavities in the molecular clouds due to the high speed of sound in the gas at this temperature. It will also leak out through holes in the periphery of the H II region, which appears to be happening in Messier 17.
Chemically, H II regions consist of about 90% hydrogen. The strongest hydrogen emission line, the H-alpha line at 656.3 nm, gives H II regions their characteristic red colour. (This emission line comes from excited un-ionized hydrogen.) H-beta is also emitted, but at approximately 1/3 of the intensity of H-alpha. Most of the rest of an H II region consists of helium, with trace amounts of heavier elements. Across the galaxy, it is found that the amount of heavy elements in H II regions decreases with increasing distance from the galactic centre. This is because over the lifetime of the galaxy, star formation rates have been greater in the denser central regions, resulting in greater enrichment of those regions of the interstellar medium with the products of nucleosynthesis.
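The characteristic red H-alpha wavelength follows directly from the Rydberg formula for hydrogen, shown here as a worked check (a standard result, not a derivation quoted by this article):

    \frac{1}{\lambda} = R_H\left(\frac{1}{2^{2}} - \frac{1}{n^{2}}\right),
    \qquad R_H \approx 1.097 \times 10^{7}\ \mathrm{m^{-1}} ,

so for the $n = 3 \to 2$ transition $1/\lambda = 5R_H/36$, giving $\lambda \approx 656.3$ nm, the quoted H-alpha line.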
Numbers and distribution
H II regions are found only in spiral galaxies like the Milky Way and irregular galaxies. They are not seen in elliptical galaxies. In irregular galaxies, they may be dispersed throughout the galaxy, but in spirals they are most abundant within the spiral arms. A large spiral galaxy may contain thousands of H II regions.
The reason H II regions rarely appear in elliptical galaxies is that ellipticals are believed to form through galaxy mergers. In galaxy clusters, such mergers are frequent. When galaxies collide, individual stars almost never collide, but the GMCs and H II regions in the colliding galaxies are severely agitated. Under these conditions, enormous bursts of star formation are triggered, so rapid that most of the gas is converted into stars, rather than the 10% or less typical of normal star formation.
Galaxies undergoing such rapid star formation are known as starburst galaxies. The post-merger elliptical galaxy has a very low gas content, and so H II regions can no longer form. Twenty-first century observations have shown that a very small number of H II regions exist outside galaxies altogether. These intergalactic H II regions may be the remnants of tidal disruptions of small galaxies, and in some cases may represent a new generation of stars in a galaxy's most recently accreted gas.
Morphology
H II regions come in an enormous variety of sizes. They are usually clumpy and inhomogeneous on all scales from the smallest to the largest. Each star within an H II region ionises a roughly spherical region of the surrounding gas, known as a Strömgren sphere, but the combination of the ionisation spheres of multiple stars within an H II region and the expansion of the heated nebula into surrounding gases creates sharp density gradients that result in complex shapes. Supernova explosions may also sculpt H II regions. In some cases, the formation of a large star cluster within an H II region results in the region being hollowed out from within. This is the case for NGC 604, a giant H II region in the Triangulum Galaxy. For an H II region which cannot be resolved, some information on the spatial structure (the electron density as a function of the distance from the center, and an estimate of the clumpiness) can be inferred by performing an inverse Laplace transform on the frequency spectrum.
Notable regions
Notable Galactic H II regions include the Orion Nebula, the Eta Carinae Nebula, and the Berkeley 59 / Cepheus OB4 Complex. The Orion Nebula, about 500 pc (1,500 light-years) from Earth, is part of OMC-1, a giant molecular cloud that, if visible, would be seen to fill most of the constellation of Orion. The Horsehead Nebula and Barnard's Loop are two other illuminated parts of this cloud of gas. The Orion Nebula is actually a thin layer of ionised gas on the outer border of the OMC-1 cloud. The stars in the Trapezium cluster, and especially θ1 Orionis, are responsible for this ionisation.
The Large Magellanic Cloud, a satellite galaxy of the Milky Way at about 50 kpc (), contains a giant H II region called the Tarantula Nebula. Measuring at about () across, this nebula is the most massive and the second-largest H II region in the Local Group. It is much bigger than the Orion Nebula, and is forming thousands of stars, some with masses of over 100 times that of the sun—OB and Wolf-Rayet stars. If the Tarantula Nebula were as close to Earth as the Orion Nebula, it would shine about as brightly as the full moon in the night sky. The supernova SN 1987A occurred in the outskirts of the Tarantula Nebula.
Another giant H II region, NGC 604, is located in the spiral galaxy M33, at 817 kpc (2.66 million light years). Measuring at approximately () across, NGC 604 is the second-most-massive H II region in the Local Group after the Tarantula Nebula, although it is slightly larger in size than the latter. It contains around 200 hot OB and Wolf-Rayet stars, which heat the gas inside it to millions of degrees, producing bright X-ray emissions. The total mass of the hot gas in NGC 604 is about 6,000 solar masses.
Current issues
As with planetary nebulae, estimates of the abundance of elements in H II regions are subject to some uncertainty. There are two different ways of determining the abundance of metals (metals in this case are elements other than hydrogen and helium) in nebulae, which rely on different types of spectral lines, and large discrepancies are sometimes seen between the results derived from the two methods. Some astronomers put this down to the presence of small temperature fluctuations within H II regions; others claim that the discrepancies are too large to be explained by temperature effects, and hypothesise the existence of cold knots containing very little hydrogen to explain the observations.
The full details of massive star formation within H II regions are not yet well known. Two major problems hamper research in this area. First, the distance from Earth to large H II regions is considerable, with the nearest H II region (the California Nebula) at 300 pc (1,000 light-years); other H II regions are several times that distance from Earth. Secondly, the formation of these stars is deeply obscured by dust, and visible-light observations are impossible. Radio and infrared light can penetrate the dust, but the youngest stars may not emit much light at these wavelengths.
| Physical sciences | Basics_3 | null |
206953 | https://en.wikipedia.org/wiki/Sandgrouse | Sandgrouse | Sandgrouse is the common name for Pteroclidae, a family of sixteen species of bird, members of the order Pterocliformes. They are traditionally placed in two genera. The two central Asian species are classified as Syrrhaptes and the other fourteen species, from Africa and Asia, are placed in the genus Pterocles. They are ground-dwelling birds restricted to treeless, open country, such as plains, savannahs, and semi-deserts. They are distributed across northern, southern, and eastern Africa, Madagascar, the Middle East, and India through central Asia. The ranges of the black-bellied sandgrouse and the pin-tailed sandgrouse even extend into the Iberian Peninsula and France, and Pallas's sandgrouse occasionally breaks out in large numbers from its normal range in Asia.
Description
Sandgrouse have small, pigeon-like heads and necks and sturdy compact bodies. They range in size from in length and from in weight. The adults are sexually dimorphic with the males being slightly larger and more brightly colored than the females. They have eleven strong primary feathers and long pointed wings, giving them a fast and direct flight. The muscles of the wings are powerful and the birds are capable of rapid take off and sustained flight. In some species, the central feathers in the tail are extended into long points.
The legs are short and members of the genus Syrrhaptes have feathers growing on both the legs and toes, and no hind toes, while members of the genus Pterocles have legs feathered just at the front, no feathers on the toes, and rudimentary hind toes raised off the ground.
The plumage is cryptic, generally being in shades of sandy brown, grey and buff, and variously mottled and barred, enabling the birds to merge into the dusty landscape. There is a dense layer of under down which helps insulate the bird from extremes of heat and cold. The feathers of the belly are specially adapted for absorbing water and retaining it, allowing adults, particularly males, to carry water to chicks that may be many miles away from watering holes. The amount of water that can be carried in this way is 15 to 20 millilitres (0.5 to 0.7 fluid ounces).
Distribution
Members of the genus Syrrhaptes are found in the steppes of central Asia. Their range extends from the Caspian Sea through southern Siberia, Tibet, and Mongolia to northern and central China. They are normally resident, but Pallas's sandgrouse can be locally migratory and very occasionally is irruptive, appearing in areas well outside its normal range. This happened in 1863 and 1888, and a major irruption took place in 1908 when many birds were seen as far afield as Ireland and the United Kingdom where they bred in Yorkshire and Moray.
Members of the genus Pterocles are mainly found in the drier parts of northern, eastern, and southern Africa, though the range of some species extends into the Middle East and western Asia. The Madagascar sandgrouse is restricted to Madagascar. The black-bellied sandgrouse and the pin-tailed sandgrouse also occur in Spain, Portugal, and southern France. Most species are sedentary though some make local migrations, typically to lower altitudes in winter.
Behaviour and ecology
Diet and feeding
Sandgrouse are principally seed eaters. Other food items eaten include green shoots and leaves, bulbs, and berries. Insect food such as ants and termites may also be eaten, especially during the breeding season. The diet of many sandgrouse is highly specialised, with the seeds of a small number of plant species being dominant. This may depend on local availability but in other cases it reflects actual selection of favoured seeds over others by the sandgrouse. Seeds of leguminous plants are usually an important part of the diet. In agricultural areas oats and other grain are readily taken. Seeds are either collected from the ground or directly from the plants.
Foraging techniques vary between species that coexist, which reduces competition; in Namibia, double-banded sandgrouse feed slowly and methodically whilst Namaqua sandgrouse feed rapidly, exploring loose soil with their beaks and flicking it away sideways. Grit is also swallowed to help grind up food in the gizzard.
Sandgrouse are gregarious, feeding in flocks of up to 100 birds. As a consequence of their dry diet, they need to visit water sources regularly. Drinking times vary among the species: ten species drink at dawn, four at dusk, and two at indeterminate times. When drinking, water is sucked into the beak, which is then raised to let the water flow down into the crop. By repeating this procedure rapidly, enough water to last twenty-four hours can be swallowed in a few seconds. As they travel to water holes, they call to members of their own species, and many hundreds or thousands synchronise their arrival at the drinking site despite converging from many different locations scattered over hundreds of square miles of territory.
They are vulnerable to attack while watering but with a large number of birds milling about, predators find it difficult to select a target bird and are likely to have been spotted before they can get close to the flock. The choice of a watering site is influenced by the topography of the nearby ground. The sandgrouse tend to avoid sites with cover for mammalian predators and their greatest risk is usually from predatory birds.
Sandgrouse travel tens of miles to their traditional water holes and tend to disregard temporary water sources which may appear periodically. This clearly has a survival value, because a dried up water source in an arid region could result in dehydration and death. The Burchell's sandgrouse in the Kalahari Desert sometimes travels over daily to reach a water source. Not all species need to drink every day, and the Tibetan sandgrouse does not need to travel to drink, because of the abundance of water from melting snowfields in its habitat.
Breeding
Sandgrouse are monogamous. The breeding season usually coincides with a crop of seeds after the local rainy season and at this time the feeding flocks tend to break up into pairs. The nesting site is a slight depression in the ground, sometimes lined with a few pieces of dry foliage. Most typically, three cryptic eggs are laid, though occasionally there may be two or four. The intricately patterned, precocial downy young, and egg colouration (though not shape) closely resemble those of many Charadriiformes. Eggs are near elliptical. Incubation duties are shared; in most species, the males incubate at night while the females sit on the eggs during the day. The eggs usually hatch after 20–25 days.
The precocial chicks are covered with down and leave the nest as soon as the last hatchling has dried out. The parents do not provide them with food and they learn, with parental guidance, what is edible and what is not. The chicks obtain their water from the soaked downy feathers on the adults' breasts. Chicks are too small and young to thermoregulate at first, and their parents shade them during the hottest part of the day, and brood them to keep warm at night. The chicks remain with their parents, as a family group, for several months.
Taxonomy
The Pteroclidae was formerly included in the Galliformes due to the similarities the family shares with the true grouse. However, it was later discovered that these similarities are superficial and a result of convergent evolution. Sandgrouse were later placed near the Columbiformes largely due to their reported ability to drink by the "sucking" or "pumping" action of peristalsis of the esophagus, an unusual characteristic. More recently, it has been reported that they cannot suck up water in this way, and they are now treated separately in the order Pterocliformes. They have been considered near passerine birds, and are thought by some to be closer to the shorebirds (Charadriiformes).
In the DNA-study by Fain and Houde (2004) they were included in the Metaves, together with the Columbiformes. In the larger study by Hackett et al. (2008) they were once again positioned close to the Columbiformes, in Columbimorphae, but also with the Mesites.
Phylogeny
Living Pterocliformes, based on the work by John Boyd.
Relations with humans
Sandgrouse have little interaction with people, primarily because most species live in arid unpopulated areas and at low densities. They are not generally sought after as game birds as they are not especially palatable, although they have on occasion been taken in great numbers at water holes. An attempt to introduce them into Nevada failed but they have been introduced to Hawaii. No species is considered to be threatened although there have been some localised range contractions, particularly in Europe. A subspecies of the chestnut-bellied sandgrouse, P. e. floweri, was last seen in the Nile Valley of Egypt in 1979. It is thought to be extinct, but the reasons for this are unknown.
| Biology and health sciences | Columbimorphae | Animals |
206964 | https://en.wikipedia.org/wiki/Homing%20pigeon | Homing pigeon | The homing pigeon is a variety of domestic pigeon (Columba livia domestica), selectively bred for its ability to find its way home over extremely long distances. Because of this skill, homing pigeons were used to carry messages, a practice referred to as "pigeon post". Until the introduction of telephones, they were used commercially to deliver communication; when used during wars, they were called "war pigeons".
The homing pigeon is also called a mail pigeon or messenger pigeon, and colloquially a homer. Perhaps most commonly, the homing pigeon is called a carrier pigeon; this nomenclature can be confusing, though, since it is distinct from the English carrier, an ancient breed of fancy pigeon. Modern-day homing pigeons do have English carrier blood in them because they are in part descendants of the old-style carriers.
The domestic pigeon is derived from the wild rock dove (Columba livia sspp.); the rock dove has an innate homing ability, meaning that it will generally return to its nest using magnetoreception. Flights as long as have been recorded by birds in competitive homing pigeon racing; birds bred for this are colloquially called racing homers. Homing pigeons' average flying speed over moderate distances is around and speeds of up to have been observed in top racers for short distances.
History
Homing pigeons were potentially being used for pigeon post in Ancient Egypt by 1350 BCE. Messages were tied around the legs of the pigeon, which was freed and could reach its original nest. Pliny the Elder described pigeons used in a similar fashion as military messengers around the first century CE.
By the 19th century homing pigeons were used extensively for military communications.
The sport of flying messenger pigeons was well-established as early as 3000 years ago. They were used to proclaim the winner of the Ancient Olympics.
Messenger pigeons were used as early as 1150 in Baghdad and also later by Genghis Khan. By 1167 a regular service between Baghdad and Syria had been established by Sultan Nur ad-Din. In Damietta, by the mouth of the Nile, the Spanish traveller Pedro Tafur saw carrier pigeons for the first time, in 1436, though he imagined that the birds made round trips, out and back. The Republic of Genoa equipped their system of watch towers in the Mediterranean Sea with pigeon posts. Tipu Sultan of Mysore (1750–1799) also used messenger pigeons; they returned to the Jamia Masjid mosque in Srirangapatna, which was his headquarters. The pigeon holes may be seen in the mosque's minarets to this day.
In 1818, a great pigeon race called the Cannonball Run took place at Brussels. In 1860, Paul Reuter, who later founded Reuters press agency, used a fleet of over 45 pigeons to deliver news and stock prices between Brussels and Aachen, the terminus of early telegraph lines. The outcome of the 1815 Battle of Waterloo has often been claimed to have been delivered to London by pigeon but there is no evidence for this, and it is very unlikely; the pigeon post was rare until the 1820s. During the Franco-Prussian War pigeons were used to carry mail between besieged Paris and the French unoccupied territory. In December 1870, it took ten hours for a pigeon carrying microfilms to fly from Perpignan to Brussels.
Historically, pigeons carried messages only one way, to their home. They had to be transported manually before another flight. However, by placing their food at one location and their home at another location, pigeons have been trained to fly back and forth up to twice a day reliably, covering round-trip flights up to 160 km (100 mi). Their reliability has lent itself to occasional use on mail routes, such as the Great Barrier Pigeongram Service established between the Auckland, New Zealand, suburb of Newton and Great Barrier Island in November 1897, possibly the first regular air mail service in the world. The world's first "airmail" stamps were issued for the Great Barrier Pigeon-Gram Service from 1898 to 1908.
In the 19th century, newspapers sometimes used carrier pigeons. To get news from Europe more quickly, some New York City newspapers used carrier pigeons. Since the distance from Europe to Halifax, Nova Scotia, is relatively short, reporters stationed themselves in Halifax, wrote up the information received from incoming ships, and put the messages in capsules attached to the legs of homing pigeons. The birds would then fly from Halifax to New York City, where the information would be published.
Homing pigeons were still employed in the 21st century by certain remote police departments in Odisha state in eastern India to provide emergency communication services following natural disasters. In March 2002, it was announced that India's Police Pigeon Service messenger system in Odisha was to be retired, due to the expanded use of the Internet. The Taliban banned the keeping or use of pigeons, including racing pigeons, in Afghanistan in the late 1990s.
Pigeons are still entered into competitions to this day.
Navigation
Research has been performed with the intention of discovering how pigeons, after being transported, can find their way back from distant places they have never visited before. Most researchers believe that homing ability is based on a "map and compass" model, with the compass feature allowing birds to orient and the map feature allowing birds to determine their location relative to a goal site (home loft). While the compass mechanism appears to rely on the sun, the map mechanism has been highly debated. Some researchers believe that the map mechanism relies on the ability of birds to detect the Earth's magnetic field.
A prominent theory is that the birds are able to detect the Earth's magnetic field to help them find their way home. Earlier research suggested that a concentration of iron particles on top of a pigeon's beak remains aligned with magnetic north like a natural compass, helping the pigeon determine the direction of home. However, a 2012 study disproved this theory, returning the field to the search for an explanation of how animals detect magnetic fields.
A light-mediated, lateralized mechanism involving the eyes has been examined to some extent, but more recent developments have implicated the trigeminal nerve in magnetoreception. Research by Floriano Papi (Italy, early 1970s) and more recent work, largely by Hans Wallraff, suggest that pigeons also orient themselves using the spatial distribution of atmospheric odors, known as olfactory navigation.
Other research indicates that homing pigeons also navigate through visual landmarks by following familiar roads and other human-made features, making 90-degree turns and following habitual routes, much the same way that humans navigate.
Research by Jon Hagstrum of the US Geological Survey suggests that homing pigeons use low-frequency infrasound to navigate. Sound waves as low as 0.1 Hz have been observed to disrupt or redirect pigeon navigation. Because the pigeon ear is far too small to resolve such long wavelengths directly, it has been suggested that pigeons fly in a circle when first taking to the air in order to sample and mentally map these long infrasound waves.
Various experiments suggest that different breeds of homing pigeons rely on different cues to different extents. Charles Walcott at Cornell University demonstrated that while pigeons from one loft were confused by a magnetic anomaly in the Earth, the same anomaly had no effect on birds from another loft. Other experiments have shown that altering the perceived time of day with artificial lighting, or using air conditioning to eliminate odors in the pigeons' home roost, affected the pigeons' ability to return home.
GPS tracing studies indicate that gravitational anomalies may play a role as well.
Roles
Postal carriage
A message may be written on thin, light paper, rolled into a small tube, and attached to a messenger pigeon's leg. Pigeons will travel only to one "mentally marked" point that they have identified as their home, so "pigeon post" can work only when the sender is actually holding the receiver's pigeons.
With training, pigeons can carry up to 75 g (2.5 oz) on their backs. As early as 1903, the German apothecary Julius Neubronner used carrier pigeons both to receive and to deliver urgent medication. In 1977, a similar system of 30 carrier pigeons was set up for the transport of laboratory specimens between two English hospitals: every morning a basket of pigeons was taken from Plymouth General Hospital to Devonport Hospital, and the birds then delivered unbreakable vials back to Plymouth as needed. The carrier pigeons became unnecessary in 1983 with the closure of one of the hospitals. In the 1980s a similar system existed between two French hospitals located in Granville and Avranches.
Wartime communication
Birds were used extensively during World War I. One homing pigeon, Cher Ami, was awarded the French Croix de guerre for his heroic service in delivering 12 important messages, despite having been very badly injured.
During World War II, the Irish Paddy, the American G.I. Joe and the English Mary of Exeter all received the Dickin Medal. They were among 32 pigeons to receive this award for their gallantry and bravery in saving human lives. Eighty-two homing pigeons were dropped into the Netherlands with the First Airborne Division Signals as part of Operation Market Garden; their loft was in London, so delivering a message meant flying from the Netherlands back to England. Also in World War II, hundreds of homing pigeons with the Confidential Pigeon Service were airdropped into northwest Europe to serve as intelligence vectors for local resistance agents. Birds played a vital part in the Invasion of Normandy, as radios could not be used for fear of vital information being intercepted by the enemy.
During the Second World War, the use of pigeons for sending messages was highlighted in Britain when Princesses Elizabeth and Margaret, as Girl Guides, joined other Guides in sending messages to the World Chief Guide in 1943, part of a campaign to raise money for homing pigeons.
Computing
The humorous IP over Avian Carriers (RFC 1149) is an Internet protocol for the transmission of messages via homing pigeon. Originally intended as an April Fools' Day RFC entry, this protocol was implemented and used, once, to transmit a message in Bergen, Norway, on 28 April 2001.
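For illustration, the encapsulation step that RFC 1149 describes, printing the datagram in hexadecimal on a small scroll of paper and reversing the transformation at the destination loft, can be sketched in a few lines. This is a minimal sketch only; the function names and example payload are illustrative and are not taken from the RFC or from the Bergen implementation.

```python
# Minimal sketch of RFC 1149 encapsulation: the datagram is rendered as a
# hexadecimal printout for the paper scroll, and the inverse mapping
# recovers the bytes at the receiving loft. Names/payload are illustrative.

def encapsulate(datagram: bytes, width: int = 16) -> str:
    """Render a datagram as hex lines suitable for printing on a scroll."""
    lines = []
    for i in range(0, len(datagram), width):
        chunk = datagram[i:i + width]
        lines.append(" ".join(f"{b:02x}" for b in chunk))
    return "\n".join(lines)

def decapsulate(scroll: str) -> bytes:
    """Recover the datagram from the hex text read off the scroll."""
    return bytes(int(token, 16) for token in scroll.split())

payload = b"ping"          # illustrative payload
scroll = encapsulate(payload)
assert decapsulate(scroll) == payload
print(scroll)              # 70 69 6e 67
```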
In September 2009, a South African IT company based in Durban pitted an 11-month-old bird carrying a 4 GB memory stick against the ADSL service of Telkom, the country's biggest Internet service provider. The pigeon, Winston, took an hour and eight minutes to carry the data. In all, the data transfer took two hours, six minutes, and fifty-seven seconds, the same amount of time it took to transfer just 4% of the data over the ADSL line.
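The comparison comes down to simple arithmetic. A back-of-the-envelope calculation, assuming a nominal 4 GB (decimal) payload and charging the full end-to-end time to both channels, gives the effective throughput of each:

```python
# Back-of-the-envelope effective throughput for the 2009 Durban test.
# Assumes a nominal 4 GB (decimal) payload; times are those reported above.
payload_bits = 4 * 10**9 * 8          # 4 GB expressed in bits
total_s = 2 * 3600 + 6 * 60 + 57      # 2 h 6 min 57 s end to end

pigeon_bps = payload_bits / total_s        # the pigeon delivered it all
adsl_bps = 0.04 * payload_bits / total_s   # ADSL managed only 4%

print(f"pigeon: {pigeon_bps / 1e6:.1f} Mbit/s effective")  # ~4.2
print(f"ADSL:   {adsl_bps / 1e6:.2f} Mbit/s effective")    # ~0.17
```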
Smuggling
Homing pigeons have been used as a smuggling technique, carrying objects and narcotics across borders and into prisons. Between 2009 and 2015, for instance, pigeons were reported to have carried contraband items such as mobile phones, SIM cards, phone batteries and USB cords into prisons in the Brazilian state of São Paulo, and there have been other cases of homing pigeons being used to transport drugs into prisons.
| Biology and health sciences | Pigeons | Animals |
206979 | https://en.wikipedia.org/wiki/Colorectal%20cancer | Colorectal cancer | Colorectal cancer (CRC), also known as bowel cancer, colon cancer, or rectal cancer, is the development of cancer from the colon or rectum (parts of the large intestine). Signs and symptoms may include blood in the stool, a change in bowel movements, weight loss, abdominal pain and fatigue. Most colorectal cancers are due to lifestyle factors and genetic disorders. Risk factors include diet, obesity, smoking, and lack of physical activity. Dietary factors that increase the risk include red meat, processed meat, and alcohol. Another risk factor is inflammatory bowel disease, which includes Crohn's disease and ulcerative colitis. Some of the inherited genetic disorders that can cause colorectal cancer include familial adenomatous polyposis and hereditary non-polyposis colon cancer; however, these represent less than 5% of cases. It typically starts as a benign tumor, often in the form of a polyp, which over time becomes cancerous.
Colorectal cancer may be diagnosed by obtaining a sample of the colon during a sigmoidoscopy or colonoscopy. This is then followed by medical imaging to determine whether the disease has spread. Screening is effective for preventing and decreasing deaths from colorectal cancer. Screening, by one of a number of methods, is recommended from the age of 45 to 75. Screening was previously recommended to start at age 50, but this was lowered to 45 because of the increasing number of colorectal cancers in younger people. During colonoscopy, small polyps may be removed if found. If a large polyp or tumor is found, a biopsy may be performed to check if it is cancerous. Aspirin and other non-steroidal anti-inflammatory drugs decrease the risk of colorectal cancer. Their general use is not recommended for this purpose, however, due to side effects.
Treatments used for colorectal cancer may include some combination of surgery, radiation therapy, chemotherapy, and targeted therapy. Cancers that are confined within the wall of the colon may be curable with surgery, while cancer that has spread widely is usually not curable, with management being directed towards improving quality of life and symptoms. The five-year survival rate in the United States was around 65% in 2014. The individual likelihood of survival depends on how advanced the cancer is, whether or not all the cancer can be removed with surgery, and the person's overall health. Globally, colorectal cancer is the third most common type of cancer, making up about 10% of all cases. In 2018, there were 1.09 million new cases and 551,000 deaths from the disease. It is more common in developed countries, where more than 65% of cases are found. It is less common in women than men.
Signs and symptoms
The signs and symptoms of colorectal cancer depend on the location of the tumor in the bowel, and whether it has spread elsewhere in the body (metastasis). The classic warning signs include: worsening constipation, blood in the stool, decrease in stool caliber (thickness), loss of appetite, loss of weight, and nausea or vomiting in someone over 50 years old. Around 50% of people who have colorectal cancer do not report any symptoms.
Rectal bleeding or anemia are high-risk symptoms in people over the age of 50. Weight loss and changes in a person's bowel habit are typically only concerning if they are associated with rectal bleeding.
Cause
75–95% of colorectal cancer cases occur in people with little or no genetic risk. Risk factors include older age, male sex, high intake of fat, sugar, alcohol, red meat, processed meats, obesity, smoking, and a lack of physical exercise. The Rectal Cancer Survival Calculator developed by the MD Anderson Cancer Center additionally considers race to be a risk factor; however, there are concerns that this might lead to inequity in clinical decision-making. Approximately 10% of cases are linked to insufficient activity. The risk from alcohol appears to increase at greater than one drink per day. Drinking five glasses of water a day is linked to a decrease in the risk of colorectal cancer and adenomatous polyps. Streptococcus gallolyticus is associated with colorectal cancer. Some strains of the Streptococcus bovis/Streptococcus equinus complex are consumed by millions of people daily and thus may be safe. 25 to 80% of people with Streptococcus bovis/gallolyticus bacteremia have concomitant colorectal tumors. Seroprevalence of Streptococcus bovis/gallolyticus is considered a candidate practical marker for the early prediction of an underlying bowel lesion in high-risk populations. It has been suggested that the presence of antibodies to Streptococcus bovis/gallolyticus antigens, or of the antigens themselves, in the bloodstream may act as markers for carcinogenesis in the colon.
Pathogenic Escherichia coli may increase the risk of colorectal cancer by producing the genotoxic metabolite, colibactin.
Inflammatory bowel disease
People with inflammatory bowel disease (ulcerative colitis and Crohn's disease) are at increased risk of colon cancer. The risk increases the longer a person has the disease and the greater the severity of inflammation. In these high-risk groups, both prevention with aspirin and regular colonoscopies are recommended. Endoscopic surveillance in this high-risk population may reduce the development of colorectal cancer through early diagnosis and may also reduce the chances of dying from colon cancer. People with inflammatory bowel disease account for less than 2% of colon cancer cases yearly. In those with Crohn's disease, 2% get colorectal cancer after 10 years, 8% after 20 years, and 18% after 30 years. In people who have ulcerative colitis, approximately 16% develop either a cancer precursor or cancer of the colon over 30 years.
Genetics
Those with a family history in two or more first-degree relatives (such as a parent or sibling) have a two to threefold greater risk of disease, and this group accounts for about 20% of all cases. A number of genetic syndromes are also associated with higher rates of colorectal cancer. The most common of these is hereditary nonpolyposis colorectal cancer (HNPCC, or Lynch syndrome), which is present in about 3% of people with colorectal cancer. Other syndromes that are strongly associated with colorectal cancer include Gardner syndrome and familial adenomatous polyposis (FAP). For people with these syndromes, cancer almost always occurs; such cases make up about 1% of all colorectal cancer cases. A total proctocolectomy may be recommended for people with FAP as a preventive measure due to the high risk of malignancy. Colectomy, removal of the colon, may not suffice as a preventive measure because of the high risk of rectal cancer if the rectum remains. The most common polyposis syndrome affecting the colon is serrated polyposis syndrome, which is associated with a 25–40% risk of CRC.
Mutations in the DNA polymerase genes POLE and POLD1 have been associated with familial colon cancer.
Most deaths due to colon cancer are associated with metastatic disease. A gene that appears to contribute to the potential for metastatic disease, metastasis associated in colon cancer 1 (MACC1), has been isolated. It is a transcription factor that influences the expression of hepatocyte growth factor. This gene is associated with the proliferation, invasion, and scattering of colon cancer cells in cell culture, and with tumor growth and metastasis in mice. MACC1 may be a potential target for cancer intervention, but this possibility needs to be confirmed with clinical studies.
Epigenetic factors, such as abnormal DNA methylation of tumor suppressor promoters, play a role in the development of colorectal cancer.
Ashkenazi Jews have a 6% higher risk of getting adenomas and then colon cancer because mutations in the APC gene are more common in this population.
Pathogenesis
Colorectal cancer is a disease originating from the epithelial cells lining the colon or rectum of the gastrointestinal tract, most frequently as a result of mutations in the Wnt signaling pathway that increase signaling activity. The Wnt signaling pathway normally plays an important role in the normal function of these cells, including maintaining this lining. Mutations can be inherited or acquired, and most probably occur in the intestinal crypt stem cell. The most commonly mutated gene in all colorectal cancer is the APC gene, which produces the APC protein. The APC protein prevents the accumulation of β-catenin protein. Without APC, β-catenin accumulates to high levels and translocates (moves) into the nucleus, binds to DNA, and activates the transcription of proto-oncogenes. These genes are normally important for stem cell renewal and differentiation, but when inappropriately expressed at high levels, they can cause cancer. While APC is mutated in most colon cancers, some cancers have increased β-catenin because of mutations in β-catenin (CTNNB1) that block its own breakdown, or have mutations in other genes with function similar to APC such as AXIN1, AXIN2, TCF7L2, or NKD1.
Beyond the defects in the Wnt signaling pathway, other mutations must occur for the cell to become cancerous. The p53 protein, produced by the TP53 gene, normally monitors cell division and induces programmed death in cells with Wnt pathway defects. Eventually, a cell line acquires a mutation in the TP53 gene and transforms the tissue from a benign epithelial tumor into an invasive epithelial cell cancer. Sometimes the gene encoding p53 is not mutated, but another protective protein named BAX is mutated instead.
Other proteins responsible for programmed cell death that are commonly deactivated in colorectal cancers are TGF-β and DCC (Deleted in Colorectal Cancer). TGF-β has a deactivating mutation in at least half of colorectal cancers. Sometimes TGF-β is not deactivated, but a downstream protein named SMAD is deactivated. DCC commonly has a deleted segment of a chromosome in colorectal cancer.
Approximately 70% of all human genes are expressed in colorectal cancer, with just over 1% of genes having increased expression in colorectal cancer compared to other forms of cancer. Some genes are oncogenes: they are overexpressed in colorectal cancer. For example, genes encoding the proteins KRAS, RAF, and PI3K, which normally stimulate the cell to divide in response to growth factors, can acquire mutations that result in over-activation of cell proliferation. The chronological order of mutations is sometimes important. If a previous APC mutation occurred, a primary KRAS mutation often progresses to cancer rather than a self-limiting hyperplastic or borderline lesion. PTEN, a tumor suppressor, normally inhibits PI3K, but can sometimes become mutated and deactivated.
Comprehensive, genome-scale analysis has revealed that colorectal carcinomas can be categorized into hypermutated and non-hypermutated tumor types. In addition to the oncogenic and inactivating mutations described for the genes above, non-hypermutated samples also contain mutated CTNNB1, FAM123B, SOX9, ATM, and ARID1A. Progressing through a distinct set of genetic events, hypermutated tumors display mutated forms of ACVR2A, TGFBR2, MSH3, MSH6, SLC9A9, TCF7L2, and BRAF. The common theme among these genes, across both tumor types, is their involvement in Wnt and TGF-β signaling pathways, which results in increased activity of MYC, a central player in colorectal cancer.
Mismatch repair (MMR) deficient tumours are characterized by a relatively high number of poly-nucleotide tandem repeats. This is caused by a deficiency in MMR proteins, which is typically the result of epigenetic silencing and/or inherited mutations (e.g., Lynch syndrome). 15 to 18 percent of colorectal cancer tumours have MMR deficiencies, with 3 percent developing due to Lynch syndrome. The role of the mismatch repair system is to protect the integrity of the genetic material within cells (i.e., error detecting and correcting). Consequently, a deficiency in MMR proteins may lead to an inability to detect and repair genetic damage, allowing further cancer-causing mutations to occur and colorectal cancer to progress.
The polyp to cancer progression sequence is the classical model of colorectal cancer pathogenesis. In this adenoma-carcinoma sequence, normal epithelial cells progress to dysplastic cells such as adenomas, and then to carcinoma, by a process of progressive genetic mutation. Central to the polyp to CRC sequence are gene mutations, epigenetic alterations, and local inflammatory changes. The polyp to CRC sequence can be used as an underlying framework to illustrate how specific molecular changes lead to various cancer subtypes.
Field defects
The term "field cancerization" was first used in 1953 to describe an area or "field" of epithelium that has been preconditioned (by what were largely unknown processes at the time) to predispose it towards development of cancer. Since then, the terms "field cancerization", "field carcinogenesis", "field defect", and "field effect" have been used to describe pre-malignant or pre-neoplastic tissue in which new cancers are likely to arise.
Field defects are important in progression to colon cancer.
However, as pointed out by Rubin, "The vast majority of studies in cancer research has been done on well-defined tumors in vivo, or on discrete neoplastic foci in vitro. Yet there is evidence that more than 80% of the somatic mutations found in mutator phenotype human colorectal tumors occur before the onset of terminal clonal expansion." Similarly, Vogelstein et al. pointed out that more than half of somatic mutations identified in tumors occurred in a pre-neoplastic phase (in a field defect), during growth of apparently normal cells. Likewise, epigenetic alterations present in tumors may have occurred in pre-neoplastic field defects.
An expanded view of field effect has been termed "etiologic field effect", which encompasses not only molecular and pathologic changes in pre-neoplastic cells but also influences of exogenous environmental factors and molecular changes in the local microenvironment on neoplastic evolution from tumor initiation to death.
Epigenetics
Epigenetic alterations are much more frequent in colon cancer than genetic (mutational) alterations. As described by Vogelstein et al., an average cancer of the colon has only 1 or 2 oncogene mutations and 1 to 5 tumor suppressor mutations (together designated "driver mutations"), with about 60 further "passenger" mutations. The oncogenes and tumor suppressor genes are well studied and are described above under Pathogenesis.
In addition to epigenetic alteration of expression of miRNAs, other common types of epigenetic alterations in cancers that change gene expression levels include direct hypermethylation or hypomethylation of CpG islands of protein-encoding genes and alterations in histones and chromosomal architecture that influence gene expression. As an example, 147 hypermethylations and 27 hypomethylations of protein-coding genes were frequently associated with colorectal cancers. Of the hypermethylated genes, 10 were hypermethylated in 100% of colon cancers, and many others were hypermethylated in more than 50% of colon cancers. In addition, 11 hypermethylations and 96 hypomethylations of miRNAs were also associated with colorectal cancers. Abnormal (aberrant) methylation occurs as a consequence of normal aging, and the risk of colorectal cancer increases as a person gets older. The source and trigger of this age-related methylation are unknown. Approximately half of the genes that show age-related methylation changes are the same genes that have been identified as involved in the development of colorectal cancer. These findings may suggest a reason why age is associated with the increased risk of developing colorectal cancer.
Epigenetic reductions of DNA repair enzyme expression can lead to the genomic and epigenomic instability characteristic of cancer. As summarized in the articles Carcinogenesis and Neoplasm, for sporadic cancers in general, a deficiency in DNA repair is occasionally due to a mutation in a DNA repair gene, but is much more frequently due to epigenetic alterations that reduce or silence expression of DNA repair genes.
Epigenetic alterations involved in the development of colorectal cancer may affect a person's response to chemotherapy.
Genomics and epigenomics
Consensus molecular subtypes (CMS) classification of colorectal cancer was first introduced in 2015. So far, CMS has been considered the most robust classification system available for CRC: it has clear biological interpretability and provides the basis for future clinical stratification and subtype-based targeted interventions.
A novel Epigenome-based Classification (EpiC) of colorectal cancer was proposed in 2021, introducing four enhancer subtypes in people with CRC. EpiC subtypes are identified by characterizing chromatin states using six histone marks. A combinatorial therapeutic approach based on the previously introduced consensus molecular subtypes (CMSs) and EpiCs could significantly enhance current treatment strategies.
Diagnosis
Colorectal cancer diagnosis is performed by sampling of areas of the colon suspicious for possible tumor development, typically during colonoscopy or sigmoidoscopy, depending on the location of the lesion.
Medical imaging
A colorectal cancer is sometimes initially discovered on CT scan.
Presence of metastases is determined by a CT scan of the chest, abdomen and pelvis. Other potential imaging tests such as PET and MRI may be used in certain cases. MRI is particularly useful to determine local stage of the tumor and to plan the optimal surgical approach.
MRI is also performed after completion of neoadjuvant chemoradiotherapy to identify patients who achieve complete response. Patients with complete response on both MRI and endoscopy may not require surgical resection and can avoid unnecessary surgical morbidity and complications. Patients selected for non-surgical treatment of rectal cancer should have periodic MRI scans, receive physical examinations, and undergo endoscopy procedures to detect any tumor re-growth which can occur in a minority of these patients. When local recurrence occurs, periodic follow up can detect it when it is still small and curable with salvage surgery. In addition, MRI tumor regression grades can be assigned after chemoradiotherapy which correlate with patients' long-term survival outcomes.
Histopathology
The histopathologic characteristics of the tumor are reported from the analysis of tissue taken from a biopsy or surgery. A pathology report describes the microscopic characteristics of the tumor tissue, including the tumor cells themselves, how the tumor invades healthy tissues, and whether the tumor appears to have been completely removed. The most common form of colon cancer is adenocarcinoma, constituting between 95% and 98% of all cases of colorectal cancer. Other, rarer types include lymphoma, adenosquamous and squamous cell carcinoma. Some subtypes are more aggressive. Immunohistochemistry may be used in uncertain cases.
Staging
Staging of the cancer is based on both radiological and pathological findings. As with most other forms of cancer, tumor staging is based on the TNM system which considers how much the initial tumor has spread and the presence of metastases in lymph nodes and more distant organs. The AJCC 8th edition was published in 2018.
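To make the TNM grouping concrete, the broad logic can be sketched as a lookup: distant metastasis dominates, then nodal involvement, then depth of local invasion. This is a deliberate simplification; the actual AJCC 8th-edition tables split each of these groups into substages (IIA–IIC, IIIA–IIIC, IVA–IVC).

```python
# Greatly simplified TNM stage grouping for colorectal cancer.
# Real AJCC tables distinguish substages; this keeps only the broad logic.
def stage_group(t: str, n: str, m: str) -> str:
    if m != "M0":
        return "IV"    # any distant metastasis
    if n != "N0":
        return "III"   # regional lymph node involvement
    if t in ("T3", "T4"):
        return "II"    # through the muscularis propria, node-negative
    if t in ("T1", "T2"):
        return "I"     # confined to the submucosa or muscular layer
    return "0"         # Tis: carcinoma in situ

assert stage_group("T2", "N0", "M0") == "I"
assert stage_group("T3", "N1", "M0") == "III"
assert stage_group("T1", "N0", "M1") == "IV"
```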
Prevention
It has been estimated that about half of colorectal cancer cases are due to lifestyle factors, and about a quarter of all cases are preventable. Increasing surveillance, engaging in physical activity, consuming a diet high in fiber, quitting smoking and limiting alcohol consumption decrease the risk.
Lifestyle
Lifestyle risk factors with strong evidence include lack of exercise, cigarette smoking, alcohol, and obesity. The risk of colon cancer can be reduced by maintaining a normal body weight through a combination of sufficient exercise and eating a healthy diet.
Current research consistently links eating more red meat and processed meat to a higher risk of the disease. Starting in the 1970s, dietary recommendations to prevent colorectal cancer often included increasing the consumption of whole grains, fruits and vegetables, and reducing the intake of red meat and processed meats. This was based on animal studies and retrospective observational studies. However, large scale prospective studies have failed to demonstrate a significant protective effect, and due to the multiple causes of cancer and the complexity of studying correlations between diet and health, it is uncertain whether any specific dietary interventions will have significant protective effects. In 2018 the National Cancer Institute stated that "There is no reliable evidence that a diet started in adulthood that is low in fat and meat and high in fiber, fruits, and vegetables reduces the risk of CRC by a clinically important degree."
Consuming alcoholic drinks and consuming processed meat both increase the risk of colorectal cancer.
The 2014 World Health Organization cancer report noted that it has been hypothesized that dietary fiber might help prevent colorectal cancer, but that most studies at the time had not yet studied the correlation. A 2019 review, however, found evidence of benefit from dietary fiber and whole grains. The World Cancer Research Fund listed the benefit of fiber for prevention of colorectal cancer as "probable" as of 2017. A 2022 umbrella review says there is "convincing evidence" for that association.
Higher physical activity is recommended. Physical exercise is associated with a modest reduction in colon but not rectal cancer risk. High levels of physical activity reduce the risk of colon cancer by about 21%. Sitting regularly for prolonged periods is associated with higher mortality from colon cancer; regular exercise lowers this risk but does not eliminate it.
Medication and supplements
Aspirin and celecoxib appear to decrease the risk of colorectal cancer in those at high risk. Aspirin is recommended in those who are 50 to 60 years old, do not have an increased risk of bleeding, and are at risk for cardiovascular disease to prevent colorectal cancer. It is not recommended in those at average risk.
There is tentative evidence for calcium supplementation, but it is not sufficient to make a recommendation.
Adequate Vitamin D intake and blood levels are associated with a lower risk of colon cancer.
Screening
As more than 80% of colorectal cancers arise from adenomatous polyps, screening for this cancer is effective for both early detection and for prevention. Diagnosis of cases of colorectal cancer through screening tends to occur 2–3 years before diagnosis of cases with symptoms. Any polyps that are detected can be removed, usually by colonoscopy or sigmoidoscopy, and thus prevent them from turning into cancer. Screening has the potential to reduce colorectal cancer deaths by 60%.
The three main screening tests are colonoscopy, fecal occult blood testing, and flexible sigmoidoscopy. Of the three, only sigmoidoscopy cannot screen the right side of the colon where 42% of cancers are found. Flexible sigmoidoscopy, however, has the best evidence for decreasing the risk of death from any cause.
Fecal occult blood testing (FOBT) of the stool is typically recommended every two years and can be either guaiac-based or immunochemical. If abnormal FOBT results are found, participants are typically referred for a follow-up colonoscopy examination. When done once every 1–2 years, FOBT screening reduces colorectal cancer deaths by 16% and among those participating in screening, colorectal cancer deaths can be reduced up to 23%, although it has not been proven to reduce all-cause mortality. Immunochemical tests are accurate and do not require dietary or medication changes before testing. However, research in the UK has found that for these immunochemical tests, the threshold for further investigation is set at a point that may miss more than half of bowel cancer cases. The research suggests that the NHS England's Bowel Cancer Screening Programme could make better use of the test's ability to provide the exact concentration of blood in faeces (rather than only whether it is above or below a cutoff level).
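The difference between cutoff-based and quantitative use of FIT can be illustrated with a short sketch. The thresholds below are illustrative numbers, not clinical guidance; the point is only that a measured concentration supports graded follow-up where a binary flag does not.

```python
# Binary vs quantitative handling of a faecal immunochemical test (FIT).
# f_hb is faecal haemoglobin in micrograms of Hb per gram of faeces.
# CUTOFF and the 10 ug/g band are illustrative, not clinical guidance.
CUTOFF = 120.0

def binary_triage(f_hb: float) -> str:
    """All a cutoff-only programme can express: positive or negative."""
    return "refer for colonoscopy" if f_hb >= CUTOFF else "no action"

def quantitative_triage(f_hb: float) -> str:
    """Acting on the concentration itself allows graded follow-up."""
    if f_hb >= CUTOFF:
        return "refer for colonoscopy"
    if f_hb >= 10.0:
        return "repeat FIT at a shorter interval"
    return "routine recall"

print(binary_triage(60.0))         # no action
print(quantitative_triage(60.0))   # repeat FIT at a shorter interval
```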
Other options include virtual colonoscopy and stool DNA screening testing (FIT-DNA). Virtual colonoscopy via a CT scan appears as good as standard colonoscopy for detecting cancers and large adenomas but is expensive, associated with radiation exposure, and cannot remove any detected abnormal growths as standard colonoscopy can. Stool DNA screening test looks for biomarkers associated with colorectal cancer and precancerous lesions, including altered DNA and blood hemoglobin. A positive result should be followed by colonoscopy. FIT-DNA has more false positives than FIT and thus results in more adverse effects. Further study is required as of 2016 to determine whether a three-year screening interval is correct.
Recommendations
In the United States, screening is typically recommended between ages 50 and 75 years. The American Cancer Society recommends starting at the age of 45. For those between 76 and 85 years old, the decision to screen should be individualized. For those at high risk, screenings usually begin at around 40.
Several screening methods are recommended, including stool-based tests every 2 years, sigmoidoscopy every 10 years with fecal immunochemical testing every two years, and colonoscopy every 10 years. It is unclear whether sigmoidoscopy or colonoscopy is the better approach. Colonoscopy may find more cancers in the first part of the colon, but is associated with greater cost and more complications. For people with average risk who have had a high-quality colonoscopy with normal results, the American Gastroenterological Association does not recommend any type of screening in the 10 years following the colonoscopy. For people over 75 or those with a life expectancy of less than 10 years, screening is not recommended. After screening, it takes about 10 years for one out of 1,000 people screened to benefit. The USPSTF lists seven potential strategies for screening, the most important point being that at least one of these strategies is appropriately used.
In Canada, among those 50 to 75 years old at normal risk, fecal immunochemical testing or FOBT is recommended every two years or sigmoidoscopy every 10 years. Colonoscopy is less preferred.
Some countries have national colorectal screening programs which offer FOBT screening for all adults within a certain age group, typically starting between ages 50 and 60. Examples of countries with organised screening include the United Kingdom, Australia, the Netherlands, Hong Kong, and Taiwan.
The UK Bowel Cancer Screening Programme aims to find warning signs in people aged 60 to 74, by recommending a faecal immunochemical test (FIT) every two years. FIT measures blood in faeces, and people with levels above a certain threshold may have bowel tissue examined for signs of cancer. Growths having cancerous potential are removed.
Treatment
The treatment of colorectal cancer can be aimed at cure or palliation. The decision on which aim to adopt depends on various factors, including the person's health and preferences, as well as the stage of the tumor. Assessment in multidisciplinary teams is a critical part of determining whether the patient is suitable for surgery or not. When colorectal cancer is caught early, surgery can be curative. However, when it is detected at later stages (for which metastases are present), this is less likely and treatment is often directed at palliation, to relieve symptoms caused by the tumour and keep the person as comfortable as possible.
Surgery
At an early stage, colorectal cancer may be removed during a colonoscopy using one of several techniques, including endoscopic mucosal resection or endoscopic submucosal dissection. Endoscopic resection is possible if there is low possibility of lymph node metastasis and the size and location of the tumor make en bloc resection possible. For people with localized cancer, the preferred treatment is complete surgical removal with adequate margins, with the attempt of achieving a cure. The procedure of choice is a partial colectomy (or proctocolectomy for rectal lesions) where the affected part of the colon or rectum is removed along with parts of its mesocolon and blood supply to facilitate removal of draining lymph nodes. This can be done either by an open laparotomy or laparoscopically, depending on factors related to the individual person and lesion factors. The colon may then be reconnected or a person may have a colostomy.
If there are only a few metastases in the liver or lungs, these may also be removed. Chemotherapy may be used before surgery to shrink the cancer before attempting to remove it. The two most common sites of recurrence of colorectal cancer are the liver and lungs. For peritoneal carcinomatosis, cytoreductive surgery, sometimes in combination with HIPEC, can be used in an attempt to remove the cancer.
Chemotherapy
In both cancer of the colon and rectum, chemotherapy may be used in addition to surgery in certain cases. The decision to add chemotherapy in management of colon and rectal cancer depends on the stage of the disease.
In Stage I colon cancer, no chemotherapy is offered, and surgery is the definitive treatment. The role of chemotherapy in Stage II colon cancer is debatable, and it is usually not offered unless risk factors such as T4 tumor, undifferentiated tumor, vascular and perineural invasion, or inadequate lymph node sampling are identified. It is also known that people who carry abnormalities of the mismatch repair genes do not benefit from chemotherapy. For Stage III and Stage IV colon cancer, chemotherapy is an integral part of treatment.
If cancer has spread to the lymph nodes or distant organs, which is the case with Stage III and Stage IV colon cancer respectively, adding the chemotherapy agents fluorouracil, capecitabine or oxaliplatin increases life expectancy. If the lymph nodes do not contain cancer, the benefits of chemotherapy are controversial. If the cancer is widely metastatic or unresectable, treatment is then palliative. Typically in this setting, a number of different chemotherapy medications may be used. Chemotherapy drugs for this condition may include capecitabine, fluorouracil, irinotecan, oxaliplatin and UFT. The drugs capecitabine and fluorouracil are interchangeable, capecitabine being an oral medication and fluorouracil an intravenous one. Some specific regimens used for CRC are CAPOX, FOLFOX, FOLFOXIRI, and FOLFIRI. Antiangiogenic drugs such as bevacizumab are often added in first-line therapy; aflibercept, another antiangiogenic agent, is used in the second line. A further class of drugs used in the second-line setting are epidermal growth factor receptor inhibitors, of which the two FDA-approved ones are cetuximab and panitumumab.
The primary difference in the approach to low-stage rectal cancer is the incorporation of radiation therapy. Often, it is used in conjunction with chemotherapy in a neoadjuvant fashion to enable surgical resection, so that ultimately a colostomy is not required. However, this may not be possible in low-lying tumors, in which case a permanent colostomy may be required. Stage IV rectal cancer is treated similarly to Stage IV colon cancer.
Stage IV colorectal cancer due to peritoneal carcinomatosis can be treated using HIPEC combined with cytoreductive surgery, in some people. Also, T4 colorectal cancer can be treated with HIPEC to avoid future relapses.
Radiation therapy
While a combination of radiation and chemotherapy may be useful for rectal cancer, for some people requiring treatment, chemoradiotherapy can increase acute treatment-related toxicity, and has not been shown to improve survival rates compared to radiotherapy alone, although it is associated with less local recurrence. For squamous cell carcinoma of the anal canal, chemoradiation therapy (CRT) with 5-FU and mitomycin C is preferred over radiation alone, offering improved survival outcomes but with increased risks of acute hematological toxicity.
The use of radiotherapy in colon cancer is not routine due to the sensitivity of the bowels to radiation. Radiation therapy's side effects (and occurrence rates) include acute (27%) and late (17%) dermatological toxicities, acute (14%) and late (27%) gastrointestinal toxicities, and late pelvic radiation disease (1-10%), e.g., irreversible lumbosacral plexopathy.
As with chemotherapy, radiotherapy can be used as a neoadjuvant for clinical stages T3 and T4 for rectal cancer. This results in downsizing or downstaging of the tumour, preparing it for surgical resection, and also decreases local recurrence rates. For locally advanced rectal cancer, neoadjuvant chemoradiotherapy has become the standard treatment. Additionally, when surgery is not possible radiation therapy has been suggested to be an effective treatment against CRC pulmonary metastases, which are developed by 10-15% of people with CRC.
Immunotherapy
Immunotherapy with immune checkpoint inhibitors has been found to be useful for a type of colorectal cancer with mismatch repair deficiency and microsatellite instability. Pembrolizumab is approved for advanced CRC tumours that are MMR deficient and have failed usual treatments. Most people who do improve, however, still worsen after months or years.
On the other hand, in a prospective phase 2 study published in June 2022 in The New England Journal of Medicine, 12 patients with deficient mismatch repair (dMMR) stage II or III rectal adenocarcinoma were administered single-agent dostarlimab, an anti-PD-1 monoclonal antibody, every three weeks for six months. After a median follow-up of 12 months (range, 6 to 25 months), all 12 patients had a complete clinical response with no evidence of tumor on MRI, 18F-fluorodeoxyglucose positron-emission tomography, endoscopic evaluation, digital rectal examination, or biopsy. Moreover, no patient in the trial needed chemoradiotherapy or surgery, and no patient reported adverse events of grade 3 or higher. However, although the results of this study are promising, the study is small and has uncertainties about long-term outcomes.
Palliative care
Palliative care can be used at the same time as cancer treatment and is recommended for any person who has advanced colon cancer or significant symptoms. Involvement of palliative care may improve the quality of life for both the person and their family, by improving symptoms and anxiety and by preventing hospital admissions.
In people with incurable colorectal cancer, palliative care can consist of procedures that relieve symptoms or complications from the cancer but do not attempt to cure the underlying cancer, thereby improving quality of life. Surgical options may include non-curative surgical removal of some of the cancer tissue, bypassing part of the intestines, or stent placement. These procedures can be considered to improve symptoms and reduce complications such as bleeding from the tumor, abdominal pain and intestinal obstruction. Non-operative methods of symptomatic treatment include radiation therapy to decrease tumor size as well as pain medications.
Psychosocial Intervention
In addition to medical intervention, a variety of psychosocial interventions have been implemented to address psychosocial concerns in the context of colorectal cancer. Depression and anxiety are highly prevalent in patients diagnosed with CRC; psychosocial interventions can therefore be helpful in alleviating psychological distress. Many patients continue to experience symptoms of anxiety and depression following treatment, regardless of treatment outcome. Societal stigmas associated with colorectal cancer present further psychosocial challenges for CRC patients and their families.
Depression and Anxiety
Colorectal cancer patients have a 51% higher risk of experiencing depression than individuals without the disease. Additionally, CRC patients are at high risk of experiencing severe anxiety, low self-esteem, poor self-concept, and social anxiety.
Post-Treatment Distress
Regardless of treatment outcome, many CRC patients experience ongoing symptoms of anxiety, depression, and distress.
Survivorship of CRC can involve significant lifestyle adjustments. Postoperative afflictions may include stomas, bowel issues, incontinence, odor, and changes to sexual functioning. These changes can result in distorted body image, social anxiety, depression, and distress—all of which contribute to a poorer quality of life.
Colorectal cancer is the second leading cause of cancer-related death worldwide. Transitioning into palliative care and contending with mortality can be a deeply distressing experience for a CRC patient and their loved ones.
Stigma
Colorectal cancer is highly stigmatized and can elicit feelings of disgust from patients, healthcare professionals, family, intimate partners, and the general public. Patients with stomas are especially vulnerable to stigmatization due to unavoidable odors, gas, and unpleasant noises from stoma bags. Additionally, associated CRC risk factors like poor diet, alcohol consumption, and lack of physical activity prompt negative assumptions of blame and personal responsibility onto CRC patients. Judgement from others along with internalized self-blame and embarrassment can negatively affect self-esteem, sociability, and quality of life.
Methods of Intervention
Face-to-face interventions such as clinician-patient talk therapy, body-mind-spirit practices, and support group sessions have been identified as most effective in reducing anxiety and depression in CRC patients. Additionally, journaling exercises and over-the-phone talk therapy sessions have been implemented. Though deemed less effective, these non-face-to-face interventions are economically inclusive and have been found to reduce both depression and anxiety in CRC patients.
Follow-up
The U.S. National Comprehensive Cancer Network and American Society of Clinical Oncology provide guidelines for the follow-up of colon cancer. A medical history and physical examination are recommended every 3 to 6 months for 2 years, then every 6 months for 5 years. Carcinoembryonic antigen blood level measurements follow the same timing, but are only advised for people with T2 or greater lesions who are candidates for intervention. A CT-scan of the chest, abdomen and pelvis can be considered annually for the first 3 years for people who are at high risk of recurrence (for example, those who had poorly differentiated tumors or venous or lymphatic invasion) and are candidates for curative surgery (with the aim to cure). A colonoscopy can be done after 1 year, except if it could not be done during the initial staging because of an obstructing mass, in which case it should be performed after 3 to 6 months. If a villous polyp, a polyp >1 centimeter or high-grade dysplasia is found, it can be repeated after 3 years, then every 5 years. For other abnormalities, the colonoscopy can be repeated after 1 year.
Routine PET or ultrasound scanning, chest X-rays, complete blood count or liver function tests are not recommended.
For people who have undergone curative surgery or adjuvant therapy (or both) to treat non-metastatic colorectal cancer, intense surveillance and close follow-up have not been shown to provide additional survival benefits.
Exercise
Exercise may be recommended in the future as secondary therapy for cancer survivors. In epidemiological studies, exercise may decrease colorectal cancer-specific mortality and all-cause mortality. Results for the specific amounts of exercise needed to observe a benefit were conflicting. These differences may reflect differences in tumour biology and the expression of biomarkers. People with tumors that lacked CTNNB1 expression (β-catenin), involved in the Wnt signalling pathway, required more than 18 metabolic equivalent (MET) hours per week, a measure of exercise, to observe a reduction in colorectal cancer mortality. The mechanism by which exercise benefits survival may involve immune surveillance and inflammation pathways. In clinical studies, a pro-inflammatory response was found in people with stage II–III colorectal cancer who underwent 2 weeks of moderate exercise after completing their primary therapy. Oxidative balance may be another possible mechanism for the benefits observed: a significant decrease in 8-oxo-dG was found in the urine of people who underwent 2 weeks of moderate exercise after primary therapy. Other possible mechanisms may involve metabolic hormones and sex-steroid hormones, although these pathways may also be involved in other types of cancer.
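MET-hours, the exposure measure used in these studies, are simply activity intensity (in METs) multiplied by weekly duration, summed across activities. A short worked example follows, using illustrative MET values of the kind listed in physical-activity compendia:

```python
# Weekly MET-hours: sum of (MET intensity x hours per week) per activity.
# MET values are illustrative; compendia list brisk walking at roughly
# 3.5-4 METs and running at roughly 8-10 METs depending on pace.
week = [
    ("brisk walking", 3.5, 3.0),   # (activity, METs, hours per week)
    ("running",       9.0, 1.0),
    ("cycling",       6.0, 1.5),
]

met_hours = sum(mets * hours for _, mets, hours in week)
print(f"{met_hours:.1f} MET-hours/week")          # 28.5
print("exceeds 18 MET-hour threshold:", met_hours > 18)
```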
Another potential biomarker may be p27. Survivors with tumors that expressed p27 who performed 18 or more MET-hours per week were found to have reduced colorectal cancer mortality compared to those with fewer than 18 MET-hours per week. Survivors without p27 expression who exercised were shown to have worse outcomes. The constitutive activation of the PI3K/AKT/mTOR pathway may explain the loss of p27, and excess energy balance may up-regulate p27 to stop cancer cells from dividing.
Physical activity provides benefits to people with non-advanced colorectal cancer. Improvements in aerobic fitness, cancer-related fatigue and health-related quality of life have been reported in the short term. However, these improvements were not observed at the level of disease-related mental health, such as anxiety and depression.
Prognosis
Fewer than 600 genes are linked to outcomes in colorectal cancer. These include both unfavorable genes, where high expression is related to poor outcome, such as the heat shock 70 kDa protein 1 (HSPA1A), and favorable genes, where high expression is associated with better survival, such as the putative RNA-binding protein 3 (RBM3). A poor prognosis also correlates with low fidelity of the pre-mRNA splicing apparatus, and thus a high number of deviant alternative splicing events.
Recurrence rates
The average five-year recurrence rate in people with colon cancer where surgery is successful is 5% for stage I cancers, 12% in stage II and 33% in stage III. However, depending on the number of risk factors it ranges from 9–22% in stage II and 17–44% in stage III. The average five-year recurrence rate in people with rectal cancer where surgery is successful is 9% for stage 0 (after pre-treatment) cancers, 8% for stage I cancers, 18% in stage II and 34% in stage III. Depending on the number of risk factors (0-2) the risk for distant metastasis in rectal cancer ranges from 4-11% in stage 0, 6-12% in stage I, 11-28% in stage II and 15-43% in stage III.
Recurrence rates have decreased over the past decades as a result of improvements in colorectal cancer management. The risk of recurrence after five years of surveillance remains very low.
Survival rates
In Europe the five-year survival rate for colorectal cancer is less than 60%. In the developed world about a third of people who get the disease die from it.
Survival is directly related to detection and the type of cancer involved, but overall is poor for symptomatic cancers, as they are typically quite advanced. Survival rates for early stage detection are about five times that of late stage cancers. People with a tumor that has not breached the muscularis mucosa (TNM stage Tis, N0, M0) have a five-year survival rate of 100%, while those with invasive cancer of T1 (within the submucosal layer) or T2 (within the muscular layer) have an average five-year survival rate of approximately 90%. Those with a more invasive tumor yet without node involvement (T3-4, N0, M0) have an average five-year survival rate of approximately 70%. People with positive regional lymph nodes (any T, N1-3, M0) have an average five-year survival rate of approximately 40%, while those with distant metastases (any T, any N, M1) have a poor prognosis and the five year survival ranges from <5 percent to 31 percent.
Five-year overall survival (OS) in rectal cancer after modern preoperative treatment and surgery was 90% for stage 0, 86% for stage I, 78% for stage II, and 67% for stage III according to a nationwide, population-based study.
While the impact of colorectal cancer on those who survive varies greatly, there will often be a need to adapt to both the physical and the psychological outcomes of the illness and its treatment. For example, it is common for people to experience incontinence, sexual dysfunction, problems with stoma care and fear of cancer recurrence after primary treatment has concluded.
A qualitative systematic review published in 2021 highlighted that there are three main factors influencing adaptation to living with and beyond colorectal cancer: support mechanisms, severity of late effects of treatment and psychosocial adjustment. Therefore, it is essential that people are offered appropriate support to help them better adapt to life following treatment.
Epidemiology
Globally, more than 1 million people get colorectal cancer every year, resulting in about 715,000 deaths as of 2010, up from 490,000 in 1990.
It is the second most common cause of cancer in women (9.2% of diagnoses) and the third most common in men (10.0%), and the fourth most common cause of cancer death after lung, stomach, and liver cancer. It is more common in developed than developing countries. Global incidence varies 10-fold, with the highest rates in Australia, New Zealand, Europe and the US and the lowest rates in Africa and South-Central Asia.
United States
In 2022, the incidence of colorectal cancer in the United States was anticipated to be about 151,000 adults, including over 106,000 new cases of colon cancer (some 54,000 men and 52,000 women) and about 45,000 new cases of rectal cancer. Since the 1980s, the incidence of colorectal cancer has decreased, dropping by about 2% annually from 2014 to 2018 in adults aged 50 and older, due mainly to improved screening. However, the incidence of colorectal cancer has increased in individuals aged 25 to 50. In early 2023, the American Cancer Society (ACS) reported that 20% of colon cancer diagnoses in 2019 were in patients under age 55, about double the rate in 1995, and that rates of advanced disease increased by about 3% annually in people younger than 50. It predicted that, in 2023, an estimated 19,550 diagnoses and 3,750 deaths would be in people younger than 50. Colorectal cancer also disproportionately affects the Black community, where the rates are the highest of any racial or ethnic group in the US. African Americans are about 20% more likely to get colorectal cancer and about 40% more likely to die from it than most other groups. Black Americans often experience greater obstacles to cancer prevention, detection, treatment, and survival, including systemic racial disparities that are complex and go beyond the obvious connection to cancer.
United Kingdom
In the UK, about 41,000 people a year get colon cancer, making it the fourth most common type.
Australia
One in 19 men and one in 28 women in Australia will develop colorectal cancer before the age of 75; one in 10 men and one in 15 women will develop it by 85 years of age.
Papua New Guinea
In Papua New Guinea and other Pacific Island States including the Solomon Islands, colorectal cancer is a very rare cancer compared to lung, stomach, liver or breast cancer. It is estimated that 8 in 100,000 people are likely to develop colorectal cancer every year, while 24 in 100,000 women are likely to develop breast cancer.
Early-onset colorectal cancer (EOCC)
A diagnosis of colorectal cancer in patients under 50 years of age is referred to as early-onset colorectal cancer (EOCC). Instances of EOCC have increased over the last decade, specifically in patient populations aged 20 to 40 years old throughout North America, Europe, Australia, and China.
Incidence by age
The incidence of colorectal cancer in younger populations has increased over the last decade. While advancements in diagnostic procedure may have some impact, reduced likelihood of screening among these populations suggests detection bias is not a major contributor to this trend. It is more likely that cohort effects are contributing.
The population experiencing the greatest rise in EOCC cases are men and women aged 20 to 29 years old, with incidence increasing by 7.9% per year between 2004 and 2016. Similarly, though less severe, men and women aged 30 to 39 experienced an increase in cases at a rate of 3.4% per year during that same time period. Despite these increases, the mortality rate for colorectal cancer has remained the same.
Risk factors
Risk factors associated with EOCC are akin to those of all colorectal cancer cases. Observed cohort-effects are likely the product of generational shifts in lifestyle and environmental factors.
Preventative screening
In 2018, the American Cancer Society modified their previous screening guideline for colorectal cancer from age 50 down to age 45 following the recognition of increasing cases of EOCC. Individuals under the age of 60 have been identified as most susceptible to non-participation in colorectal cancer screening.
History
Rectal cancer has been diagnosed in an Ancient Egyptian mummy who had lived in the Dakhleh Oasis during the Ptolemaic period.
Society and culture
In the United States, March is colorectal cancer awareness month.
Research
Preliminary in-vitro evidence suggests lactic acid bacteria (e.g., lactobacilli, streptococci or lactococci) may be protective against the development and progression of colorectal cancer through several mechanisms such as antioxidant activity, immunomodulation, promoting programmed cell death, antiproliferative effects, and epigenetic modification of cancer cells.
The Cancer Genome Atlas
The Colorectal Cancer Atlas, which integrates genomic and proteomic data pertaining to colorectal cancer tissues and cell lines, has been developed.
| Biology and health sciences | Cancer | null |
207036 | https://en.wikipedia.org/wiki/Copepod | Copepod | Copepods (meaning "oar-feet") are a group of small crustaceans found in nearly every freshwater and saltwater habitat. Some species are planktonic (living in the water column), some are benthic (living on the sediments), several species have parasitic phases, and some continental species may live in limnoterrestrial habitats and other wet terrestrial places, such as swamps, under leaf fall in wet forests, bogs, springs, ephemeral ponds, puddles, damp moss, or water-filled recesses of plants (phytotelmata) such as bromeliads and pitcher plants. Many live underground in marine and freshwater caves, sinkholes, or stream beds. Copepods are sometimes used as biodiversity indicators.
As with other crustaceans, copepods have a larval form. For copepods, the egg hatches into a nauplius form, with a head and a tail but no true thorax or abdomen. The larva molts several times until it resembles the adult and then, after more molts, reaches the adult stage. The nauplius form is so different from the adult form that it was once thought to be a separate species. The metamorphosis had, until 1832, led to copepods being misidentified as zoophytes or insects (albeit aquatic ones), or, for parasitic copepods, 'fish lice'.
Classification and diversity
Copepods are assigned to the class Copepoda within the superclass Multicrustacea in the subphylum Crustacea. An alternative treatment is as a subclass belonging to class Hexanauplia. They are divided into 10 orders. Some 13,000 species of copepods are known, and 2,800 of them live in fresh water.
Characteristics
Copepods vary considerably, but are typically long, with a teardrop-shaped body and large antennae. Like other crustaceans, they have an armoured exoskeleton, but they are so small that in most species, this thin armour and the entire body are almost totally transparent. Some polar copepods reach . Most copepods have a single median compound eye, usually bright red and in the centre of the transparent head. Subterranean species may be eyeless, and members of the genera Copilia and Corycaeus possess two eyes, each of which has a large anterior cuticular lens paired with a posterior internal lens to form a telescope. Like other crustaceans, copepods possess two pairs of antennae; the first pair is often long and conspicuous.
Free-living copepods of the orders Calanoida, Cyclopoida, and Harpacticoida typically have a short, cylindrical body, with a rounded or beaked head, although considerable variation exists in this pattern. The head is fused with the first one or two thoracic segments, while the remainder of the thorax has three to five segments, each with limbs. The first pair of thoracic appendages is modified to form maxillipeds, which assist in feeding. The abdomen is typically narrower than the thorax, and contains five segments without any appendages, except for some tail-like "rami" at the tip. Parasitic copepods (the other seven orders) vary widely in morphology and no generalizations are possible.
Because of their small size, copepods have no need of any heart or circulatory system (the members of the order Calanoida have a heart, but no blood vessels), and most also lack gills. Instead, they absorb oxygen directly into their bodies. Their excretory system consists of maxillary glands.
Behavior
The second pair of cephalic appendages in free-living copepods is usually the main time-averaged source of propulsion, beating like oars to pull the animal through the water. However, different groups have different modes of feeding and locomotion, ranging from almost immotile for several minutes (e.g. some harpacticoid copepods) to intermittent motion (e.g., some cyclopoid copepods) and continuous displacements with some escape reactions (e.g. most calanoid copepods).
Some copepods have extremely fast escape responses when a predator is sensed, and can jump with high speed over a few millimetres. Many species have neurons surrounded by myelin (for increased conduction speed), which is very rare among invertebrates (other examples are some annelids and malacostracan crustaceans like palaemonid shrimp and penaeids). Even rarer, the myelin is highly organized, resembling the well-organized wrapping found in vertebrates (Gnathostomata). Despite their fast escape response, copepods are successfully hunted by slow-swimming seahorses, which approach their prey so gradually that it senses no turbulence, then suck the copepod into their snout too suddenly for it to escape.
Several species are bioluminescent; this is assumed to be an antipredator defense mechanism.
Finding a mate in the three-dimensional space of open water is challenging. Some copepod females solve the problem by emitting pheromones, which leave a trail in the water that the male can follow. Copepods experience a low Reynolds number and therefore a high relative viscosity. One foraging strategy involves chemical detection of sinking marine snow aggregates and taking advantage of nearby low-pressure gradients to swim quickly towards food sources.
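To put the low-Reynolds-number point in concrete terms, a minimal sketch follows; the body length, swimming speeds, and water properties are assumed illustrative values, not figures from this article. At cruising speed a millimetre-scale copepod operates where viscous forces dominate, while its escape jumps briefly push it into an inertia-dominated regime:

    # Reynolds number Re = rho * v * L / mu for an assumed ~1 mm copepod.
    rho = 1025.0  # seawater density, kg/m^3 (approximate)
    mu = 1.0e-3   # dynamic viscosity of seawater, Pa*s (approximate)
    L = 1.0e-3    # body length, m (assumed)

    def reynolds(v: float) -> float:
        return rho * v * L / mu

    print(f"Cruising (3 mm/s): Re ~ {reynolds(3.0e-3):.1f}")   # ~3, viscous regime
    print(f"Escape jump (0.5 m/s): Re ~ {reynolds(0.5):.0f}")  # ~510, inertial regime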
Diet
Most free-living copepods feed directly on phytoplankton, catching cells individually. A single copepod can consume up to 373,000 phytoplankton per day. They generally have to clear the equivalent of about a million times their own body volume of water every day to cover their nutritional needs. Some of the larger species are predators of their smaller relatives. Many benthic copepods eat organic detritus or the bacteria that grow in it, and their mouth parts are adapted for scraping and biting. Herbivorous copepods, particularly those in rich, cold seas, store up energy from their food as oil droplets while they feed in the spring and summer on plankton blooms. These droplets may take up over half of the volume of their bodies in polar species. Many copepods (e.g., fish lice like the Siphonostomatoida) are parasites, and feed on their host organisms. In fact, three of the 10 known orders of copepods are wholly or largely parasitic, with another three comprising most of the free-living species.
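A million body volumes sounds enormous, but for a millimetre-scale animal it is a modest absolute volume. The following sketch assumes a body volume of roughly 0.1 mm^3 (an illustrative value, not one given in this article):

    # Sense check: daily water clearance for an assumed ~1 mm copepod.
    body_volume_m3 = 1.0e-10      # assumed: ~0.1 mm^3 body volume
    body_volumes_per_day = 1.0e6  # "about a million times" (from the text)

    cleared_litres = body_volume_m3 * body_volumes_per_day * 1000
    print(f"Water cleared per day: ~{cleared_litres:.2f} L")  # ~0.10 L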
Life cycle
Most nonparasitic copepods are holoplanktonic, meaning they stay planktonic for their entire lifecycles; harpacticoids, though free-living, tend to be benthic rather than planktonic.
During mating, the male copepod grips the female with his first pair of antennae, which is sometimes modified for this purpose. The male then produces an adhesive package of sperm and transfers it to the female's genital opening with his thoracic limbs. Eggs are sometimes laid directly into the water, but many species enclose them within a sac attached to the female's body until they hatch. In some pond-dwelling species, the eggs have a tough shell and can lie dormant for extended periods if the pond dries up.
Eggs hatch into nauplius larvae, which consist of a head with a small tail, but no thorax or true abdomen. The nauplius moults five or six times before emerging as a "copepodid larva". This stage resembles the adult, but has a simple, unsegmented abdomen and only three pairs of thoracic limbs. After a further five moults, the copepod takes on the adult form. The entire process from hatching to adulthood can take a week to a year, depending on the species and environmental conditions such as temperature and nutrition (e.g., egg-to-adult time in the calanoid Parvocalanus crassirostris is ~7 days at a warmer rearing temperature but 19 days at a cooler one).
Biophysics
Some copepods jump out of the water, a behaviour known as porpoising. The biophysics of this motion has been described by Waggett and Buskey (2007) and Kim et al. (2015).
Ecology
Planktonic copepods are important to global ecology and the carbon cycle. They are usually the dominant members of the zooplankton, and are major food organisms for small fish such as the dragonet, banded killifish, Alaska pollock, and other crustaceans such as krill in the ocean and in fresh water. Some scientists say they form the largest animal biomass on Earth. Copepods compete for this title with Antarctic krill (Euphausia superba). Calanus glacialis inhabits the edge of the Arctic icepack, especially in polynyas where light (and photosynthesis) is present, in which they alone comprise up to 80% of zooplankton biomass. They bloom as the ice recedes each spring. The ongoing large reduction in the annual ice pack minimum may force them to compete in the open ocean with the much less nourishing C. finmarchicus, which is spreading from the North Sea and the Norwegian Sea into the Barents Sea.
Because of their smaller size and relatively faster growth rates, and because they are more evenly distributed throughout more of the world's oceans, copepods almost certainly contribute far more to the secondary productivity of the world's oceans, and to the global ocean carbon sink, than krill, and perhaps more than all other groups of organisms together. The surface layers of the oceans are believed to be the world's largest carbon sink, absorbing about 2 billion tons of carbon a year, equivalent to perhaps a third of human carbon emissions, thus reducing their impact. Many planktonic copepods feed near the surface at night, then sink (by changing oils into more dense fats) into deeper water during the day to avoid visual predators. Their moulted exoskeletons, faecal pellets, and respiration at depth all bring carbon to the deep sea.
About half of the estimated 14,000 described species of copepods are parasitic, and many have extremely modified bodies adapted to their parasitic lifestyles. They attach themselves to bony fish, sharks, marine mammals, and many kinds of invertebrates such as corals, other crustaceans, molluscs, sponges, and tunicates. They also live as ectoparasites on some freshwater fish.
Copepods as parasitic hosts
In addition to being parasites themselves, copepods are subject to parasitic infection. The most common parasites are marine dinoflagellates of the genus Blastodinium, which are gut parasites of many copepod species. Twelve species of Blastodinium are described, the majority of which were discovered in the Mediterranean Sea. Most Blastodinium species infect several different hosts, but species-specific infection of copepods does occur. Generally, adult copepod females and juveniles are infected.
During the naupliar stage, the copepod host ingests the unicellular dinospore of the parasite. The dinospore is not digested and continues to grow inside the intestinal lumen of the copepod. Eventually, the parasite divides into a multicellular arrangement called a trophont. This trophont is considered parasitic, contains thousands of cells, and can be several hundred micrometers in length. The trophont is greenish to brownish in color as a result of well-defined chloroplasts. At maturity, the trophont ruptures and Blastodinium spp. are released from the copepod anus as free dinospore cells. Not much is known about the dinospore stage of Blastodinium and its ability to persist outside of the copepod host in relatively high abundances.
The copepod Calanus finmarchicus, which dominates the northeastern Atlantic coast, has been shown to be heavily infected by this parasite. A 2014 study in this region found up to 58% of collected C. finmarchicus females to be infected. In this study, Blastodinium-infected females had no measurable feeding rate over a 24-hour period. This is compared to uninfected females which, on average, ate 2.93 × 10⁴ cells per day. Blastodinium-infected females of C. finmarchicus exhibited characteristic signs of starvation, including decreased respiration, fecundity, and fecal pellet production. Though photosynthetic, Blastodinium spp. procure most of their energy from organic material in the copepod gut, thus contributing to host starvation. Underdeveloped or disintegrated ovaries and decreased fecal pellet size are a direct result of starvation in female copepods. Parasitic infection by Blastodinium spp. could have serious ramifications on the success of copepod species and the function of entire marine ecosystems. Blastodinium parasitism is not lethal, but has negative impacts on copepod physiology, which in turn may alter marine biogeochemical cycles.
Freshwater copepods of the Cyclops genus are the intermediate host of the Guinea worm (Dracunculus medinensis), the nematode that causes dracunculiasis disease in humans. This disease may be close to being eradicated through efforts by the U.S. Centers for Disease Control and Prevention and the World Health Organization.
Evolution
Despite their modern abundance, due to their small size and fragility, copepods are extremely rare in the fossil record. The oldest known fossils of copepods are from the late Carboniferous (Pennsylvanian) of Oman, around 303 million years old, which were found in a clast of bitumen from a glacial diamictite. The copepods present in the bitumen clast were likely residents of a subglacial lake which the bitumen had seeped upwards through while still liquid, before the clast subsequently solidified and was deposited by glaciers. Though most of the remains were undiagnostic, at least some likely belonged to the extant harpacticoid family Canthocamptidae, suggesting that copepods had already substantially diversified by this time. Possible microfossils of copepods are known from the Cambrian of North America. Transitions to parasitism have occurred within copepods independently at least 14 different times, with the oldest record of this being from damage to fossil echinoids done by cyclopoids from the Middle Jurassic of France, around 168 million years old.
Practical aspects
In marine aquaria
Live copepods are used in the saltwater aquarium hobby as a food source and are generally considered beneficial in most reef tanks. They are scavengers and also may feed on algae, including coralline algae. Live copepods are popular among hobbyists who are attempting to keep particularly difficult species such as the mandarin dragonet or scooter blenny. They are also popular to hobbyists who want to breed marine species in captivity. In a saltwater aquarium, copepods are typically stocked in the refugium.
Water supplies
Copepods are sometimes found in public main water supplies, especially systems where the water is not mechanically filtered, such as New York City, Boston, and San Francisco. This is not usually a problem in treated water supplies. In some tropical countries, such as Peru and Bangladesh, a correlation has been found between copepods' presence and cholera in untreated water, because the cholera bacteria attach to the surfaces of planktonic animals. The larvae of the guinea worm must develop within a copepod's digestive tract before being transmitted to humans. The risk of infection with these diseases can be reduced by filtering out the copepods (and other matter), for example with a cloth filter.
Copepods have been used successfully in Vietnam to control disease-bearing mosquitoes such as Aedes aegypti that transmit dengue fever and other human parasitic diseases.
The copepods can be added to water-storage containers where the mosquitoes breed. Copepods, primarily of the genera Mesocyclops and Macrocyclops (such as Macrocyclops albidus), can survive for periods of months in the containers, if the containers are not completely drained by their users. They attack, kill, and eat the younger first- and second-instar larvae of the mosquitoes. This biological control method is complemented by community trash removal and recycling to eliminate other possible mosquito-breeding sites. Because the water in these containers is drawn from uncontaminated sources such as rainfall, the risk of contamination by cholera bacteria is small, and in fact no cases of cholera have been linked to copepods introduced into water-storage containers. Trials using copepods to control container-breeding mosquitoes are underway in several other countries, including Thailand and the southern United States. The method, though, would be very ill-advised in areas where the guinea worm is endemic.
The presence of copepods in the New York City water supply system has caused problems for some Jewish people who observe kashrut. Copepods, being crustaceans, are not kosher, and larger specimens are not small enough to be ignored as nonfood microscopic organisms, since they can be seen with the naked eye; such specimens are therefore clearly not kosher. Other specimens, while technically visible, appear to the naked eye only as little white specks; these are more problematic, as it is debatable whether they are visible enough to be considered non-kosher.
When a group of rabbis in Brooklyn, New York, discovered these copepods in the summer of 2004, they triggered such debate in rabbinic circles that some observant Jews felt compelled to buy and install filters for their water. The water was ruled kosher by posek Yisrael Belsky, chief posek of the OU and one of the most scientifically literate poskim of his time. Meanwhile, Rabbi Dovid Feinstein, based on the ruling of Rabbi Yosef Shalom Elyashiv (the two widely considered to be the greatest poskim of their time), ruled it was not kosher until filtered. Several major kashrus organizations (e.g., OU Kashrus and Star-K) require tap water to have filters.
In popular culture
The Nickelodeon television series SpongeBob SquarePants features a copepod named Sheldon J. Plankton as a recurring character.
| Biology and health sciences | Crustaceans | Animals |
207040 | https://en.wikipedia.org/wiki/First%20aid%20kit | First aid kit | A first aid kit or medical kit is a collection of supplies and equipment used to give immediate medical treatment, primarily to treat injuries and other mild or moderate medical conditions. There is a wide variation in the contents of first aid kits based on the knowledge and experience of those putting it together, the differing first aid requirements of the area where it may be used, and variations in legislation or regulation in a given area.
The international standard for first aid kits is that they should be identified with the ISO graphical symbol for first aid (from ISO 7010), which is an equal white cross on a green background.
Standard kits often come in durable plastic boxes, fabric pouches or in wall mounted cabinets. The type of container will vary depending on the purpose, and they range in size from wallet-sized through to a large box. It is recommended that all kits are kept in a clean dust- and damp-proof container, in order to keep the contents safe and aseptic.
Kits should be checked regularly and restocked if any items are damaged or are out of date.
Appearance
The International Organization for Standardization (ISO) sets a standard for first aid kits of being green, with a white cross, in order to make them easily recognizable to anyone requiring first aid.
The ISO only endorses the use of the green background and white cross, and this has been adopted as a standard across many countries and regions, including the entire EU. First aid kits are sometimes marked (by an individual or organization) with a red cross on a white background, but use of this symbol by anyone but the International Committee of the Red Cross (ICRC) or associated agency is illegal under the terms of the First Geneva Convention, which designates the red cross as a protected symbol in all countries signatory to it. One of the few exceptions is in North America, where despite the passing of the First Geneva Convention in 1864, and its ratification in the United States in 1881, Johnson & Johnson has used the red cross as a mark on its products since 1887 and registered the symbol as a U.S. trademark for medicinal and surgical plasters in 1905.
Some first aid kits may also feature the Star of Life, normally associated with emergency medical services, which is also used to indicate that the service using it can offer an appropriate point of care. Though not supported by the ISO, a white cross on a red background is also widely recognized as a first aid symbol. However, for very small medical institutions and domestic purposes, the white cross on a plain green background is preferred.
Contents of first aid kits
First aid kits available via normal retail routes have traditionally been intended for treatment of minor injuries only. Typical contents include adhesive bandages, regular-strength pain medication, gauze, and low-grade disinfectant.
Specialized first aid kits are available for various regions, vehicles or activities, which may focus on specific risks or concerns related to the activity. For example, first aid kits sold through marine supply stores for use in watercraft may contain seasickness remedies.
Airway, breathing and circulation
First aid treats the ABCs (airway, breathing, and circulation) as the foundation of good treatment. For this reason, most modern commercial first aid kits (although not necessarily those assembled at home) will contain a suitable infection barrier for performing artificial respiration as part of cardiopulmonary resuscitation. Examples include:
Pocket mask
Face shield
Advanced first aid kits may also contain items such as:
Oropharyngeal airway
Nasopharyngeal airway
Bag valve mask
Manual aspirator or suction unit
Sphygmomanometer (blood pressure cuff)
Stethoscope
Some first aid kits, specifically those used by event first aiders and emergency services, include bottled oxygen for resuscitation and therapy.
Basic items
Basic items in a first aid kit include:
Adhesive dressings and bandages
Antiseptic solution (most commonly povidone iodine or hydrogen peroxide)
Cotton balls or swabs
Emergency blanket
Gauze sponge
Gloves
Hand sanitizer
Ice pack
Alcohol
Saline solution
Tweezers
Trauma injuries
Trauma injuries, such as bleeding, bone fractures or burns, are usually the main focus of most first aid kits, with items such as bandages and dressings being found in the vast majority of all kits.
Adhesive bandages (band-aids, sticking plasters) – can include ones shaped for particular body parts, such as knuckles
Moleskin – for blister treatment and prevention
Dressings (sterile, applied directly to the wound)
Sterile eye pads
Sterile gauze pads
Sterile non-adherent pads, containing a non-stick teflon layer
Petrolatum gauze pads, used as an occlusive (air-tight) dressing for sucking chest wounds, as well as a non-stick dressing
Bandages (for securing dressings, not necessarily sterile)
Gauze roller bandages – absorbent, breathable, and often elastic
Elastic bandages – used for sprains, and pressure bandages
Adhesive, elastic roller bandages (commonly called 'Vet wrap') – very effective pressure bandages and durable, waterproof bandaging
Triangular bandages – used as slings, tourniquets, to tie splints, and many other uses
Butterfly closure strips – used like stitches to close wounds; usually only included for higher-level response, as they can seal in infection in uncleaned wounds.
Saline – used for cleaning wounds or washing out foreign bodies from eyes
Soap – used with water to clean superficial wounds once bleeding is stopped
Antiseptic wipes or sprays for reducing the risk of infection in abrasions or around wounds. Dirty wounds must be cleaned for antiseptics to be effective.
Burn dressing, which is usually a sterile pad soaked in a cooling gel
Adhesive tape, hypoallergenic
Hemostatic agents may be included in first aid kits, especially military, combat or tactical kits, to promote clotting for severe bleeding.
Personal protective equipment
The use of personal protective equipment or PPE will vary by the kit, depending on its use and anticipated risk of infection. The adjuncts to artificial respiration are covered above, but other common infection control PPE includes:
Gloves, which are single-use and disposable, to prevent cross-infection
Goggles or other eye protection
Surgical mask or N95 mask to reduce the possibility of airborne infection transmission (sometimes placed on the patient instead of the caregivers; for this purpose, the mask should not have an exhale valve)
Apron
Instruments and equipment
Trauma shears for cutting clothing and general use
Scissors are less useful but often included (usually to cut medical equipment off a patient, or for other smaller cutting tasks)
Tweezers, for removing splinters, amongst others.
Lighter for sanitizing tweezers or pliers etc.
Alcohol pads for sanitizing equipment or unbroken skin. These are sometimes used to debride wounds; however, some training authorities advise against this, as it may kill cells, which bacteria can then feed on
Irrigation syringe – with catheter tip for cleaning wounds with sterile water, saline solution, or a weak iodine solution. The stream of liquid flushes out particles of dirt and debris.
Torch (also known as a flashlight)
Instant-acting chemical cold packs
Alcohol rub (hand sanitizer) or antiseptic hand wipes
Thermometer
Space blanket (lightweight plastic foil blanket, also known as "emergency blanket")
Penlight
Cotton swab
Cotton wool, for applying antiseptic lotions.
Safety pins, for pinning bandages.
Medication
Medication can be a controversial addition to a first aid kit, especially if it is for use on members of the public. It is, however, common for personal or family first aid kits to contain certain medications. Dependent on scope of practice, the main types are: life-saving medications, commonly found in first aid kits used by paid or assigned first aiders for members of the public or employees; painkillers, often found in personal kits but sometimes also in public provision; and symptomatic relief medicines, generally found only in personal kits.
Life saving
Aspirin – primarily used as an anti-platelet agent for central chest pain of suspected cardiac origin
Epinephrine autoinjector (brand name Epipen) – often included in kits for wilderness use and in places such as summer camps, to temporarily reduce airway swelling in the event of anaphylactic shock. Note that epinephrine does not treat the anaphylactic shock itself; it only opens the airway to prevent suffocation and allow time for other treatments to be used or help to arrive. The effects of epinephrine (adrenaline) are short-lived, and swelling of the throat may return, requiring the use of additional epipens until other drugs can take effect, or more advanced airway methods (such as intubation) can be established.
Diphenhydramine (brand name Benadryl) – Used to treat or prevent anaphylactic shock. Best administered as soon as symptoms appear when impending anaphylactic shock is suspected. Once the airway is restricted, oral drugs can no longer be administered until the airway is clear again, such as after the administration of an epipen. A common recommendation for adults is to take two 25 mg pills. Non-solid forms of the drug, such as liquid or dissolving strips, may be absorbed more rapidly than tablets or capsules, and are therefore more effective in an emergency.
Pain killers
Paracetamol (also known as acetaminophen) is one of the most common pain-killing medications, as either tablet or syrup.
Anti-inflammatory painkillers such as ibuprofen, naproxen or other NSAIDs can be used as part of treating pain from injuries such as sprains, strains and bone fractures.
Codeine is both a painkiller and anti-diarrheal.
Symptomatic relief
Anti-diarrheal medication such as loperamide – especially important in remote or third-world locations, where dehydration caused by diarrhea is a leading killer of children
Oral rehydration salts
Antihistamine, such as diphenhydramine
Poison treatments
Absorbents, such as activated charcoal, Enterosgel and Atoxyl.
Emetics to induce vomiting, such as syrup of ipecac, although first aid manuals now advise against inducing vomiting.
Smelling salts (ammonium carbonate)
Topical medications
Antiseptics / disinfectants
Antiseptic fluid, moist wipe or spray – For cleaning and disinfecting a wound. Typically benzalkonium chloride, which disinfects wounds with minimal stinging or harm to exposed tissue. Can also be used as an antibacterial hand wipe for the person providing aid.
Povidone iodine is an antiseptic in the form of liquid, swabstick, or towelette. Can be used in a weak dilution of clean water to prepare an irrigation solution for cleaning a wound.
Hydrogen peroxide is often included in home first aid kits, but is a poor choice for disinfecting wounds: it kills cells and delays healing
Alcohol pads – sometimes included for disinfecting instruments or unbroken skin (for example prior to draining a blister), or cleaning skin prior to applying an adhesive bandage. Alcohol should not be used on an open wound, as it kills skin cells and delays healing.
Medicated antiseptic ointments – for preventing infection in a minor wound after it is cleaned; not typically used on wounds that are bleeding heavily. Ointments typically contain one, two, or all three of the following antibacterial ingredients: neomycin, polymyxin B sulfate, and bacitracin zinc (those containing all three are typically called 'triple-antibiotic ointment').
Burn gel – a water-based gel that acts as a cooling agent and often includes a mild anaesthetic such as lidocaine and, sometimes, an antiseptic such as tea tree oil
Anti-itch ointment
Hydrocortisone cream or injection
Antihistamine cream containing diphenhydramine
Calamine lotion, for skin inflammations.
Anti-fungal cream
Tincture of benzoin – often in the form of an individually sealed swabstick or ampule, protects the skin and aids the adhesion of adhesive bandages, such as moleskin, Band-Aids, or wound closure ('butterfly') strips. Benzoin swabsticks are very prone to leaking and making a mess when kept in portable first aid kits; ampules are a more durable option. If swabsticks are used, it is advisable to keep them in a sealed zip lock bag.
Improvised uses
Besides the regular uses for first aid kits, they can be helpful in wilderness or survival situations. First aid kits can make up a part of a survival kit or a mini survival kit in addition to other tools.
Workplace first aid kits
In the United States, the Occupational Safety and Health Administration (OSHA) requires all job sites and workplaces to make available first aid equipment for use by injured employees. While providing regulations for some industries such as logging, in general the regulation lacks specifics on the contents of the first aid kit. This is understandable, as the regulation covers every means of employment, and different jobs have different types of injuries and different first-aid requirements. However, in a non-mandatory section, the OSHA regulations do refer to ANSI/ISEA Specification Z308.1 as the basis for the suggested minimum contents of a first aid kit. Another source for modern first aid kit information is United States Forest Service Specification 6170-6, which specifies the contents of several different-sized kits, intended to serve groups of differing size.
In general, the type of first aid facilities required in a workplace are determined by many factors, such as:
the laws and regulation of the state or territory in which it is located;
the type of industry concerned; for example, industries such as mining may have specific industry regulations detailing specialised instructions;
the type of hazards present in the workplace;
the number of employees in the workplace;
the number of different locations that the workplace is spread over;
the proximity to local services (doctors, hospital, ambulance).
Trauma, combat and tactical kits
Trauma kits, focused on major trauma, have been used by combat medics with increasing emphasis since the 1990s, and have also become commonplace in United States law enforcement and increasingly among ordinary American adults.
After the 2012 Sandy Hook school shooting, a collaborative effort between the American College of Surgeons (ACS), the Hartford Consensus, and federal agencies such as the Department of Defense and the Department of Homeland Security created the Stop the Bleed campaign, which focuses on teaching everyday Americans how to stop major bleeding and trauma, and has helped to popularize the availability and accessibility of IFAKs (individual first aid kits) and trauma kits.
Trauma kits tend to have fewer items for basic scrapes and abrasions, focusing instead on tourniquets, chest seals, hemostatic and untreated gauze for wound packing, and pressure bandages, among other items.
Historic first aid kits
As the understanding of first aid and lifesaving measures has advanced, and the nature of public health risks has changed, the contents of first aid kits have changed to reflect prevailing understandings and conditions. For example, earlier US Federal specifications for first aid kits included incision/suction-type snakebite kits and mercurochrome antiseptic. There are many historic components no longer used today, of course; some notable examples follow. As explained in the article on snakebite, the historic snakebite kit is no longer recommended. Mercurochrome was removed in 1998 by the US FDA from the generally recognized as safe category due to concerns over its mercury content. Another common item in early 20th century first aid kits, picric acid gauze for treating burns, is today considered a hazardous material due to its forming unstable and potentially explosive picrates when in contact with metal. Examples of modern additions include the CPR face shields and specific body-fluid barriers included in modern kits to assist in CPR and to help prevent the spread of bloodborne pathogens such as HIV.
| Biology and health sciences | General concepts | Health |
207249 | https://en.wikipedia.org/wiki/Cuckoo | Cuckoo | Cuckoos are birds in the Cuculidae family, the sole taxon in the order Cuculiformes. The cuckoo family includes the common or European cuckoo, roadrunners, koels, malkohas, couas, coucals, and anis. The coucals and anis are sometimes separated as distinct families, the Centropodidae and Crotophagidae, respectively. The cuckoo order Cuculiformes is one of three that make up the Otidimorphae, the other two being the turacos and the bustards. The family Cuculidae contains 150 species, which are divided into 33 genera.
The cuckoos are generally medium-sized, slender birds. Most species live in trees, though a sizeable minority are ground-dwelling. The family has a cosmopolitan distribution; the majority of species are tropical. Some species are migratory. The cuckoos feed on insects, insect larvae, and a variety of other animals, as well as fruit. Some species are brood parasites, laying their eggs in the nests of other species and giving rise to the terms "cuckoo's egg" and "cuckold" as metaphors, but most species raise their own young.
Cuckoos have played a role in human culture for thousands of years, appearing in Greek mythology as sacred to the goddess Hera. In Europe, the cuckoo is associated with spring, and with cuckoldry, for example in Shakespeare's Love's Labour's Lost. In India, cuckoos are sacred to Kamadeva, the god of desire and longing, whereas in Japan, the cuckoo symbolises unrequited love.
Description
Cuckoos are medium-sized birds that range in size from the little bronze cuckoo, at and , to moderately large birds, ranging from in length, such as the giant coua of Madagascar, the coral-billed ground-cuckoo of Indochina, and various large Indo-Pacific coucals such as the goliath coucal of Halmahera, Timor coucal, buff-headed coucal, ivory-billed coucal, violaceous coucal, and larger forms of the pheasant coucal. The channel-billed cuckoo, at and is the largest parasitic cuckoo. Generally, little sexual dimorphism in size occurs, but where it exists, it can be either the male or the female that is larger. One of the most important distinguishing features of the family is the feet, which are zygodactyl, meaning that the two inner toes point forward and the two outer backward. The two basic body forms are arboreal species (such as the common cuckoo), which are slender and have short tarsi, and terrestrial species (such as the roadrunners), which are more heavy set and have long tarsi. Almost all species have long tails that are used for steering in terrestrial species and as a rudder during flight in the arboreal species. The wing shape also varies with lifestyle, with the more migratory species such as the black-billed cuckoo possessing long, narrow wings capable of strong, direct flight, and the more terrestrial and sedentary cuckoos such as the coucals and malkohas having shorter rounded wings and a more laboured, gliding flight.
The subfamily Cuculinae comprises the brood-parasitic cuckoos of the Old World. They tend to conform to the classic shape, with (usually) long tails, short legs, long, narrow wings, and an arboreal lifestyle. The largest species, the channel-billed cuckoo, also has the most outsized bill in the family, resembling that of a hornbill. The subfamily Phaenicophaeinae comprises the nonparasitic cuckoos of the Old World, and includes the couas, malkohas, and ground cuckoos. They are more terrestrial cuckoos, with strong and often long legs and short, rounded wings. The subfamily typically has brighter plumage and brightly coloured bare skin around the eye. The coucals are another terrestrial Old World subfamily of long-tailed, long-legged, and short-winged cuckoos. They are large, heavyset birds with the largest, the greater black coucal, being around the same size as the channel-billed cuckoo. Genera of the subfamily Coccyzinae are arboreal and long-tailed, as well, with a number of large insular forms. The New World ground cuckoos are similar to the Asian ground-cuckoos in being long-legged and terrestrial, and include the long-billed roadrunner, which can reach speeds of when chasing prey. The final subfamily comprises the atypical anis: the small, clumsy true anis and the larger guira cuckoo. The anis have massive bills and smooth, glossy feathers.
The feathers of the cuckoos are generally soft, and often become waterlogged in heavy rain. Cuckoos often sun themselves after rain, and the anis hold their wings open in the manner of a vulture or cormorant while drying. Considerable variation in the plumage is exhibited by the family. Some species, particularly the brood parasites, have cryptic plumage, whereas others have bright and elaborate plumage. This is particularly true of the Chrysococcyx or glossy cuckoos, which have iridescent plumage. Some cuckoos have a resemblance to hawks in the genus Accipiter with barring on the underside; this apparently alarms potential hosts, allowing the female to access a host nest. The young of some brood parasites are coloured so as to resemble the young of the host. For example, the Asian koels breeding in India have black offspring to resemble their crow hosts, whereas in the Australian koels the chicks are brown like their honeyeater hosts. Sexual dimorphism in plumage is uncommon in the cuckoos, being most common in the parasitic Old World species. Cuckoos have 10 primaries and 9–13 secondaries. All species have 10 tail feathers, apart from the anis, which have eight.
Distribution and habitat
The cuckoos have a cosmopolitan distribution, ranging across all the world's continents except Antarctica. They are absent from the southwest of South America, the far north and northwest of North America, and the driest areas of the Middle East and North Africa (although they occur there as passage migrants). In the oceanic islands of the Atlantic and Indian Oceans they generally only occur as vagrants, but one species breeds on a number of Pacific islands and another is a winter migrant across much of the Pacific.
The Cuculinae are the most widespread subfamily of cuckoos, and are distributed across Europe, Asia, Africa, Australia, and Oceania. Amongst the Phaenicophaeinae, the malkohas and Asian ground cuckoos are restricted to southern Asia, the couas are endemic to Madagascar, and the yellowbill is widespread across Africa. The coucals are distributed from Africa through tropical Asia south into Australia and the Solomon Islands. The remaining three subfamilies have a New World distribution, all are found in both North and South America. The Coccyzinae reach the furthest north of the three subfamilies, breeding in Canada, whereas the anis reach as far north as Florida and the typical ground cuckoos are in the Southwest United States.
For the cuckoos, suitable habitat provides a source of food (principally insects and especially caterpillars) and a place to breed; for brood parasites the need is for suitable habitat for the host species. Cuckoos occur in a wide variety of habitats. The majority of species occur in forests and woodland, principally in the evergreen rainforests of the tropics, where they are typically but not exclusively arboreal. Some species inhabit or are even restricted to mangrove forests; these include the little bronze cuckoo of Australia, some malkohas, coucals, and the aptly named mangrove cuckoo of the New World. In addition to forests, some species of cuckoos occupy more open environments; this can include even arid areas such as deserts in the case of the greater roadrunner or the pallid cuckoo. Temperate migratory species, such as the common cuckoo, inhabit a wide range of habitats to make maximum use of the potential brood hosts, from reed beds (where they parasitise reed warblers) to treeless moors (where they parasitise meadow pipits).
Migration
Most species of cuckoo are sedentary, but some undertake regular seasonal migrations, and others undertake partial migrations over part of their range.
Species breeding at higher latitudes migrate to warmer climates during the winter due to food availability. The long-tailed koel, which breeds in New Zealand, flies to its wintering grounds in Polynesia, Micronesia, and Melanesia, a feat described as "perhaps the most remarkable overwater migration of any land bird." The yellow-billed cuckoo and black-billed cuckoo breed in North America and fly across the Caribbean Sea, a nonstop flight of . Other long migration flights include the lesser cuckoo, which flies from Africa to India, and the common cuckoo of Europe, which flies nonstop over the Mediterranean Sea and Sahara Desert on the voyage between Europe and central Africa.
Within Africa, 10 species make regular intracontinental migrations that are described as polarised; that is, they spend the nonbreeding season in the tropical centre of the continent and move north and south to breed in the more arid and open savannah and deserts. This contrasts with the situation in the Neotropics, where no species have this migration pattern, and tropical Asia, where only a single species does. About 83% of the Australian species are partial migrants within Australia or travel to New Guinea and Indonesia after the breeding season.
In some species, the migration is diurnal, as in the channel-billed cuckoo, or nocturnal, as in the yellow-billed cuckoo.
Behaviour and ecology
The cuckoos are, for the most part, solitary birds that seldom occur in pairs or groups. The biggest exception to this are the anis of the Americas, which have evolved cooperative breeding and other social behaviours. For the most part, the cuckoos are also diurnal as opposed to nocturnal, but many species call at night (see below). The cuckoos are also generally a shy and retiring family, more often heard than seen. The exception to this is again the anis, which are often extremely trusting towards humans and other species.
Most cuckoos are insectivores, and in particular are specialised in eating larger insects and caterpillars, including noxious, hairy types avoided by other birds. They are unusual among birds in processing their prey prior to swallowing, rubbing it back and forth on hard objects such as branches and then crushing it with special bony plates in the back of the mouth. They also take a wide range of other insects and animal prey. The lizard cuckoos of the Caribbean have, in the relative absence of birds of prey, specialised in taking lizards. Larger, ground types, such as coucals and roadrunners, also feed variously on snakes, lizards, small rodents, and other birds, which they bludgeon with their strong bills. Ground species may employ different techniques to catch prey. A study of two coua species in Madagascar found that Coquerel's coua obtained prey by walking and gleaning on the forest floor, whereas the red-capped coua ran and pounced on prey. Both species also showed seasonal flexibility in prey and foraging techniques.
The parasitic cuckoos are generally not recorded as participating in mixed-species feeding flocks, although some studies in eastern Australia found several species participated in the nonbreeding season, but were mobbed and unable to do so in the breeding season. Ground cuckoos of the genus Neomorphus are sometimes seen feeding in association with army ant swarms, although they are not obligate ant followers, as are some antbirds. The anis are ground feeders that follow cattle and other large mammals when foraging; in a similar fashion to cattle egrets, they snatch prey flushed by the cattle, and so enjoy higher foraging success rates.
Several koels, couas, and the channel-billed cuckoo feed mainly on fruit, but they are not exclusively frugivores. The parasitic koels and channel-billed cuckoo in particular consume mainly fruit when raised by frugivore hosts such as the Australasian figbird and pied currawong. Other species occasionally take fruit, as well. Couas consume fruit in the dry season when prey is harder to find.
Breeding
The cuckoos are an extremely diverse group of birds with regard to breeding systems. Most are monogamous, but exceptions exist. The anis and the guira cuckoo lay their eggs in communal nests, which are built by all members of the group. Incubation, brooding, and territorial defence duties are shared by all members of the group. Within these species, the anis breed as groups of monogamous pairs, but the guira cuckoos are not monogamous within the group, exhibiting a polygynandrous breeding system. This group nesting behaviour is not completely cooperative; females compete and may remove others' eggs when laying theirs. Eggs are usually only ejected early in the breeding season in the anis, but can be ejected at any time by guira cuckoos. Polyandry has been confirmed in the African black coucal and is suspected to occur in the other coucals, perhaps explaining the reversed sexual dimorphism in the group.
Most cuckoo species, including malkohas, couas, coucals, and roadrunners, and most other American cuckoos, build their own nests, although a large minority engages in brood parasitism (see below). Most of these species nest in trees or bushes, but the coucals lay their eggs in nests on the ground or in low shrubs. Though on some occasions nonparasitic cuckoos parasitize other species, the parent still helps feed the chick.
The nests of cuckoos vary in the same way as the breeding systems. The nests of malkohas and Asian ground cuckoos are shallow platforms of twigs, but those of coucals are globular or domed nests of grasses. Among the New World cuckoos, the ground cuckoos build saucer- or bowl-shaped nests.
Nonparasitic cuckoos, like most other nonpasserines, lay white eggs, but many of the parasitic species lay coloured eggs to match those of their passerine hosts.
The young of all species are altricial. Nonparasitic cuckoos leave the nest before they can fly, and some New World species have the shortest incubation periods among birds.
Brood parasitism
About 56 of the Old World species and three of the New World cuckoo species (pheasant, pavonine, and striped) are brood parasites, laying their eggs in the nests of other birds and giving rise to the metaphor "cuckoo's egg". These species are obligate brood parasites, meaning that they only reproduce in this fashion. The best-known example is the European common cuckoo. In addition to the above-noted species, others sometimes engage in nonobligate brood parasitism, laying their eggs in the nests of members of their own species, in addition to raising their own young. Brood parasitism has even been seen in greater roadrunners, where their eggs were seen in the nests of common ravens and northern mockingbirds. The shells of the eggs of brood-parasitic cuckoos are usually thicker and stronger than those of their hosts. This protects the egg if a host parent tries to damage it, and may make it resistant to cracking when dropped into a host nest. Cuckoo eggshells have two distinct layers. The thick, chalky outer layer found in some nesting cuckoos is absent from the eggs of most brood-parasitic species, with some exceptions, while Old World parasitic cuckoos have a thick outer layer that differs from that of nesting cuckoos.
Parasitic cuckoo advanced laying and hatching
The cuckoo egg hatches earlier than the host eggs, and the cuckoo chick grows faster; in most cases, the chick evicts the eggs and/or young of the host species. The chick has no time to learn this behavior, nor does any parent stay around to teach it, so it must be an instinct passed on genetically.
One reason for the cuckoo egg's hatching sooner is that, after the egg is fully formed, the female cuckoo holds it in her oviduct for another 24 hours prior to laying. This means that the egg has already had 24 hours of internal incubation. Furthermore, the cuckoo's internal temperature is 3–4 °C higher than the temperature at which the egg is incubated in the nest, and the higher temperature means that the egg incubates faster, so at the time it is laid, the egg has already had the equivalent of 30 hours incubation in a nest.
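The 24-to-30-hour step can be rationalized with a simple temperature-rate correction. The Q10 coefficient below is an assumed, typical biological value (not given in this article), under which developmental rate roughly doubles per 10 °C:

    # Hypothetical Q10-style check of the internal-incubation arithmetic.
    Q10 = 2.0              # assumed: development rate doubles per 10 C
    delta_T = 3.5          # C, midpoint of the 3-4 C difference cited
    internal_hours = 24.0  # egg held in the oviduct before laying

    rate_factor = Q10 ** (delta_T / 10)  # ~1.27x faster development
    equivalent_hours = internal_hours * rate_factor
    print(f"~{equivalent_hours:.1f} h of nest-equivalent incubation")  # ~30.6 h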
The chick encourages the host to keep pace with its high growth rate through its rapid begging call and its open mouth, which serves as a sign stimulus.
Evolutionary arms race between cuckoo and host
Since obligate brood parasites must successfully trick their hosts in order to reproduce, they have evolved adaptations at several stages of breeding. Parasitism imposes high costs on the host, leading to strong selection on hosts to recognize and reject parasitic eggs. The adaptations and counter-adaptations between hosts and parasites have led to a coevolutionary "arms race": if one of the species involved were to stop adapting, it would lose the race to the other, with reduced fitness for the losing species. Adaptation at the egg stage is the best-studied part of this arms race.
Cuckoos have various strategies for getting their eggs into host nests. Different species use different strategies based on host defensive strategies. Female cuckoos have secretive and fast laying behaviors, but in some cases, males have been shown to lure host adults away from their nests so that the females can lay their eggs in the nest. Some host species may directly try to prevent cuckoos from laying eggs in their nest in the first place: birds whose nests are at high risk of cuckoo parasitism are known to mob cuckoos to drive them out of the area. Parasitic cuckoos are grouped into gentes, with each gens specializing in a particular host. Some evidence suggests that the gentes are genetically different from one another.
Host egg mimicry
Female parasitic cuckoos sometimes specialize and lay eggs that closely resemble the eggs of their chosen host. Some birds are able to distinguish cuckoo eggs from their own, leading to those eggs least like the host's being thrown out of the nest. Parasitic cuckoos that show the highest levels of egg mimicry are those whose hosts exhibit high levels of egg rejection behavior. Some hosts do not exhibit egg rejection behavior, and in these cases the cuckoo eggs can look very dissimilar from the host eggs. A study of the European common cuckoo has also shown that females lay their eggs in the nests of hosts whose eggs look similar to their own. Other species of cuckoo lay "cryptic" eggs, which are dark in color when their hosts' eggs are light. This is a trick to hide the egg from the host, and is exhibited in cuckoos that parasitize hosts with dark, domed nests. Some adult parasitic cuckoos completely destroy the host's clutch if the cuckoo egg is rejected. In this case, raising the cuckoo chick costs the host less than the alternative, total clutch destruction.
Cuckoo egg physiology can limit the degree of mimetic accuracy. Because parasite chicks are on average larger than host chicks, there is a physiological constraint on egg size: a minimum size is needed to support a healthy cuckoo chick. In these cases, selection favours cuckoos laying smaller, better-matched eggs, but the physiological constraint prevents the species from doing so.
Mimicry may also be imperfect when selection pressure on the parasite is weak. Oriental reed warbler hosts do not discriminate between warbler-sized model eggs and slightly larger model cuckoo eggs. Since cuckoos in this situation can parasitize effectively despite laying eggs slightly larger than those of their hosts, there is little selective pressure to evolve "perfect" mimicry.
To select the most suitable host nests, cuckoos may "egg-match" as well. Daurian redstarts (Phoenicurus auroreus), another cuckoo host, lay clutches of either pink or blue eggs. Cuckoo eggs are more similar in reflectance and color to blue redstart eggs than pink ones. Furthermore, in-field observations revealed parasitism occurred more frequently in blue-egg redstart nests (19.3%) than in pink-egg redstart nests (7.9%). This suggests cuckoos prefer parasitizing nests containing eggs resembling their own. Experiments in the lab show similar findings: cuckoos parasitized artificial nests containing blue eggs more frequently than pink ones.
Two main hypotheses address the cognitive mechanisms that mediate hosts' discrimination of eggs. One is true recognition: the host compares eggs in its clutch to an internal template (learnt or innate) to identify whether parasitic eggs are present. However, memorizing a template of a parasitic egg is costly and imperfect, and the template is unlikely to match every host egg exactly. The other is the discordancy hypothesis: the host compares the eggs in the clutch and identifies the odd ones out. However, if parasitic eggs make up the majority of the clutch, the host ends up rejecting its own eggs. More recent studies have found that both mechanisms likely contribute to host discrimination of parasitic eggs, since each compensates for the limitations of the other.
Possible evidence of host benefits in the face of cuckoo parasitism
The parasitism is not necessarily entirely detrimental to the host species. A 16-year dataset was used in 2014 to find that carrion crow nests in a region of northern Spain were more successful overall (more likely to produce at least one crow fledgling) when parasitised by the great spotted cuckoo. The researchers attributed this to a strong-smelling predator-repelling substance secreted by cuckoo chicks when attacked, and noted that the interactions were not necessarily simply parasitic or mutualistic. This relationship was not observed for any other host species, or for any other species of cuckoo. Great spotted cuckoo chicks do not evict host eggs or young, and are smaller and weaker than carrion crow chicks, so both of these factors may have contributed to the effect observed.
However, subsequent research using a dataset from southern Spain failed to replicate these findings, and the second research team also criticised the methodology used in experiments described in the first paper. The authors of the first study have responded to points made in the second and both groups agree that further research is needed before the mutualistic effect can be considered proven.
Calls
Cuckoos are often highly secretive, and in many cases, best known for their wide repertoire of calls. These are usually relatively simple, resembling whistles, flutes, or hiccups. The calls are used to demonstrate ownership of a territory and to attract a mate. Within a species, the calls are remarkably consistent across the range, even in species with very large ranges. This suggests, along with the fact that many species are not raised by their true parents, that the calls of cuckoos are innate and not learnt. Although cuckoos are diurnal, many species call at night.
The cuckoo family gets its English and scientific names from the call of the male cuckoo, also familiar from cuckoo clocks. In most cuckoos, the calls are distinctive to particular species, and are useful for identification. Several cryptic species are best identified on the basis of their calls.
Phylogeny and evolution
The family Cuculidae was introduced by English zoologist William Elford Leach in a guide to the contents of the British Museum published in 1819.
The cuckoo fossil record is very sparse, and the family's evolutionary history remains unclear. Dynamopterus was an Oligocene genus of large cuckoo, though it may instead have been related to the cariamas.
A 2014 genome analysis by Erich Jarvis and collaborators found a clade of birds containing the orders Cuculiformes (cuckoos), Musophagiformes (turacos), and Otidiformes (bustards); this clade has been named Otidimorphae. The relationships between the three orders are unclear.
The phylogenetic relationships between the genera were resolved in a 2005 study by Michael Sorenson and Robert Payne, based solely on an analysis of mitochondrial DNA sequences. The number of species in each genus is taken from the list maintained by Frank Gill, Pamela Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
Taxonomy and systematics
For the living members of each genus, see the article List of cuckoo species.
The family Cuculidae contains 150 species which are divided into 33 genera. These numbers include two species that have become extinct in historical times: the snail-eating coua from Madagascar and the Saint Helena cuckoo which is placed in its own genus Nannococcyx.
Subfamily Crotophaginae – New World group-living cuckoos
Genus Guira – guira cuckoo
Genus Crotophaga – true anis (3 species)
Subfamily Neomorphinae – New World ground cuckoos
Genus Tapera – striped cuckoo
Genus Dromococcyx (2 species)
Genus Morococcyx – lesser ground cuckoo
Genus Geococcyx – roadrunners (2 species)
Genus Neomorphus – Neotropical ground-cuckoos (5 species)
Subfamily Centropodinae – coucals
Genus Centropus – (29 species)
Subfamily Couinae – Malagasy and South East Asian ground cuckoos
Genus Carpococcyx – Asian ground-cuckoos (3 species)
Genus Coua – couas (9 living species, 1 recently extinct)
Subfamily Cuculinae
Genus Rhinortha – Raffles's malkoha
Tribe Phaenicophaeini
Genus Ceuthmochares – yellowbills (2 species)
Genus Taccocua – sirkeer malkoha
Genus Zanclostomus – red-billed malkoha
Genus Phaenicophaeus – typical malkohas (6 species)
Genus Dasylophus – (2 species)
Genus Rhamphococcyx – yellow-billed malkoha
Genus Clamator – (4 species)
Genus Coccycua – formerly in Coccyzus and Piaya, includes Micrococcyx (3 species)
Genus Piaya – (2 species)
Genus Coccyzus – includes Saurothera and Hyetornis (13 species)
Tribe Cuculini – brood-parasitic cuckoos of the Old World
Genus Pachycoccyx – thick-billed cuckoo
Genus Microdynamis – dwarf koel
Genus Eudynamys – typical koels (3 species)
Genus Scythrops – channel-billed cuckoo
Genus Urodynamis – Pacific long-tailed cuckoo
Genus Chrysococcyx – bronze cuckoos (13 species)
Genus Cacomantis – (10 species)
Genus Surniculus – drongo-cuckoos (4 species)
Genus Cercococcyx – long-tailed cuckoos (4 species)
Genus Hierococcyx – hawk-cuckoos (8 species)
Genus Cuculus – typical cuckoos (11 species)
† Genus Nannococcyx – Saint Helena cuckoo (extinct)
Fossils
Genus Dynamopterus (fossil: Late Eocene/Early Oligocene of Caylus, Tarn-et-Garonne, France)
Genus Cursoricoccyx (fossil: Early Miocene of Logan County, USA) – Neomorphinae?
Cuculidae gen. et sp. indet. (fossil: Early Pliocene of Lee Creek Mine, USA)
Genus Neococcyx (fossil: Early Oligocene of Central North America)
Genus Eocuculus (fossil: Late Eocene of Teller County, USA)
In human culture
In Greek mythology, the god Zeus transformed himself into a cuckoo so that he could seduce the goddess Hera, to whom the bird was sacred. In England, William Shakespeare alludes to the common cuckoo's association with spring, and with cuckoldry, in the courtly springtime song in his play Love's Labour's Lost. In India, cuckoos are sacred to Kamadeva, the god of desire and longing, whereas in Japan, the cuckoo symbolises unrequited love. Cuckoos are a sacred animal to the Bon religion of Tibet. Additionally, the brood parasitism of some cuckoo species gave rise to the term "cuckold", referring to the husband of an adulterous wife.
The orchestral composition "On Hearing the First Cuckoo in Spring" by Frederick Delius imitates sounds of the cuckoo.
| Biology and health sciences | Cuculiformes | null |
207277 | https://en.wikipedia.org/wiki/Common%20cuckoo | Common cuckoo | The cuckoo, common cuckoo, European cuckoo or Eurasian cuckoo (Cuculus canorus) is a member of the cuckoo order of birds, Cuculiformes, which includes the roadrunners, the anis and the coucals.
This species is a widespread summer migrant to Europe and Asia, and winters in Africa. It is a brood parasite, which means it lays eggs in the nests of other bird species, particularly of dunnocks, meadow pipits, and reed warblers. Although its eggs are larger than those of its hosts, the eggs in each type of host nest resemble the host's eggs. The adult too is a mimic, in its case of the sparrowhawk; since that species is a predator, the mimicry gives the female time to lay her eggs without being attacked.
Taxonomy
The species' binomial name is derived from the Latin cuculus (the cuckoo) and canorus (melodious; from canere, meaning "to sing"). The cuckoo family gets its common name and genus name by onomatopoeia for the call of the male common cuckoo. The English word "cuckoo" comes from the Old French cucu, and its earliest recorded usage in English is from around 1240, in the song "Sumer Is Icumen In". The song is written in Middle English, and its first two lines are "Svmer is icumen in / Lhude sing cuccu". In modern English, this translates to "Summer has come in / Loudly sing, Cuckoo!".
There are four subspecies worldwide:
C. c. canorus, the nominate subspecies, was first described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae. It occurs from Ireland through Scandinavia, northern Russia and Siberia to Japan in the east, and from the Pyrenees through Turkey, Kazakhstan, Mongolia, northern China and Korea. It winters in Africa and South Asia.
C. c. bakeri, first described by Hartert in 1912, breeds in western China to the Himalayan foothills in northern India, Nepal, Myanmar, northwestern Thailand and southern China. During the winter it is found in Assam, East Bengal and southeastern Asia.
C. c. bangsi was first described by Oberholser in 1919 and breeds in Iberia, the Balearic Islands and North Africa, spending the winter in Africa.
C. c. subtelephonus, first described by Zarudny in 1914, breeds in Central Asia from Turkestan to southern Mongolia. It migrates to southern Asia and Africa for the winter.
Lifespan and demography
Although the common cuckoo's global population appears to be declining, it is classified as a species of least concern by the International Union for Conservation of Nature. It is estimated that the species numbers between 25 million and 100 million individuals worldwide, with around 12.6 million to 25.8 million of those birds breeding in Europe. The longest recorded lifespan of a common cuckoo in the United Kingdom is 6 years, 11 months and 2 days.
Description
The common cuckoo is 32–34 cm long from bill to tail, with a tail of 13–15 cm and a wingspan of 55–60 cm. The legs are short. It has a greyish, slender body and long tail, and looks similar to a sparrowhawk in flight, though its wingbeats are regular. During the breeding season, common cuckoos often settle on an open perch with drooped wings and raised tail. There is a rufous colour morph, which occurs occasionally in adult females but more often in juveniles. It has been hypothesized to have evolved as a deterrent to male harassment or host-species mobbing.
All adult males are slate-grey; the grey throat extends well down the bird's breast with a sharp demarcation to the barred underparts. The iris, orbital ring, the base of the bill and feet are yellow. Grey adult females have a pinkish-buff or buff background to the barring and neck sides, and sometimes small rufous spots on the median and greater coverts and the outer webs of the secondary feathers.
Rufous morph adult females have reddish-brown upperparts with dark grey or black bars. The black upperpart bars are narrower than the rufous bars, as opposed to rufous juvenile birds, where the black bars are broader.
Common cuckoos in their first autumn have variable plumage. Some have strongly-barred chestnut-brown upperparts, while others are plain grey. Rufous-brown birds have heavily barred upperparts with some feathers edged with creamy-white. All have whitish edges to the upper wing-coverts and primaries. The secondaries and greater coverts have chestnut bars or spots. In spring, birds hatched in the previous year may retain some barred secondaries and wing-coverts. The most obvious identification features of juvenile common cuckoos are the white nape patch and white feather fringes.
Common cuckoos moult twice a year: a partial moult in summer and a complete moult in winter. Males weigh around 130 g and females around 110 g. The common cuckoo looks very similar to the Oriental cuckoo, which is slightly shorter-winged on average. This resemblance extends even to the rufous morphs, which are also present in Oriental cuckoos. The presence of rufous morphs may well be ancestral to both Oriental cuckoos and common cuckoos.
Mimicry in adults
The barred underparts of the common cuckoo resemble those of the Eurasian sparrowhawk, a predator of adult birds. A study comparing the responses of Eurasian reed warblers, a host of cuckoo chicks, to manipulated taxidermy model cuckoos and sparrowhawks found that reed warblers were more aggressive to cuckoos with obscured underparts, suggesting that the resemblance to sparrowhawks is likely to help the cuckoo access the nests of potential hosts. Other small birds, great tits and blue tits, showed alarm and avoided attending feeders on seeing either (mounted) sparrowhawks or cuckoos; this implies that the cuckoo's hawklike appearance functions as protective mimicry, whether to reduce attacks by hawks or to make brood parasitism easier.
Hosts attack cuckoos more when they see neighbors mobbing cuckoos. The existence of the two plumage morphs in females may be due to frequency-dependent selection if this learning applies only to the morph that hosts see neighbors mob. In an experiment with dummy cuckoos of each morph and a sparrowhawk, reed warblers were more likely to attack both cuckoo morphs than the sparrowhawk, and even more likely to mob a certain cuckoo morph when they saw neighbors mobbing that morph, decreasing the reproductive success of that morph and selecting for the less common morph.
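The logic of frequency-dependent selection on the two female morphs can be sketched with a toy model. All numbers below are illustrative assumptions, not measured values; the only point is that when hosts preferentially learn to mob the commoner morph, neither morph can go to fixation.

```python
# Toy model: hosts are more likely to have learned to mob whichever
# female morph (grey vs rufous) is currently more common, so each
# morph's fitness falls with its own frequency.

def next_freq(p_grey, base=1.0, penalty=0.5):
    w_grey = base - penalty * p_grey          # grey fitness drops as grey gets common
    w_rufous = base - penalty * (1 - p_grey)  # same for rufous
    mean_w = p_grey * w_grey + (1 - p_grey) * w_rufous
    return p_grey * w_grey / mean_w           # standard replicator update

p = 0.9  # start with the grey morph common
for _ in range(30):
    p = next_freq(p)
print(round(p, 3))  # converges towards 0.5: both morphs are maintained
```

Negative frequency dependence of this kind is the textbook route to a stable polymorphism.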
Voice and courting
The male's song, goo-ko, is usually given from an open perch. During the breeding season the male typically gives this vocalisation at intervals of 1–1.5 seconds, in groups of 10–20 with a rest of a few seconds between groups. The female has a loud bubbling call. The song starts as a descending minor third early in the year in April; the interval then widens, through a major third to a fourth, as the season progresses, and in June the cuckoo "forgets its tune" and may make other calls such as ascending intervals. The wings are drooped when the bird is calling intensely, and when in the vicinity of a potential female, the male often wags its tail from side to side or pivots its body from side to side.
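The seasonal widening of the song's descending interval can be made concrete with just-intonation frequency ratios (a standard music-theory illustration, not measurements of actual cuckoo pitches):

```latex
\text{minor third} = \tfrac{6}{5} = 1.20, \qquad
\text{major third} = \tfrac{5}{4} = 1.25, \qquad
\text{perfect fourth} = \tfrac{4}{3} \approx 1.33
```

so the ratio of the first note's frequency to the second's grows from about 1.20 in April to about 1.33 as the season progresses.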
Distribution and habitat
Essentially a bird of open land, the common cuckoo is a widespread summer migrant to Europe and Asia, and winters in Africa. Birds arrive in Europe in April and leave in September. The common cuckoo has also occurred as a vagrant in countries including Barbados, the United States, Greenland, the Faroe Islands, Iceland, Indonesia, Palau, Seychelles, Taiwan and China. Between 1995 and 2015, the distribution of cuckoos within the UK shifted towards the north, with a 69% decline in England but a 33% increase in Scotland.
Behaviour
Food and feeding
The common cuckoo's diet consists of insects, with hairy caterpillars, which are distasteful to many birds, being a preferred speciality. It also occasionally eats eggs and chicks.
Breeding
The common cuckoo is an obligate brood parasite; it lays its eggs in the nests of other birds. Hatched cuckoo chicks may push host eggs out of the nest or be raised alongside the host's chicks. A female may visit up to 50 nests during a breeding season. Common cuckoos first breed at the age of two years.
Egg mimicry
More than 100 host species have been recorded: meadow pipit, dunnock and Eurasian reed warbler are the most common hosts in northern Europe; garden warbler, meadow pipit, pied wagtail and European robin in central Europe; brambling and common redstart in Finland; and great reed warbler in Hungary.
Female common cuckoos are divided into gentes – groups of females favouring a particular host species' nest and laying eggs that match those of that species in color and pattern. Evidence from mitochondrial DNA analyses suggests that each gens may have multiple independent origins, due to parasitism of specific hosts by different ancestors. One hypothesis for the inheritance of egg appearance mimicry is that this trait is inherited from the female only, suggesting that it is carried on the sex-determining W chromosome (females are WZ, males ZZ). A genetic analysis of gentes supports this proposal by finding significant differentiation in mitochondrial DNA, but not in microsatellite DNA. A second proposal is that the genes controlling egg characteristics are carried on autosomes rather than just the W chromosome. Another genetic analysis, of sympatric gentes, supports this second proposal by finding significant genetic differentiation in both microsatellite DNA and mitochondrial DNA. Considering the tendency of common cuckoo males to mate with multiple females and produce offspring raised by more than one host species, it would appear that males do not contribute to the maintenance of common cuckoo gentes. However, it was found that only nine percent of offspring were raised outside of their father's presumed host species. Therefore, both males and females may contribute to the maintenance of common cuckoo egg mimicry polymorphism. It is notable that most non-parasitic cuckoo species lay white eggs, like most non-passerines other than ground-nesters.
As the common cuckoo evolves to lay eggs that better imitate the host's eggs, the host species adapts and is more able to distinguish the cuckoo egg. A study of 248 common cuckoo and host eggs demonstrated that female cuckoos that parasitised common redstart nests laid eggs that matched better than those that targeted dunnocks. Spectroscopy was used to model how the host species saw the cuckoo eggs. Cuckoos that target dunnock nests lay white, brown-speckled eggs, in contrast to the dunnock's own blue eggs. The theory suggests that common redstarts have been parasitised by common cuckoos for longer, and so have evolved to be better than the dunnocks at noticing the cuckoo eggs. The cuckoo, over time, has needed to evolve more accurate mimicking eggs to successfully parasitise the redstart. In contrast, cuckoos do not seem to have experienced evolutionary pressure to develop eggs which closely mimic the dunnock's, as dunnocks do not seem to be able to distinguish between the two species' eggs, despite the significant colour differences. The dunnock's inability to distinguish the eggs suggests that they have not been parasitised for very long, and have not yet evolved defences against it, unlike the redstart.
Studies performed on great reed warbler nests in central Hungary showed an "unusually high" frequency of common cuckoo parasitism, with 64% of the nests parasitised. Of the nests targeted by cuckoos, 64% contained one cuckoo egg, 23% had two, 10% had three and 3% had four common cuckoo eggs. In total, 58% of the common cuckoo eggs were laid in nests that were multiply parasitised. When laying eggs in nests already parasitised, the female cuckoos removed one egg at random, showing no discrimination between the great reed warbler eggs and those of other cuckoos.
It was found that nests close to cuckoo perches were most vulnerable: multiply parasitised nests were closest to the vantage points, and unparasitised nests were farthest away. Nearly all the nests "in close vicinity" to the vantage points were parasitised. More visible nests were more likely to be selected by the common cuckoos. Female cuckoos use their vantage points to watch for potential hosts and find it easier to locate the more visible nests while they are egg-laying; recent studies highlight that host alarm calls might also play an important role during nest searching. In addition, cuckoos tend to lay their eggs on the host's clutch initiation day or one day before.
The great reed warblers' responses to the common cuckoo eggs varied: 66% accepted the egg(s); 12% ejected them; 20% abandoned the nests entirely; 2% buried the eggs. 28% of the cuckoo eggs were described as "almost perfect" in their mimesis of the host eggs, and the warblers rejected "poorly mimetic" cuckoo eggs more often. The degree of mimicry made it difficult for both the great reed warblers and the observers to tell the eggs apart.
About 7% of the egg's weight is shell. Research has shown that the female common cuckoo is able to keep its egg inside its body for an extra 24 hours before laying it in a host's nest. This means the cuckoo chick can hatch before the host's chicks do, and it can then eject the unhatched eggs from the nest. Scientists incubated common cuckoo eggs for 24 hours at the bird's body temperature of 40 °C and examined the embryos, which were found to be "much more advanced" than those of the other species studied. The idea of 'internal incubation' was first put forward in 1802, and 18th- and 19th-century egg collectors had reported finding that cuckoo embryos were more advanced than those of the host species.
A study using digital photography and spectrometry, together with an automatic analytical approach to analyse cuckoo eggs and predict the identity of the females that laid them based on egg appearance, showed that individual cuckoo females lay eggs of relatively constant appearance, and that eggs laid by more genetically distant females differ more in colour.
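The general idea behind such egg-to-female attribution is that eggs laid by the same female form tight clusters in colour space. The sketch below uses invented (R, G, B) features and off-the-shelf k-means clustering purely as an illustration; the study's actual photographic and spectrometric pipeline was different.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical mean (R, G, B) reflectance values for nine eggs,
# three each from three unknown females.
eggs = np.array([
    [0.61, 0.55, 0.40], [0.60, 0.56, 0.41], [0.62, 0.54, 0.39],
    [0.45, 0.50, 0.52], [0.46, 0.49, 0.53], [0.44, 0.51, 0.51],
    [0.70, 0.60, 0.30], [0.71, 0.59, 0.31], [0.69, 0.61, 0.29],
])

# Cluster eggs by colour; eggs sharing a label are attributed
# to the same female.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(eggs)
print(labels)
```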
Complete list of the common cuckoo's nest hosts, after Aleksander D. Numerov (2003); the names of birds in whose nests cuckoo eggs and chicks were found more than 10 times are shown in bold:
Yellow-bellied warbler (Abroscopus superciliaris)
Common linnet (Acanthis cannabina)
Common redpoll (Acanthis flammea)
Paddyfield warbler (Acrocephalus agricola)
Moustached warbler (Acrocephalus melanopogon)
Great reed warbler (Acrocephalus arundinaceus)
Black-browed reed warbler (Acrocephalus bistrigiceps)
Blyth's reed warbler (Acrocephalus dumetorum)
Aquatic warbler (Acrocephalus paludicola)
Marsh warbler (Acrocephalus palustris)
Sedge warbler (Acrocephalus schoenobaenus)
Eurasian reed warbler (Acrocephalus scirpaceus)
Clamorous reed warbler (Acrocephalus stentoreus)
Rusty-fronted barwing (Actinodura egertoni)
Long-tailed tit (Aegithalos caudatus)
Eurasian skylark (Alauda arvensis)
Dusky fulvetta (Alcippe brunnea)
Rufous-winged fulvetta (Alcippe castaneceps)
Yellow-throated fulvetta (Alcippe cinerea)
Nepal fulvetta (Alcippe nipalensis)
Brown-cheeked fulvetta (Alcippe poioicephala)
Tawny pipit (Anthus campestris)
Red-throated pipit (Anthus cervinus)
Blyth's pipit (Anthus godlewskii)
Olive-backed pipit (Anthus hodgsoni)
Australasian pipit (Anthus novaeseelandiae)
Meadow pipit (Anthus pratensis)
Rosy pipit (Anthus roseatus)
Buff-bellied pipit (Anthus rubescens)
Water pipit (Anthus spinoletta)
Upland pipit (Anthus sylvanus)
Tree pipit (Anthus trivialis)
Little spiderhunter (Arachnothera longirostris)
Streaked spiderhunter (Arachnothera magna)
Lesser shortwing (Brachypteryx leucophrys)
White-browed shortwing (Brachypteryx montana)
Red-capped lark (Calandrella cinerea)
Lapland longspur (Calcarius lapponicus)
Carduelis caniceps
European goldfinch (Carduelis carduelis)
Twite (Carduelis flavirostris)
Common rosefinch (Carpodacus erythrinus)
Pallas's rosefinch (Carpodacus roseus)
Short-toed treecreeper (Certhia brachydactyla)
Eurasian treecreeper (Certhia familiaris)
Cetti's warbler (Cettia cetti)
Brown-flanked bush warbler (Cettia fortipes)
Rufous-tailed scrub robin (Cercotrichas galactotes)
European greenfinch (Chloris chloris)
Grey-capped greenfinch (Chloris sinica)
Golden-fronted leafbird (Chloropsis aurifrons)
Orange-bellied leafbird (Chloropsis hardwickii)
Brown dipper (Cinclus pallasii)
Zitting cisticola (Cisticola juncidis)
Golden-headed cisticola (Cisticola exilis)
Hawfinch (Coccothraustes coccothraustes)
Purple cochoa (Cochoa purpurea)
Green cochoa (Cochoa viridis)
White-rumped shama (Copsychus malabaricus)
Oriental magpie-robin (Copsychus saularis)
Black-winged cuckooshrike (Coracina melaschistos)
Grey-headed canary-flycatcher (Culicicapa ceylonensis)
Azure-winged magpie (Cyanopica cyanus)
Blue-and-white flycatcher (Cyanoptila cyanomelana)
Blue-throated blue flycatcher (Cyornis rubeculoides)
Common house martin (Delichon urbica)
Bronzed drongo (Dicrurus aeneus)
Ashy drongo (Dicrurus leucophaeus)
Yellow-breasted bunting (Emberiza aureola)
Red-headed bunting (Emberiza bruniceps)
Corn bunting (Emberiza calandra)
Yellow-browed bunting (Emberiza chrysophrys)
Rock bunting (Emberiza cia)
Meadow bunting (Emberiza cioides)
Cirl bunting (Emberiza cirlus)
Yellowhammer (Emberiza citrinella)
Yellow-throated bunting (Emberiza elegans)
Chestnut-eared bunting (Emberiza fucata)
Ortolan bunting (Emberiza hortulana)
Emberiza icterica
Black-headed bunting (Emberiza melanocephala)
Little bunting (Emberiza pusilla)
Rustic bunting (Emberiza rustica)
Chestnut bunting (Emberiza rutila)
Common reed bunting (Emberiza schoeniclus)
Black-faced bunting (Emberiza spodocephala)
Tristram's bunting (Emberiza tristrami)
Black-backed forktail (Enicurus immaculatus)
Spotted forktail (Enicurus maculatus)
Slaty-backed forktail (Enicurus schistaceus)
European robin (Erithacus rubecula)
Horned lark (Eremophila alpestris)
Japanese grosbeak (Eophona personata)
Slaty-backed flycatcher (Ficedula hodgsonii)
European pied flycatcher (Ficedula hypoleuca)
Narcissus flycatcher (Ficedula narcissina)
Red-breasted flycatcher (Ficedula parva)
Ultramarine flycatcher (Ficedula superciliaris)
Slaty-blue flycatcher (Ficedula tricolor)
Common chaffinch (Fringilla coelebs)
Brambling (Fringilla montifringilla)
Crested lark (Galerida cristata)
Streaked laughingthrush (Garrulax lineatus)
Ashy bulbul (Hemixos flavala)
Rufous-backed sibia (Heterophasia annectans)
Grey sibia (Heterophasia gracilis)
Booted warbler (Iduna caligata)
Icterine warbler (Hippolais icterina)
Eastern olivaceous warbler (Hippolais pallida)
Melodious warbler (Hippolais polyglotta)
Sykes's warbler (Iduna rama)
Barn swallow (Hirundo rustica)
Black-naped monarch (Hypothymis azurea)
Malagasy bulbul (Hypsipetes madagascariensis)
Mountain bulbul (Ixos mcclellandi)
White-bellied redstart (Luscinia phoenicuroides)
Bull-headed shrike (Lanius bucephalus)
Red-backed shrike (Lanius collurio)
Brown shrike (Lanius cristatus)
Great grey shrike (Lanius excubitor)
Lesser grey shrike (Lanius minor)
Long-tailed shrike (Lanius schach)
Woodchat shrike (Lanius senator)
Tiger shrike (Lanius tigrinus)
Silver-eared mesia (Leiothrix argentauris)
Red-billed leiothrix (Leiothrix lutea)
White-browed tit-warbler (Leptopoecile sophiae)
Red-faced liocichla (Liocichla phoenicea)
River warbler (Locustella fluviatilis)
Savi's warbler (Locustella luscinioides)
Brown bush warbler (Locustella luteoventris)
Common grasshopper warbler (Locustella naevia)
Middendorff's grasshopper warbler (Locustella ochotensis)
Woodlark (Lullula arborea)
Indian blue robin (Luscinia brunnea)
Siberian rubythroat (Calliope calliope)
Siberian blue robin (Luscinia cyane)
Thrush nightingale (Luscinia luscinia)
Common nightingale (Luscinia megarhynchos)
Himalayan rubythroat (Luscinia pectoralis)
Bluethroat (Luscinia svecica)
Pin-striped tit-babbler (Macronous gularis)
Striated grassbird (Megalurus palustris)
Blue-winged minla (Minla cyanouroptera)
Blue-capped rock thrush (Monticola cinclorhyncha)
Monticola erythrogastra
White-throated rock thrush (Monticola gularis)
Chestnut-bellied rock thrush (Monticola rufiventris)
Common rock thrush (Monticola saxatilis)
Blue rock thrush (Monticola solitarius)
White wagtail (Motacilla alba)
Grey wagtail (Motacilla cinerea)
Citrine wagtail (Motacilla citreola)
Western yellow wagtail (Motacilla flava)
Japanese wagtail (Motacilla grandis)
Motacilla sordidus
Brown-breasted flycatcher (Muscicapa muttui)
Spotted flycatcher (Muscicapa striata)
Verditer flycatcher (Eumyias thalassinus)
White-winged grosbeak (Mycerobas carnipes)
Blue whistling thrush (Myophonus caeruleus)
Streaked wren-babbler (Napothera brevicaudata)
Eyebrowed wren-babbler (Napothera epilepidota)
Large niltava (Niltava grandis)
Small niltava (Niltava macgrigoriae)
Rufous-bellied niltava (Niltava sundara)
Western black-eared wheatear (Oenanthe hispanica)
Isabelline wheatear (Oenanthe isabellina)
Northern wheatear (Oenanthe oenanthe)
Pied wheatear (Oenanthe pleschanka)
Eurasian golden oriole (Oriolus oriolus)
Dark-necked tailorbird (Orthotomus atrogularis)
Common tailorbird (Orthotomus sutorius)
Bearded reedling (Panurus biarmicus)
Black-breasted parrotbill (Paradoxornis flavirostris)
Vinous-throated parrotbill (Sinosuthora webbiana)
Eurasian blue tit (Cyanistes caeruleus)
Great tit (Parus major)
Yellow-cheeked tit (Parus spilonotus)
House sparrow (Passer domesticus)
Spanish sparrow (Passer hispaniolensis)
Eurasian tree sparrow (Passer montanus)
Russet sparrow (Passer rutilans)
Spot-throated babbler (Pellorneum albiventre)
Buff-breasted babbler (Pellorneum tickelli)
Puff-throated babbler (Pellorneum ruficeps)
Grey-chinned minivet (Pericrocotus solaris)
Daurian redstart (Phoenicurus auroreus)
Eversmann's redstart (Phoenicurus erythronotus)
Blue-fronted redstart (Phoenicurus frontalis)
Plumbeous water redstart (Phoenicurus fuliginosus)
Moussier's redstart (Phoenicurus moussieri)
Black redstart (Phoenicurus ochruros)
Common redstart (Phoenicurus phoenicurus)
Thick-billed warbler (Phragmaticola aedon)
Western Bonelli's warbler (Phylloscopus bonelli)
Arctic warbler (Phylloscopus borealis)
Yellow-vented warbler (Phylloscopus cantator)
Common chiffchaff (Phylloscopus collybita)
Sulphur-bellied warbler (Phylloscopus griseolus)
Yellow-browed warbler (Phylloscopus inornatus)
Pallas's leaf warbler (Phylloscopus proregulus)
Blyth's leaf warbler (Phylloscopus reguloides)
Wood warbler (Phylloscopus sibilatrix)
Radde's warbler (Phylloscopus schwarzi)
Willow warbler (Phylloscopus trochilus)
Eurasian magpie (Pica pica)
Scaly-breasted cupwing (Pnoepyga albiventer)
Pygmy cupwing (Pnoepyga pusilla)
Rusty-cheeked scimitar babbler (Pomatorhinus erythrogenys)
Coral-billed scimitar babbler (Pomatorhinus ferruginosus)
Streak-breasted scimitar babbler (Pomatorhinus ruficollis)
White-browed scimitar babbler (Pomatorhinus schisticeps)
Black-throated prinia (Prinia atrogularis)
Himalayan prinia (Prinia crinigera)
Yellow-bellied prinia (Prinia flaviventris)
Graceful prinia (Prinia gracilis)
Rufescent prinia (Prinia rufescens)
Tawny-flanked prinia (Prinia subflava)
Black-throated accentor (Prunella atrogularis)
Alpine accentor (Prunella collaris)
Brown accentor (Prunella fulvescens)
Dunnock (Prunella modularis)
Robin accentor (Prunella rubeculoides)
Rufous-breasted accentor (Prunella strophiata)
Trilling shrike-babbler (Pteruthius aenobarbus)
Red-vented bulbul (Pycnonotus cafer)
Flavescent bulbul (Pycnonotus flavescens)
Himalayan bulbul (Pycnonotus leucogenys)
Black-capped bulbul (Pycnonotus melanicterus)
Eurasian bullfinch (Pyrrhula pyrrhula)
Goldcrest (Regulus regulus)
White-throated fantail (Rhipidura albicollis)
White-browed fantail (Rhipidura aureola)
Desert finch (Rhodospiza obsoleta)
Long-billed wren-babbler (Rimator malacoptilus)
Pied bush chat (Saxicola caprata)
Grey bush chat (Saxicola ferrea)
White-tailed stonechat (Saxicola leucurus)
Whinchat (Saxicola rubetra)
Siberian stonechat (Saxicola maurus)
Streaked scrub warbler (Scotocerca inquieta)
Green-crowned warbler (Seicercus burkii)
Chestnut-crowned warbler (Seicercus castaniceps)
Grey-hooded warbler (Phylloscopus xanthoschistos)
Atlantic canary (Serinus canaria)
Red-fronted serin (Serinus pusillus)
Indian nuthatch (Sitta castanea)
Velvet-fronted nuthatch (Sitta frontalis)
Tawny-breasted wren-babbler (Spelaeornis longicaudatus)
Eurasian siskin (Spinus spinus)
Crested finchbill (Spizixos canifrons)
Grey-throated babbler (Stachyris nigriceps)
Rufous-fronted babbler (Stachyris rufifrons)
Common starling (Sturnus vulgaris)
Eurasian blackcap (Sylvia atricapilla)
Garden warbler (Sylvia borin)
Eastern subalpine warbler (Sylvia cantillans)
Common whitethroat (Sylvia communis)
Spectacled warbler (Sylvia conspicillata)
Lesser whitethroat (Sylvia curruca)
Tristram's warbler (Sylvia deserticola)
Western Orphean warbler (Sylvia hortensis)
Sardinian warbler (Sylvia melanocephala)
Barred warbler (Sylvia nisoria)
Dartford warbler (Sylvia undata)
Indian paradise flycatcher (Terpsiphone paradisi)
Grey-bellied tesia (Tesia cyaniventer)
Chestnut-capped babbler (Timalia pileata)
Brown-capped laughingthrush (Trochalopteron austeni)
Striped laughingthrush (Trochalopteron virgatum)
Eurasian wren (Troglodytes troglodytes)
Japanese thrush (Turdus cardis)
Black-breasted thrush (Turdus dissimilis)
Redwing (Turdus iliacus)
Common blackbird (Turdus merula)
Eyebrowed thrush (Turdus obscurus)
Song thrush (Turdus philomelos)
Fieldfare (Turdus pilaris)
Ring ouzel (Turdus torquatus)
Tickell's thrush (Turdus unicolor)
Mistle thrush (Turdus viscivorus)
Long-tailed rosefinch (Uragus sibiricus)
Pale-footed bush warbler (Urosphena pallidipes)
Whiskered yuhina (Yuhina flavicollis)
Rufous-vented yuhina (Yuhina occipitalis)
Orange-headed thrush (Geokichla citrina)
Dark-sided thrush (Zoothera marginata)
Long-billed thrush (Zoothera monticola)
Indian white-eye (Zosterops palpebrosa)
Chicks
The naked, altricial chick hatches after 11–13 days. It methodically evicts all host progeny from host nests. It is a much larger bird than its hosts, and needs to monopolize the food supplied by the parents. The chick will roll the other eggs out of the nest by pushing them with its back over the edge. If the host's eggs hatch before the cuckoo's, the cuckoo chick will push the other chicks out of the nest in a similar way. At 14 days old, the common cuckoo chick is about three times the size of an adult Eurasian reed warbler.
The necessity of eviction behavior is unclear. One hypothesis is that competing with host chicks leads to decreased cuckoo chick weight, which is selective pressure for eviction behavior. An analysis of the amount of food provided to common cuckoo chicks by host parents in the presence and absence of host siblings showed that when competing against host siblings, cuckoo chicks did not receive enough food, showing an inability to compete. Selection pressure for eviction behavior may come from cuckoo chicks lacking the correct visual begging signals, hosts distributing food to all nestlings equally, or host recognition of the parasite. Another hypothesis is that decreased cuckoo chick weight is not selective pressure for eviction behavior. An analysis of resources provided to cuckoo chicks in the presence and absence of host siblings also showed that the weights of cuckoos raised with host chicks were much smaller upon fledging than cuckoos raised alone, but within 12 days cuckoos raised with siblings grew faster than cuckoos raised alone and made up for developmental differences, showing a flexibility that would not necessarily select for eviction behavior.
Species whose broods are parasitised by the common cuckoo have evolved to discriminate against cuckoo eggs but not chicks. Experiments have shown that common cuckoo chicks persuade their host parents to feed them by making a rapid begging call that sounds "remarkably like a whole brood of host chicks". The researchers suggested that "the cuckoo needs vocal trickery to stimulate adequate care to compensate for the fact that it presents a visual stimulus of just one gape". However, a cuckoo chick needs the amount of food of a whole brood of host nestlings, and it struggles to elicit that much from the host parents with only the vocal stimulus. This may reflect a tradeoff—the cuckoo chick benefits from eviction by receiving all the food provided, but faces a cost in being the only one influencing feeding rate. For this reason, cuckoo chicks exploit host parental care by remaining with the host parent longer than host chicks do, both before and after fledging.
Common cuckoo chicks fledge about 17–21 days after hatching, compared to 12–13 days for Eurasian reed warblers. If the hen cuckoo is out-of-phase with a clutch of Eurasian reed warbler eggs, she will eat them all so that the hosts are forced to start another brood.
The common cuckoo's behaviour was first observed and described by Aristotle, and the combination of behaviour and anatomical adaptation by Edward Jenner, who was elected a Fellow of the Royal Society in 1788 for this work rather than for his development of the smallpox vaccine. The behaviour was first documented on film in 1922 by Edgar Chance and Oliver G. Pike, in their film The Cuckoo's Secret.
A study in Japan found that young common cuckoos probably acquire species-specific feather lice from body-to-body contact with other cuckoos between the time of leaving the nest and returning to the breeding area in spring. A total of 21 nestlings were examined shortly before they left their hosts' nests, and none carried feather lice. However, young birds returning to Japan for the first time were found to be just as likely as older individuals to carry lice.
As a biodiversity indicator
The occurrence of the common cuckoo in Europe is a good surrogate for several facets of biodiversity, including the taxonomic and functional diversity of bird communities, and performs better than the traditional use of top predators as bioindicators. The reason is the strong correlation, due to co-evolutionary relationships, between the cuckoo's host species richness and overall bird species richness. This may be useful for citizen science.
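A surrogacy claim of this kind is typically checked by correlating cuckoo occurrence with independently surveyed species richness across sites. The data below are invented for demonstration; real analyses use large atlas datasets.

```python
from scipy.stats import spearmanr

# Hypothetical survey of eight sites: cuckoo presence (0/1) and
# the total number of bird species recorded at each site.
cuckoo_presence  = [0, 1, 1, 0, 1, 1, 0, 1]
species_richness = [21, 48, 52, 25, 44, 57, 19, 50]

rho, p = spearmanr(cuckoo_presence, species_richness)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```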
In culture
Aristotle was aware of the old tale that cuckoos turned into hawks in winter. The tale was an explanation for their absence outside the summer season, later accepted by Pliny the Elder in his Natural History. Aristotle rejected the claim, observing in his History of Animals that cuckoos do not have the predators' talons or hooked bills. These Classical era accounts were known to the Early Modern English naturalist, William Turner.
The 13th-century medieval English round, "Sumer Is Icumen In", celebrates the cuckoo as a sign of spring, the beginning of summer, in the first stanza, and in the chorus:
Middle English
Svmer is icumen in
Lhude sing cuccu
Groweþ sed
and bloweþ med
and springþ þe wde nu
Sing cuccu
Modern English
Summer has arrived,
Sing loudly, cuckoo!
The seed is growing
And the meadow is blooming,
And the wood is coming into leaf now,
Sing, cuckoo!
In England, William Shakespeare alludes to the common cuckoo's association with spring, and with cuckoldry, in the courtly springtime song in his play Love's Labour's Lost:
When daisies pied and violets blue
And lady-smocks all silver-white
And cuckoo-buds of yellow hue
Do paint the meadows with delight,
The cuckoo then, on every tree,
Mocks married men; for thus sings he:
"Cuckoo;
Cuckoo, cuckoo!" O, word of fear,
Unpleasing to a married ear!
In Europe, hearing the call of the common cuckoo is regarded as the first harbinger of spring. Many local legends and traditions are based on this. In Scotland, gowk stanes (cuckoo stones) are sometimes associated with the arrival of the first cuckoo of spring. "Gowk" is an old name for the common cuckoo in northern England, derived from the harsh repeated "gowk" call the bird makes when excited. The well-known cuckoo clock features a mechanical bird and is fitted with bellows and pipes that imitate the call of the common cuckoo. Cuckoos also feature in traditional rhymes, such as: "In April the cuckoo comes, In May she'll stay, In June she changes her tune, In July she prepares to fly, Come August, go she must; And if the cuckoo stays till September, It's as much as the oldest man can remember."
On Hearing the First Cuckoo in Spring is a symphonic poem for orchestra by Frederick Delius, incorporating a Norwegian folk melody.
Two English folk songs feature cuckoos. One usually called The Cuckoo starts:
The cuckoo is a fine bird and she sings as she flies,
She brings us good tidings, she tells us no lies.
She sucks little birds' eggs to make her voice clear,
And never sings cuckoo till the summer draws near
The second, "The Cuckoo's Nest" is a song about a courtship, with the eponymous (and of course, non-existent) nest serving as a metaphor for the vulva and its tangled "nest" of pubic hair.
Some like a girl who is pretty in the face
and some like a girl who is slender in the waist
But give me a girl who will wriggle and will twist
At the bottom of the belly lies the cuckoo's nest...
...Me darling, says she, I can do no such thing
For me mother often told me it was committing sin
Me maidenhead to lose and me sex to be abused
So have no more to do with me cuckoo's nest
One of the tales of the Wise Men of Gotham tells how they built a hedge round a tree in order to trap a cuckoo so that it would always be summer.
The theme music for film comedians Laurel and Hardy, titled "Dance of the Cuckoos" and composed by Marvin Hatley, was based on the call of the common cuckoo.
| Biology and health sciences | Cuculiformes and relatives | Animals |
207336 | https://en.wikipedia.org/wiki/Salivary%20gland | Salivary gland | The salivary glands in many vertebrates including mammals are exocrine glands that produce saliva through a system of ducts. Humans have three paired major salivary glands (parotid, submandibular, and sublingual), as well as hundreds of minor salivary glands. Salivary glands can be classified as serous, mucous, or seromucous (mixed).
In serous secretions, the main type of protein secreted is alpha-amylase, an enzyme that breaks down starch into maltose and glucose, whereas in mucous secretions, the main protein secreted is mucin, which acts as a lubricant.
In humans, 1200 to 1500 ml of saliva are produced every day. The secretion of saliva (salivation) is mediated by parasympathetic stimulation; acetylcholine is the active neurotransmitter and binds to muscarinic receptors in the glands, leading to increased salivation.
A proposed fourth pair of salivary glands, the tubarial glands, was first identified in 2020. They are named for their location, being positioned in front of and over the torus tubarius. However, this finding from one study is yet to be confirmed.
Structure
Parotid glands
The two parotid glands are major salivary glands wrapped around the mandibular ramus in humans. They are the largest of the salivary glands, secreting saliva to facilitate mastication and swallowing, and amylase to begin the digestion of starches. They are serous glands, secreting alpha-amylase (also known as ptyalin). Saliva enters the oral cavity via the parotid duct. The glands are located posterior to the mandibular ramus and anterior to the mastoid process of the temporal bone. They are clinically relevant in dissections of facial nerve branches while exposing the different lobes, since any iatrogenic lesion will result in loss of action or strength of the muscles involved in facial expression. They produce 20% of the total salivary content in the oral cavity. Mumps is a viral infection that typically affects the parotid glands.
Submandibular glands
The submandibular glands (previously known as submaxillary glands) are a pair of major salivary glands located beneath the lower jaws, superior to the digastric muscles. The secretion produced is a mixture of both serous fluid and mucus, and enters the oral cavity via the submandibular duct or Wharton duct. Around 70% of saliva in the oral cavity is produced by the submandibular glands, though they are much smaller than the parotid glands. This gland can usually be felt via palpation of the neck, as it is in the superficial cervical region and feels like a rounded ball. It is located about two fingers above the Adam's apple (laryngeal prominence) and about two inches apart under the chin.
Sublingual glands
The sublingual glands are a pair of major salivary glands located inferior to the tongue, anterior to the submandibular glands. The secretion produced is mainly mucous in nature, but it is categorized as a mixed gland. Unlike the other two major glands, the ductal system of the sublingual glands does not have intercalated ducts and usually does not have striated ducts, either, so saliva exits directly from 8-20 excretory ducts known as the Rivinus ducts. About 5% of saliva entering the oral cavity comes from these glands.
Tubarial salivary glands
The tubarial glands are suggested as a fourth pair of salivary glands, situated posteriorly in the nasopharynx and nasal cavity, consisting predominantly of mucous glands, with ducts opening into the dorsolateral pharyngeal wall. The glands were unknown until September 2020, when they were discovered by a group of Dutch scientists using prostate-specific membrane antigen PET-CT. This discovery may explain mouth dryness after radiotherapy despite the avoidance of the three major glands. However, these findings from just one study need to be confirmed. An interdisciplinary group of scientists has disagreed with the new discovery, arguing that what has been described is an accumulation of minor salivary glands.
Minor salivary glands
Around 800 to 1,000 minor salivary glands are located throughout the oral cavity within the submucosa of the oral mucosa, in the tissue of the buccal, labial, and lingual mucosa, the soft palate, the lateral parts of the hard palate, and the floor of the mouth, or between muscle fibers of the tongue. They are 1 to 2 mm in diameter and, unlike the major glands, are not encapsulated by connective tissue, only surrounded by it. Each gland usually has a number of acini connected in a tiny lobule. A minor salivary gland may have a common excretory duct with another gland, or may have its own excretory duct. Their secretion is mainly mucous in nature and has many functions, such as coating the oral cavity with saliva. Problems with dentures are sometimes associated with minor salivary glands if dry mouth is present. The minor salivary glands are innervated by the facial nerve (cranial nerve VII).
Von Ebner's glands
Von Ebner's glands are found in a trough circling the circumvallate papillae on the dorsal surface of the tongue near the terminal sulcus. They secrete a purely serous fluid that begins lipid hydrolysis. They also facilitate the perception of taste through secretion of digestive enzymes and proteins.
The arrangement of these glands around the circumvallate papillae provides a continuous flow of fluid over the great number of taste buds lining the sides of the papillae, and is important for dissolving the food particles to be tasted.
Nerve supply
Salivary glands are innervated, either directly or indirectly, by the parasympathetic and sympathetic arms of the autonomic nervous system. Parasympathetic stimulation evokes a copious flow of saliva.
Parasympathetic innervation to the salivary glands is carried via cranial nerves. The parotid gland receives its parasympathetic input from the glossopharyngeal nerve (CN IX) via the otic ganglion, while the submandibular and sublingual glands receive their parasympathetic input from the facial nerve (CN VII) via the submandibular ganglion. These nerves release acetylcholine and substance P, which activate the IP3 and DAG pathways respectively.
Direct sympathetic innervation of the salivary glands takes place via preganglionic nerves in the thoracic segments T1-T3, which synapse in the superior cervical ganglion with postganglionic neurons that release norepinephrine. This is received by β1-adrenergic receptors on the acinar and ductal cells of the salivary glands, leading to an increase in cyclic adenosine monophosphate (cAMP) levels and a corresponding increase in saliva secretion. Note that both parasympathetic and sympathetic stimuli thus increase salivary gland secretion; the difference lies in the composition of the saliva, since sympathetic stimulation particularly increases the secretion of amylase, which is produced by serous glands. The sympathetic nervous system also affects salivary gland secretions indirectly by innervating the blood vessels that supply the glands, resulting in vasoconstriction through the activation of α1 adrenergic receptors, lessening the saliva's water content.
Microanatomy
The gland is internally divided into lobules. Blood vessels and nerves enter the glands at the hilum and gradually branch out into the lobules.
Acini
Secretory cells are found in a group, or acinus. Each acinus is located at the terminal part of the gland connected to the ductal system, with many acini within each lobule of the gland. Each acinus consists of a single layer of cuboidal epithelial cells surrounding a lumen, a central opening where the saliva is deposited after being produced by the secretory cells. The three forms of acini are classified in terms of the type of epithelial cell present and the secretory product being produced: serous, mucoserous, and mucous.
Ducts
In the duct system, the lumina are formed by intercalated ducts, which in turn join to form striated ducts. These drain into ducts situated between the lobes of the gland (called interlobular ducts or secretory ducts). Such ducts are found in most major and minor glands (the sublingual gland may be an exception).
All of the human salivary glands terminate in the mouth, where the saliva proceeds to aid in digestion. The released saliva is quickly inactivated in the stomach by the acid that is present, but saliva also contains enzymes that are actually activated by stomach acid.
Gene and protein expression
About 20,000 protein-coding genes are expressed in human cells, and 60% of these genes are expressed in normal, adult salivary glands. Fewer than 100 genes are expressed more specifically in the salivary gland. The salivary gland-specific genes are mainly genes that encode secreted proteins; compared with other organs in the human body, the salivary gland has the highest fraction of genes encoding secreted proteins. The heterogeneous family of proline-rich human salivary glycoproteins, such as PRB1 and PRH1, are salivary gland-specific proteins with the highest level of expression. Examples of other specifically expressed proteins include the digestive amylase enzyme AMY1A, the mucin MUC7 and statherin, all of major importance for specific characteristics of saliva.
Aging
Aging of salivary glands shows some structural changes, such as:
Decrease in volume of acinar tissue
Increase in fibrous tissue
Increase in adipose tissue
Ductal hyperplasia and dilation
In addition, changes occur in salivary contents:
Decrease in concentration of secretory IgA
Decrease in the amount of mucin
However, no overall change in the amount of saliva secreted is seen.
Function
Salivary glands secrete saliva, which has many benefits for the oral cavity and health in general. The knowledge of normal salivary flow rate (SFR) is extremely important when treating dental patients. These benefits include:
Protection: Saliva consists of proteins (for example; mucins) that lubricate and protect both the soft and hard tissues of the oral cavity. Mucins are the principal organic constituents of mucus, the slimy viscoelastic material that coats all mucosal surfaces.
Buffering: In general, the higher the saliva flow rate, the faster the clearance and the higher the buffer capacity, hence better protection from dental caries (the underlying buffer chemistry is sketched after this list). Therefore, people with a slower rate of saliva secretion, combined with a low buffer capacity, have lessened salivary protection against microbes.
Pellicle formation: Saliva forms a pellicle on the surface of the tooth to prevent wearing. The film contains mucins and proline-rich glycoprotein from the saliva.
The proteins (statherin and proline-rich proteins) within the salivary pellicle inhibit demineralization and promote remineralization by attracting calcium ions.
Maintenance of tooth integrity: Demineralization occurs when enamel disintegrates due to the presence of acid. When this occurs, the buffering capacity of saliva (which increases with saliva flow rate) inhibits demineralization. Saliva can then begin to promote the remineralization of the tooth by strengthening the enamel with calcium and phosphate minerals.
Antimicrobial action: Saliva can prevent microbial growth based on the elements it contains. For example, lactoferrin in saliva binds naturally with iron; since many bacteria require iron to grow, sequestering it inhibits their growth. Antimicrobial peptides such as histatins inhibit the growth of Candida albicans and Streptococcus mutans. Salivary immunoglobulin A serves to aggregate oral bacteria such as S. mutans and prevent the formation of dental plaque.
Tissue repair: Saliva can encourage soft-tissue repair by decreasing clotting time and increasing wound contraction.
Digestion: Saliva contains amylase, which hydrolyses starch into glucose, maltose, and dextrin. As a result, saliva allows some digestion to occur before the food reaches the stomach.
Taste: Saliva acts as a solvent in which solid particles can dissolve and enter the taste buds through oral mucosa located on the tongue. These taste buds are found within foliate and circumvallate papillae, where minor salivary glands secrete saliva.
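As background to the buffering point above: the principal buffer in stimulated saliva is bicarbonate, and its behaviour follows the standard Henderson–Hasselbalch relation (a textbook relation, not stated explicitly in this article):

```latex
\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{HCO_3^-}]}{[\mathrm{CO_2(aq)}]},
\qquad \mathrm{p}K_a \approx 6.1
```

Higher flow rates raise the bicarbonate concentration, which raises both the pH and the buffer capacity of saliva, consistent with the flow-rate effect described above.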
Clinical significance
A sialolithiasis (a salivary calculus or stone) may cause blockage of the ducts, most commonly the submandibular ducts, causing pain and swelling of the gland.
Salivary gland dysfunction refers to either xerostomia (the symptom of dry mouth) or salivary gland hypofunction (reduced production of saliva); it is associated with significant impairment of quality of life. Following radiotherapy of the head and neck region, salivary gland dysfunction is a predictable side-effect. Saliva production may be pharmacologically stimulated by sialagogues such as pilocarpine and cevimeline. It can also be suppressed by so-called antisialagogues such as tricyclic antidepressants, SSRIs and antihypertensives, as well as by polypharmacy. A Cochrane review found there was no strong evidence that topical therapies are effective in relieving the symptoms of dry mouth.
Cancer treatments including chemotherapy and radiation therapy may impair salivary flow. Radiotherapy can cause permanent hyposalivation due to injury to the oral mucosa containing the salivary glands, resulting in xerostomia, whereas chemotherapy may cause only temporary salivary impairment. Furthermore, surgical removal of glands because of benign or malignant lesions may also impair function.
Graft versus host disease after allogeneic bone marrow transplantation may manifest as dry mouth and many small mucoceles. Salivary gland tumours may occur, including mucoepidermoid carcinoma, a malignant growth.
Clinical tests/investigations
A sialogram is a radiocontrast study of a salivary duct that may be used to investigate its function and for diagnosing Sjögren syndrome.
Other animals
The salivary glands of some species are modified to produce proteins; salivary amylase is found in many bird and mammal species (including humans, as noted above). Furthermore, the venom glands of venomous snakes, Gila monsters, and some shrews, are actually modified salivary glands. In other organisms such as insects, salivary glands are often used to produce biologically important proteins such as silk or glues, whilst fly salivary glands contain polytene chromosomes that have been useful in genetic research.
| Biology and health sciences | Gastrointestinal tract | Biology |
207397 | https://en.wikipedia.org/wiki/Blue%20straggler | Blue straggler | A blue straggler is a type of star that is more luminous and bluer than expected. Typically identified in a stellar cluster, they have a higher effective temperature than the main sequence turnoff point for the cluster, where ordinary stars begin to evolve towards the red giant branch. Blue stragglers were first discovered by Allan Sandage in 1953 while performing photometry of the stars in the globular cluster M3.
Description
Standard theories of stellar evolution hold that the position of a star on the Hertzsprung–Russell diagram should be determined almost entirely by the initial mass of the star and its age. In a cluster, stars all formed at approximately the same time, and thus in an H–R diagram for a cluster, all stars should lie along a clearly defined curve set by the age of the cluster, with the positions of individual stars on that curve determined solely by their initial mass. With masses two to three times that of the rest of the main-sequence cluster stars, blue stragglers seem to be exceptions to this rule. The resolution of this problem is likely related to interactions between two or more stars in the dense confines of the clusters in which blue stragglers are found. Blue stragglers are also found among field stars, although they are more difficult to disentangle from genuinely massive main-sequence stars. Field blue stragglers can, however, be identified in the Galactic halo, since all of the halo's surviving main-sequence stars are of low mass.
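To see why such masses are anomalous, consider a standard order-of-magnitude scaling for the main-sequence lifetime (a textbook approximation, not a result from any particular straggler study):

```latex
t_{\mathrm{MS}} \approx 10\,\mathrm{Gyr}\,\left(\frac{M}{M_{\odot}}\right)^{-2.5}
```

For a globular cluster roughly 12 Gyr old, the turnoff sits near 0.8 solar masses (the scaling gives about 17 Gyr for such stars, so they are only now leaving the main sequence), whereas a 1.6-solar-mass star would have exhausted its core hydrogen after only about 3 Gyr. A main-sequence star of that mass observed today therefore cannot have been born with it.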
Formation
Several explanations have been put forth to explain the existence of blue stragglers. The simplest is that blue stragglers formed later than the rest of the stars in the cluster, but evidence for this is limited. Another simple proposal is that blue stragglers are either field stars which are not actually members of the clusters to which they seem to belong, or field stars which were captured by the cluster. This too seems unlikely, as blue stragglers often reside at the very center of the clusters to which they belong. The most likely explanation is that blue stragglers are the result of stars that come too close to another star or a similar-mass object and collide. The newly formed star thus has a higher mass, and occupies a position on the H–R diagram that would otherwise be populated by genuinely young stars.
Cluster interactions
The two most viable explanations put forth for the existence of blue stragglers both involve interactions between cluster members. One explanation is that they are current or former binary stars that are in the process of merging or have already done so. The merger of two stars would create a single more massive star, potentially with a mass larger than that of stars at the main-sequence turn-off point. While a star born with a mass larger than that of stars at the turn-off point would evolve quickly off the main sequence, the components forming a more massive star (via merger) would thereby delay such a change. There is evidence in favor of this view, notably that blue stragglers appear to be much more common in dense regions of clusters, especially in the cores of globular clusters. Since there are more stars per unit volume, collisions and close encounters are far more likely in clusters than among field stars and calculations of the expected number of collisions are consistent with the observed number of blue stragglers.
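The collision-rate estimates mentioned above follow the standard stellar-dynamics form (a textbook sketch, not the specific calculation performed in the cited work). The mean time between collisions for a star is

```latex
t_{\mathrm{coll}} \simeq \frac{1}{n\,\sigma\,v}, \qquad
\sigma = \pi\,(r_1 + r_2)^2 \left[\, 1 + \frac{2\,G\,(m_1 + m_2)}{(r_1 + r_2)\,v^2} \,\right]
```

where n is the local stellar number density, v the velocity dispersion, r and m the stellar radii and masses, and the bracketed factor is the gravitational-focusing enhancement. The high densities of globular-cluster cores can bring this timescale down to the order of a cluster's age, while for typical field densities it vastly exceeds the age of the universe, consistent with blue stragglers concentrating in cluster cores.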
One way to test this hypothesis is to study the stellar pulsations of variable blue stragglers. The asteroseismological properties of merged stars may be measurably different from those of typical pulsating variables of similar mass and luminosity. However, the measurement of pulsations is very difficult, given the scarcity of variable blue stragglers, the small photometric amplitudes of their pulsations and the crowded fields in which these stars are often found. Some blue stragglers have been observed to rotate quickly, with one example in 47 Tucanae observed to rotate 75 times faster than the Sun, which is consistent with formation by collision.
The other explanation relies on mass transfer between two stars born in a binary star system. The more massive of the two stars in the system will evolve first and as it expands, will overflow its Roche lobe. Mass will quickly transfer from the initially more massive companion onto the less massive; like the collision hypothesis, this would explain why there are main-sequence stars more massive than other stars in the cluster which have already evolved off the main sequence. Observations of blue stragglers have found that some have significantly less carbon and oxygen in their photospheres than is typical, which is evidence of their outer material having been dredged up from the interior of a companion.
Overall, there is evidence in favor of both collisions and mass transfer between binary stars. In M3, 47 Tucanae, and NGC 6752, both mechanisms seem to be operating, with collisional blue stragglers occupying the cluster cores and mass transfer blue stragglers at the outskirts. The discovery of low-mass white dwarf companions around two blue stragglers in the Kepler field suggests these two blue stragglers gained mass via stable mass transfer.
Field formation
Blue stragglers are also found among field stars, as a result of close binary interaction. Since the fraction of close binaries increases with decreasing metallicity, blue stragglers are increasingly likely to be found in metal-poor stellar populations. Identifying blue stragglers among field stars, however, is more difficult than in stellar clusters, because field stars span a mix of ages and metallicities. Field blue stragglers can nevertheless be identified in old stellar populations, such as the Galactic halo or dwarf galaxies.
Red and yellow stragglers
"Yellow stragglers" or "red stragglers" are stars with colors between that of the turnoff and the red-giant branch but brighter than the subgiant branch. Such stars have been identified in open and globular star clusters. These stars may be former blue straggler stars that are now evolving toward the giant branch.
| Physical sciences | Stellar astronomy | Astronomy |
19019270 | https://en.wikipedia.org/wiki/Sexually%20transmitted%20infection | Sexually transmitted infection | A sexually transmitted infection (STI), also referred to as a sexually transmitted disease (STD) or by the older term venereal disease (VD), is an infection that is spread by sexual activity, especially vaginal intercourse, anal sex, oral sex, or sometimes manual sex. STIs often cause no symptoms at first, which creates a risk of unknowingly passing them on to others. The term sexually transmitted infection is generally preferred over sexually transmitted disease or venereal disease, as it includes cases with no symptomatic disease. Symptoms and signs of STIs may include vaginal discharge, penile discharge, ulcers on or around the genitals, and pelvic pain. Some STIs can cause infertility.
Bacterial STIs include chlamydia, gonorrhea, and syphilis. Viral STIs include genital warts, genital herpes, and HIV/AIDS. Parasitic STIs include trichomoniasis. Most STIs are treatable and curable; of the most common infections, syphilis, gonorrhea, chlamydia, and trichomoniasis are curable, while HIV/AIDS and genital herpes are not. Some vaccinations may decrease the risk of certain infections, including hepatitis B and some types of HPV. Safe sex practices, such as using condoms, having a smaller number of sexual partners, and being in a relationship in which each person only has sex with the other, also decrease the risk of STIs. Comprehensive sex education may also be useful.
STI diagnostic tests are usually easily available in the developed world, but they are often unavailable in the developing world. There is often shame and stigma associated with STIs. In 2015, STIs other than HIV resulted in 108,000 deaths worldwide. Globally, in 2015, about 1.1 billion people had STIs other than HIV/AIDS. About 500 million have either syphilis, gonorrhea, chlamydia, or trichomoniasis. At least an additional 530 million have genital herpes, and 290 million women have human papillomavirus. Historical documentation of STIs dates back to at least the Ebers Papyrus (c. 1550 BCE) and the Hebrew Bible/Old Testament (8th/7th century BCE).
Signs and symptoms
Not all STIs are symptomatic, and symptoms may not appear immediately after infection. In some instances a disease can be carried with no symptoms, which leaves a greater risk of passing the disease on to others. Depending on the disease, some untreated STIs can lead to infertility, chronic pain or death.
The presence of an STI in prepubescent children may indicate sexual abuse.
Cause
Transmission
A sexually transmitted infection present in a pregnant woman may be passed on to the infant before or after birth.
Bacterial
Chancroid (Haemophilus ducreyi)
Chlamydia (Chlamydia trachomatis)
Gonorrhea (Neisseria gonorrhoeae)
Granuloma inguinale, or donovanosis (Klebsiella granulomatis)
Mycoplasma genitalium
Mycoplasma hominis
Syphilis (Treponema pallidum)
Ureaplasma infection
Viral
Viral hepatitis (hepatitis B virus)—saliva, venereal fluids. (Note: hepatitis A and hepatitis E are transmitted via the fecal–oral route; hepatitis C is rarely sexually transmissible, and the route of transmission of hepatitis D (only possible if infected with B) is uncertain, but may include sexual transmission.)
Herpes simplex (Herpes simplex virus 1, 2)—skin and mucosal contact, transmissible with or without visible blisters
HIV (Human Immunodeficiency Virus)—venereal fluids, semen, breast milk, blood
HPV (Human Papillomavirus)—skin and mucosal contact. 'High risk' types of HPV cause almost all cervical cancers, as well as some anal, penile, and vulvar cancer. Some other types of HPV cause genital warts.
Molluscum contagiosum (molluscum contagiosum virus MCV)—close contact
Zika virus
Parasites
Crab louse, colloquially known as "crabs" or "pubic lice" (Pthirus pubis); the infestation and accompanying inflammation is called pediculosis pubis
Scabies (Sarcoptes scabiei)
Trichomoniasis (Trichomonas vaginalis), colloquially known as "trich"
Main types
Sexually transmitted infections include:
Chlamydia is a sexually transmitted infection caused by the bacterium Chlamydia trachomatis. In women, symptoms may include abnormal vaginal discharge, burning during urination, and bleeding between periods, although most women do not experience any symptoms. Symptoms in men include pain when urinating and abnormal discharge from the penis. If left untreated, chlamydia can infect the urinary tract and, in women, potentially lead to pelvic inflammatory disease (PID). PID can cause serious problems during pregnancy and even has the potential to cause infertility. It can cause a woman to have a potentially deadly ectopic pregnancy, in which the fertilized egg implants outside of the uterus. However, chlamydia can be cured with antibiotics.
The two most common forms of herpes are caused by infection with herpes simplex virus (HSV). HSV-1 is typically acquired orally and causes cold sores; HSV-2 is usually acquired during sexual contact and affects the genitals; however, either strain may affect either site. Some people are asymptomatic or have very mild symptoms. Those who do experience symptoms usually notice them 2 to 20 days after exposure, and an outbreak typically lasts 2 to 4 weeks. Symptoms can include small fluid-filled blisters, headaches, backaches, itching or tingling sensations in the genital or anal area, pain during urination, flu-like symptoms, swollen glands, or fever. Herpes is spread through skin contact with a person infected with the virus, and the virus affects the areas where it entered the body. This can occur through kissing, vaginal intercourse, oral sex, or anal sex. The virus is most infectious during times when there are visible symptoms; however, those who are asymptomatic can still spread the virus through skin contact. The initial infection and symptoms are usually the most severe because the body does not yet have any antibodies built up. After the primary attack, one might have recurring attacks that are milder, or might not have future attacks at all. There is no cure for the disease, but there are antiviral medications, such as valacyclovir (Valtrex), that treat its symptoms and lower the risk of transmission. Although HSV-1 is typically the "oral" version of the virus and HSV-2 is typically the "genital" version, a person with HSV-1 orally can transmit that virus to a partner genitally. The virus, of either type, will settle into a nerve bundle either at the top of the spine, producing the "oral" outbreak, or into a second nerve bundle at the base of the spine, producing the genital outbreak.
The human papillomavirus (HPV) is the most common STI in the United States. There are more than 40 different strains of HPV, and many do not cause any health problems. In 90% of cases, the body's immune system clears the infection naturally within two years. Some cases may not be cleared and can lead to genital warts (bumps around the genitals that can be small or large, raised or flat, or shaped like cauliflower) or to cervical cancer and other HPV-related cancers. Symptoms might not show up until advanced stages. It is important for women to get Pap smears in order to check for and treat cancers. There are also two vaccines available for women (Cervarix and Gardasil) that protect against the types of HPV that cause cervical cancer. HPV can be passed through genital-to-genital contact as well as during oral sex. The infected partner might not have any symptoms.
Gonorrhea is caused by the bacterium Neisseria gonorrhoeae, which lives on moist mucous membranes in the urethra, vagina, rectum, mouth, throat, and eyes. The infection can spread through contact with the penis, vagina, mouth, or anus. Symptoms of gonorrhea usually appear two to five days after contact with an infected partner; however, some men might not notice symptoms for up to a month. Symptoms in men include burning and pain while urinating, increased urinary frequency, discharge from the penis (white, green, or yellow in color), a red or swollen urethra, swollen or tender testicles, or sore throat. Symptoms in women may include vaginal discharge, burning or itching while urinating, painful sexual intercourse, and severe pain in the lower abdomen or fever if the infection spreads to the fallopian tubes; however, many women do not show any symptoms. Antibiotic-resistant strains of gonorrhea are a significant concern, but most cases can be cured with existing antibiotics.
Syphilis is an STI caused by the bacterium Treponema pallidum. Untreated, it can lead to complications and death. Clinical manifestations of syphilis include ulceration of the urogenital tract, mouth, or rectum; if left untreated, the symptoms worsen. In recent years, the prevalence of syphilis has declined in Western Europe but increased in Eastern Europe (the former Soviet states). A high incidence of syphilis can be found in places such as Cameroon, Cambodia, and Papua New Guinea. Syphilis infections are also increasing in the United States.
Trichomoniasis is a common STI that is caused by infection with a protozoan parasite called Trichomonas vaginalis. Trichomoniasis affects both women and men, but symptoms are more common in women. Most patients are treated with an antibiotic called metronidazole, which is very effective.
HIV (human immunodeficiency virus) damages the body's immune system, which interferes with its ability to fight off disease-causing agents. The virus kills CD4 cells, which are white blood cells that help fight off various infections. HIV is carried in body fluids and is spread by sexual activity. It can also be spread by contact with infected blood, by breastfeeding, during childbirth, and from mother to child during pregnancy. When HIV is at its most advanced stage, an individual is said to have AIDS (acquired immunodeficiency syndrome). There are different stages in the progression of an HIV infection: primary infection, asymptomatic infection, symptomatic infection, and AIDS. In the primary infection stage, an individual will have flu-like symptoms (headache, fatigue, fever, muscle aches) for about two weeks. In the asymptomatic stage, symptoms usually disappear, and the patient can remain asymptomatic for years. When HIV progresses to the symptomatic stage, the immune system is weakened and has a low count of CD4+ T cells. When the HIV infection becomes life-threatening, it is called AIDS. People with AIDS fall prey to opportunistic infections and may die as a result. When the disease was first discovered in the 1980s, those who had AIDS were not likely to live longer than a few years. There are now antiretroviral drugs (ARVs) available to treat HIV infection. There is no known cure for HIV or AIDS, but the drugs suppress the virus; by suppressing the amount of virus in the body, people can lead longer and healthier lives. Even though their virus levels may be low, they can still spread the virus to others.
Viruses in semen
Twenty-seven different viruses have been identified in semen. Whether these viruses are transmitted via semen, and whether they cause disease, is often uncertain. Some of these microbes are known to be sexually transmitted.
Pathophysiology
Many STIs are (more easily) transmitted through the mucous membranes of the penis, vulva, rectum, urinary tract and (less often, depending on the type of infection) the mouth, throat, respiratory tract, and eyes. The visible membrane covering the head of the penis is a mucous membrane, though it produces no mucus (similar to the lips of the mouth). Mucous membranes differ from skin in that they allow certain pathogens into the body. The amount of contact with infective sources needed to cause infection varies with each pathogen, but in all cases disease may result from even light contact between fluid carriers, such as venereal fluids, and a mucous membrane.
Some STIs such as HIV can be transmitted from mother to child either during pregnancy or breastfeeding.
Healthcare professionals suggest safer sex, such as the use of condoms, as a reliable way of decreasing the risk of contracting sexually transmitted infections during sexual activity, but safer sex cannot be considered to provide complete protection from an STI. The transfer of and exposure to bodily fluids, such as through blood transfusions and other blood products, sharing injection needles, needle-stick injuries (when medical staff are inadvertently jabbed or pricked with needles during medical procedures), sharing tattoo needles, and childbirth, are other avenues of transmission. These different means put certain groups, such as medical workers, haemophiliacs, and drug users, particularly at risk.
It is possible to be an asymptomatic carrier of sexually transmitted infections. In particular, sexually transmitted infections in women often cause the serious condition of pelvic inflammatory disease.
Diagnosis
Testing may be for a single infection, or consist of a number of tests for a range of STIs, including tests for syphilis, trichomonas, gonorrhea, chlamydia, herpes, hepatitis, and HIV. No procedure tests for all infectious agents.
STI tests may be used for a number of reasons:
as a diagnostic test to determine the cause of symptoms or illness
as a screening test to detect asymptomatic or presymptomatic infections
as a check that prospective sexual partners are free of disease before they engage in sex without safer sex precautions (for example, when starting a long term mutually monogamous sexual relationship, in fluid bonding, or for procreation).
as a check prior to or during pregnancy, to prevent harm to the baby
as a check after birth, to check that the baby has not caught an STI from the mother
to prevent the use of infected donated blood or organs
as part of the process of contact tracing from a known infected individual
as part of mass epidemiological surveillance
Early identification and treatment result in a lower chance of spreading the disease, and for some conditions may improve the outcomes of treatment. There is often a window period after initial infection during which an STI test will be negative. During this period, the infection may be transmissible. The duration of this period varies depending on the infection and the test. Diagnosis may also be delayed by reluctance of the infected person to seek a medical professional. One report indicated that people turn to the Internet rather than to a medical professional for information on STIs to a higher degree than for other sexual problems.
Classification
Until the 1990s, STIs were commonly known as venereal diseases, an antiquated euphemism derived from the Latin venereus, the adjectival form of Venus, the Roman goddess of love. However, in the post-classical education era, the euphemistic effect was entirely lost, and the common abbreviation "VD" held only negative connotations. Other former euphemisms for STIs include "blood diseases" and "social diseases". The present euphemism lies in the use of the initials "STI" rather than in the words they represent. The World Health Organization (WHO) has recommended the more inclusive term sexually transmitted infection since 1999. Public health officials originally introduced the term sexually transmitted infection, which clinicians increasingly use alongside sexually transmitted disease to distinguish symptomless infection from symptomatic disease.
Prevention
Strategies for reducing STI risk include: vaccination, mutual monogamy, reducing the number of sexual partners, and abstinence. Also potentially helpful is behavioral counseling for sexually active adolescents and for adults who are at increased risk. Such interactive counseling, which can be resource-intensive, is directed at a person's risk, the situations in which risk occurs, and the use of personalized goal-setting strategies.
The most effective way to prevent sexual transmission of STIs is to avoid contact of body parts or fluids which can lead to transfer with an infected partner. Not all sexual activities involve contact: cybersex, phone sex, or masturbation from a distance are methods of avoiding contact. Proper use of condoms reduces contact and risk. Although a condom is effective in limiting exposure, some disease transmission may occur even with a condom. Both partners can get tested for STIs before initiating sexual contact, or before resuming contact if a partner engaged in contact with someone else. Many infections are not detectable immediately after exposure, so enough time must be allowed between possible exposures and testing for the tests to be accurate. Certain STIs, particularly certain persistent viruses like HPV, may be impossible to detect.
Some treatment facilities use in-home test kits and have the person return the test for follow-up. Other facilities strongly encourage that those previously infected return to ensure that the infection has been eliminated. Novel strategies to foster re-testing have been the use of text messaging and email as reminders. These types of reminders are now used in addition to phone calls and letters. After obtaining a sexual history, a healthcare provider can encourage risk reduction by providing prevention counseling. Prevention counseling is most effective if provided in a nonjudgmental and empathetic manner appropriate to the person's culture, language, gender, sexual orientation, age, and developmental level. Prevention counseling for STIs is usually offered to all sexually active adolescents and to all adults who have received a diagnosis, have had an STI in the past year, or have multiple sex partners.
Vaccines
Vaccines are available that protect against some viral STIs, such as hepatitis A, hepatitis B, and some types of HPV. Vaccination before initiation of sexual contact is advised to ensure maximal protection. The development of vaccines to protect against gonorrhea is ongoing.
Condoms
Condoms and female condoms only provide protection when used properly as a barrier, and only to and from the area that they cover. Uncovered areas are still susceptible to many STIs.
In the case of HIV, sexual transmission routes almost always involve the penis, as HIV cannot spread through unbroken skin; therefore, properly shielding the penis with a correctly worn condom from the vagina or anus effectively stops HIV transmission. Direct transmission of HIV from infected fluid to broken skin would not be considered "sexually transmitted" but can still theoretically occur during sexual contact. This can be avoided simply by not engaging in sexual contact when open, bleeding wounds are present.
Other STIs, even viral infections, can be prevented with the use of latex, polyurethane or polyisoprene condoms as a barrier. Some microorganisms and viruses are small enough to pass through the pores in natural skin condoms but are still too large to pass through latex or synthetic condoms.
Proper male condom usage entails:
Not putting the condom on too tightly at the tip, by leaving room for ejaculate; putting the condom on too tightly can and often does lead to failure
Wearing a condom too loosely can defeat the barrier
Avoiding inverting or spilling a condom once worn, whether it has ejaculate in it or not
Discarding a condom that was unrolled on the wrong side, as it may no longer be effective
Being careful with the condom if handling it with long nails
Avoiding the use of oil-based lubricants (or anything with oil in it) with latex condoms, as oil can eat holes into them
Using flavored condoms for oral sex only, as the sugar in the flavoring can lead to yeast infections if used to penetrate
In order to best protect oneself and the partner from STIs, the old condom and its contents are to be treated as infectious and properly disposed of. A new condom is used for each act of intercourse, as multiple usages increase the chance of breakage, defeating the effectiveness as a barrier.
In the case of female condoms, the device consists of two rings, one in each terminal portion. The inner ring fits snugly over the cervix and the outer ring remains outside the vagina, covering the vulva. This system provides some protection of the external genitalia.
Other
The cap was developed after the cervical diaphragm. Both cover the cervix; the main difference between them is that the cap must be used only once, with a new one for each sexual act, whereas the diaphragm can be used more than once. These two devices partially protect against STIs (they do not protect against HIV).
Researchers had hoped that nonoxynol-9, a vaginal microbicide, would help decrease STI risk. Trials, however, have found it ineffective, and it may put women at a higher risk of HIV infection. There is evidence that vaginal dapivirine probably reduces HIV infection in women who have sex with men; other types of vaginal microbicides have not demonstrated effectiveness against HIV or STIs.
There is little evidence that school-based interventions, such as sexual and reproductive health education programmes on contraceptive choices and condoms, are effective at improving the sexual and reproductive health of adolescents. Incentive-based programmes may reduce adolescent pregnancy, but more data are needed to confirm this.
Screening
Specific age groups, persons who participate in risky sexual behavior, or those who have certain health conditions may require screening. The CDC recommends that sexually active women under the age of 25, and those over 25 who are at risk, be screened for chlamydia and gonorrhea yearly. Appropriate times for screening are during regular pelvic examinations and preconception evaluations. Nucleic acid amplification tests are the recommended method of diagnosis for gonorrhea and chlamydia. This can be done on urine in both men and women, on vaginal or cervical swabs in women, or on urethral swabs in men. Screening can be performed:
to assess the presence of infection and prevent tubal infertility in women
during the initial evaluation before infertility treatment
to identify HIV infection
for men who have sex with men
for those who may have been exposed to hepatitis C (HCV)
Management
In the case of rape, the person can be treated prophylactically with antibiotics.
An option for treating partners of patients (index cases) diagnosed with chlamydia or gonorrhea is patient-delivered partner therapy: the clinical practice of treating the sex partners of index cases by providing prescriptions or medications to the patient to take to their partner, without the health care provider first examining the partner. In terms of preventing reinfection, treating both the patient and the patient's sexual partner is more successful than treating the patient alone. Reinfection rates appear similar whether the partner is given medication without a medical examination or seeks treatment after being notified by the patient.
Epidemiology
In 2008, it was estimated that 500 million people were infected with either syphilis, gonorrhea, chlamydia or trichomoniasis. At least an additional 530 million people have genital herpes and 290 million women have human papillomavirus (HPV). STIs other than HIV resulted in 142,000 deaths in 2013. In the United States there were 19 million new cases of sexually transmitted infections in 2010.
In 2010, 19 million new cases of sexually transmitted infections occurred in women in the United States. A 2008 CDC study found that 25–40% of U.S. teenage girls have a sexually transmitted infection. Out of a population of almost 295,270,000 people, there were 110 million new and existing cases of eight sexually transmitted infections.
Over 400,000 sexually transmitted infections were reported in England in 2017, about the same as in 2016, but there were more than 20% increases in confirmed cases of gonorrhoea and syphilis. Since 2008 syphilis cases have risen by 148%, from 2,874 to 7,137, mostly among men who have sex with men. The number of first cases of genital warts in 2017 among girls aged 15–17 years was just 441, 90% less than in 2009 – attributed to the national HPV immunisation programme.
AIDS is among the leading causes of death in present-day Sub-Saharan Africa. HIV/AIDS is transmitted primarily via unprotected sexual intercourse. More than 1.1 million persons are living with HIV/AIDS in the United States, and it disproportionately impacts African Americans. Hepatitis B is also considered a sexually transmitted infection because it can be spread through sexual contact. The highest rates are found in Asia and Africa and lower rates are in the Americas and Europe. Approximately two billion people worldwide have been infected with the hepatitis B virus.
History
The first well-recorded European outbreak of what is now known as syphilis occurred in 1494 when it broke out among French troops besieging Naples in the Italian War of 1494–98. The disease may have originated from the Columbian Exchange. From Naples, the disease swept across Europe, killing more than five million people. As Jared Diamond describes it, "[W]hen syphilis was first definitely recorded in Europe in 1495, its pustules often covered the body from the head to the knees, caused flesh to fall from people's faces, and led to death within a few months," rendering it far more fatal than it is today. Diamond concludes, "[B]y 1546, the disease had evolved into the disease with the symptoms so well known to us today." Gonorrhea is recorded at least up to 700 years ago and associated with a district in Paris formerly known as "Le Clapiers". This is where the prostitutes were to be found at that time.
Prior to the invention of modern medicines, sexually transmitted infections were generally incurable, and treatment was limited to treating the symptoms of the infection. The first voluntary hospital for STIs was founded in 1746 at London Lock Hospital. Treatment was not always voluntary: in the second half of the 19th century, the Contagious Diseases Acts were used to arrest suspected prostitutes. In 1924, a number of states concluded the Brussels Agreement, whereby states agreed to provide free or low-cost medical treatment at ports for merchant seamen with STIs. A proponent of these approaches was Nora Wattie, OBE, Venereal Diseases Officer in Glasgow from 1929, who encouraged contact tracing and voluntary treatment rather than the prevailing, more judgemental approach, and who published her own research on improving sex education and maternity care.
The first effective treatment for a sexually transmitted infection was salvarsan, a treatment for syphilis. With the discovery of antibiotics, a large number of sexually transmitted infections became easily curable, and this, combined with effective public health campaigns against STIs, led to a public perception during the 1960s and 1970s that they had ceased to be a serious medical threat.
During this period, the importance of contact tracing in treating STIs was recognized. By tracing the sexual partners of infected individuals, testing them for infection, treating the infected and tracing their contacts, in turn, STI clinics could effectively suppress infections in the general population.
In the 1980s, first genital herpes and then AIDS emerged into the public consciousness as sexually transmitted infections that could not be cured by modern medicine. AIDS, in particular, has a long asymptomatic period—during which time HIV (the human immunodeficiency virus, which causes AIDS) can replicate and the disease can be transmitted to others—followed by a symptomatic period, which leads rapidly to death unless treated. HIV/AIDS entered the United States from Haiti in about 1969. Recognition that AIDS threatened a global pandemic led to public information campaigns and the development of treatments that allow AIDS to be managed by suppressing the replication of HIV for as long as possible. Contact tracing continues to be an important measure, even when diseases are incurable, as it helps to contain infection.
| Biology and health sciences | Illness and injury | null |
1705831 | https://en.wikipedia.org/wiki/Earth%27s%20crust | Earth's crust | Earth's crust is its thick outer shell of rock, comprising less than one percent of the planet's radius and volume. It is the top component of the lithosphere, a solidified division of Earth's layers that includes the crust and the upper part of the mantle. The lithosphere is broken into tectonic plates whose motion allows heat to escape the interior of Earth into space.
The crust lies on top of the mantle, a configuration that is stable because the upper mantle is made of peridotite and is therefore significantly denser than the crust. The boundary between the crust and mantle is conventionally placed at the Mohorovičić discontinuity, a boundary defined by a contrast in seismic velocity.
The temperature of the crust increases with depth, reaching values typically in the range from about to at the boundary with the underlying mantle. The temperature increases by as much as for every kilometer locally in the upper part of the crust.
Composition
The crust of Earth is of two distinct types:
Continental: 25–70 km (about 15–44 mi) thick and mostly composed of less dense, more felsic rocks, such as granite. In a few places, such as the Tibetan Plateau, the Altiplano, and the eastern Baltic Shield, the continental crust is thicker (50–80 km (30–50 mi)).
Oceanic: 5–10 km (3–6 mi) thick and composed primarily of denser, more mafic rocks, such as basalt, diabase, and gabbro.
The average thickness of the crust is about 15–20 km (9–12 mi).
Because both the continental and oceanic crust are less dense than the mantle below, both types of crust "float" on the mantle. The surface of the continental crust is significantly higher than the surface of the oceanic crust, due to the greater buoyancy of the thicker, less dense continental crust (an example of isostasy). As a result, the continents form high ground surrounded by deep ocean basins.
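As a hedged worked example of that isostatic balance (Airy-type isostasy with round-number densities of $\rho_c \approx 2.8\ \mathrm{g/cm^3}$ for continental crust and $\rho_m \approx 3.3\ \mathrm{g/cm^3}$ for mantle, assumed for illustration rather than quoted from this passage), a floating crustal column of thickness $t_c$ stands above the level of the mantle by

$$ h \approx t_c\left(1 - \frac{\rho_c}{\rho_m}\right) \approx 35\ \mathrm{km} \times \left(1 - \frac{2.8}{3.3}\right) \approx 5\ \mathrm{km}, $$

exactly as an iceberg floats in water. Running the same estimate for a thin, denser oceanic column gives a freeboard of well under 1 km, which is why continental surfaces stand several kilometres above the ocean floors.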
The continental crust has an average composition similar to that of andesite, though the composition is not uniform, with the upper crust averaging a more felsic composition similar to that of dacite, while the lower crust averages a more mafic composition resembling basalt. The most abundant minerals in Earth's continental crust are feldspars, which make up about 41% of the crust by weight, followed by quartz at 12%, and pyroxenes at 11%.
Apart from these major components, all other constituents except water occur only in very small quantities and total less than 1%.
Continental crust is enriched in incompatible elements compared to the basaltic ocean crust and much enriched compared to the underlying mantle. The most incompatible elements are enriched by a factor of 50 to 100 in the continental crust relative to primitive mantle rock, while oceanic crust is enriched with incompatible elements by a factor of about 10.
The estimated average density of the continental crust is 2.835 g/cm3, with density increasing with depth from an average of 2.66 g/cm3 in the uppermost crust to 3.1 g/cm3 at the base of the crust.
In contrast to the continental crust, the oceanic crust is composed predominantly of pillow lava and sheeted dikes with the composition of mid-ocean ridge basalt, with a thin upper layer of sediments and a lower layer of gabbro.
Formation and evolution
Earth formed approximately 4.6 billion years ago from a disk of dust and gas orbiting the newly formed Sun. It formed via accretion, where planetesimals and other smaller rocky bodies collided and stuck, gradually growing into a planet. This process generated an enormous amount of heat, which caused early Earth to melt completely. As planetary accretion slowed, Earth began to cool, forming its first crust, called a primary or primordial crust. This crust was likely repeatedly destroyed by large impacts, then reformed from the magma ocean left by the impact. None of Earth's primary crust has survived to today; all was destroyed by erosion, impacts, and plate tectonics over the past several billion years.
Since then, Earth has been forming a secondary and a tertiary crust, which correspond to oceanic and continental crust, respectively. Secondary crust forms at mid-ocean spreading centers, where partial melting of the underlying mantle yields basaltic magmas and new ocean crust forms. This "ridge push" is one of the driving forces of plate tectonics, and it is constantly creating new ocean crust. Consequently, old crust must be destroyed somewhere, so opposite a spreading center there is usually a subduction zone: a trench where an ocean plate is sinking back into the mantle. This constant process of creating new ocean crust and destroying old ocean crust means that the oldest ocean crust on Earth today is only about 200 million years old.
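As a rough consistency check on those numbers (the distances used are illustrative assumptions, not figures from this article): ocean floor that is about 200 million years old and now lies roughly 3000 km from the ridge that produced it implies an average spreading rate per flank of

$$ v \sim \frac{3000\ \mathrm{km}}{200\ \mathrm{Myr}} \approx 15\ \mathrm{mm/yr}, $$

comfortably within the range of tens of millimetres per year measured at present-day mid-ocean ridges.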
In contrast, the bulk of the continental crust is much older. The oldest continental crustal rocks on Earth have ages in the range from about 3.7 to 4.28 billion years and have been found in the Narryer Gneiss Terrane in Western Australia, in the Acasta Gneiss in the Northwest Territories on the Canadian Shield, and on other cratonic regions such as those on the Fennoscandian Shield. Some zircon with age as great as 4.3 billion years has been found in the Narryer Gneiss Terrane. Continental crust is a tertiary crust, formed at subduction zones through recycling of subducted secondary (oceanic) crust.
The average age of Earth's current continental crust has been estimated to be about 2.0 billion years. Most crustal rocks formed before 2.5 billion years ago are located in cratons. Such old continental crust and the underlying mantle asthenosphere are less dense than elsewhere on Earth and so are not readily destroyed by subduction. Formation of new continental crust is linked to periods of intense orogeny, which coincide with the formation of supercontinents such as Rodinia, Pangaea, and Gondwana. The crust forms in part by aggregation of island arcs, including granite and metamorphic fold belts, and it is preserved in part by depletion of the underlying mantle to form buoyant lithospheric mantle. Crustal movement on continents may result in earthquakes, while movement under the seabed can lead to tsunamis.
| Physical sciences | Geology: General | Earth science |